A spectral approach for discrete dislocation dynamics simulations of nanoindentation
Bertin, Nicolas; Glavas, Vedran; Datta, Dibakar; Cai, Wei
2018-07-01
We present a spectral approach to perform nanoindentation simulations using three-dimensional nodal discrete dislocation dynamics. The method relies on a two-step approach. First, the contact problem between an indenter of arbitrary shape and an isotropic elastic half-space is solved using a spectral iterative algorithm, and the contact pressure is fully determined on the half-space surface. The contact pressure is then used as a boundary condition of the spectral solver to determine the resulting stress field produced in the simulation volume. In both stages, the mechanical fields are decomposed into Fourier modes and are efficiently computed using fast Fourier transforms. To further improve the computational efficiency, the method is coupled with a subcycling integrator, and a special approach is devised to approximate the displacement field associated with surface steps. As a benchmark, the method is used to compute the response of an elastic half-space for different types of indenters. An example of a dislocation dynamics nanoindentation simulation with a complex initial microstructure is presented.
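As a hedged aside, the half-space stage of such spectral contact solvers can be sketched compactly: in Fourier space the surface normal displacement of an isotropic elastic half-space relates to the applied pressure through the kernel 2(1−ν²)/(E|k|). The grid, elastic constants and pressure patch below are invented for illustration; this is not the authors' code.

```python
import numpy as np

def surface_displacement(p, L, E=1e9, nu=0.3):
    # spectral half-space relation: u_z(k) = 2*(1 - nu^2) / (E*|k|) * p(k)
    n = p.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kn = np.hypot(kx, ky)
    kn[0, 0] = np.inf                 # drop the undefined zero-wavenumber mode
    u_hat = 2.0 * (1.0 - nu**2) / (E * kn) * np.fft.fft2(p)
    return np.real(np.fft.ifft2(u_hat))

n, L = 64, 1.0
p = np.zeros((n, n))
p[28:36, 28:36] = 1e6                 # square pressure patch (Pa), invented
u = surface_displacement(p, L)
print(u[31, 31] > u[0, 0])            # displacement is largest under the patch
```

The entire solve is two FFTs and one multiplication per field, which is what makes the iterative contact algorithm cheap.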
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Simulating high-frequency seismograms in complicated media: A spectral approach
International Nuclear Information System (INIS)
Orrey, J.L.; Archambeau, C.B.
1993-01-01
The main attraction of using a spectral method instead of a conventional finite difference or finite element technique for full-wavefield forward modeling in elastic media is the increased accuracy of a spectral approximation. While a finite difference method accurate to second order typically requires 8 to 10 computational grid points to resolve the smallest wavelengths on a 1-D grid, a spectral method that approximates the wavefield by trigonometric functions theoretically requires only 2 grid points per minimum wavelength and produces no numerical dispersion from the spatial discretization. The resultant savings in computer memory, which is very significant in 2 and 3 dimensions, allows for larger scale and/or higher frequency simulations.
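The accuracy claim above is easy to demonstrate. A minimal sketch with invented parameters, comparing a second-order centered difference against an FFT-based spectral derivative of the same function on the same coarse periodic grid:

```python
import numpy as np

def fd2_derivative(u, dx):
    # second-order centered finite difference on a periodic grid
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def spectral_derivative(u, L):
    # differentiate by multiplying Fourier coefficients by i*k
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=L / u.size)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

L = 2.0 * np.pi
n = 16                          # only a handful of points per wavelength
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(3.0 * x)
exact = 3.0 * np.cos(3.0 * x)   # analytic derivative

err_fd = np.max(np.abs(fd2_derivative(u, L / n) - exact))
err_sp = np.max(np.abs(spectral_derivative(u, L) - exact))
print(err_fd, err_sp)           # spectral error is near machine precision
```

On this grid the finite-difference error is of order one, while the spectral derivative is exact up to round-off, consistent with the "2 grid points per minimum wavelength" argument.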
Spectral Methods in Numerical Plasma Simulation
DEFF Research Database (Denmark)
Coutsias, E.A.; Hansen, F.R.; Huld, T.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded...
[Modeling and Simulation of Spectral Polarimetric BRDF].
Ling, Jin-jiang; Li, Gang; Zhang, Ren-bin; Tang, Qian; Ye, Qiu
2016-01-01
Under polarized light, the reflection from an object's surface is affected by many factors: the refractive index, the surface roughness, and the angle of incidence. Because a rough surface exhibits different polarized reflection characteristics at different wavelengths, a spectral polarimetric BRDF based on Kirchhoff theory is proposed. A spectral model of the complex refractive index is built by combining spectral models of the refractive index and the extinction coefficient, which are obtained from known complex refractive index values at different wavelengths. A spectral model of surface roughness is then derived from the classical surface roughness measurement method combined with the Fresnel reflection function. Substituting the spectral models of refractive index and roughness into the BRDF model yields the spectral polarimetric BRDF model. Comparing simulation results for the case where the refractive index varies with wavelength at constant roughness, the case where both the refractive index and roughness vary with wavelength, and the original model from other papers shows that the spectral polarimetric BRDF model represents the polarization characteristics of the surface accurately and can provide a reliable basis for polarization remote sensing and for the classification of substances.
Spectral methods in numerical plasma simulation
International Nuclear Information System (INIS)
Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
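The doubly periodic case described above admits a very short illustration. A hedged sketch (invented grid and right-hand side, not the authors' algorithm) of solving Poisson's equation by a two-dimensional Fourier expansion:

```python
import numpy as np

def poisson_periodic(f, L=2.0 * np.pi):
    # solve laplacian(phi) = f on a doubly periodic square via Fourier modes
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                    # avoid division by zero for the mean mode
    phi_hat = -np.fft.fft2(f) / k2
    phi_hat[0, 0] = 0.0               # fix the arbitrary constant (zero mean)
    return np.real(np.fft.ifft2(phi_hat))

n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
phi_exact = np.sin(X) * np.cos(2.0 * Y)   # laplacian = -5 * phi_exact
phi = poisson_periodic(-5.0 * phi_exact)
print(np.max(np.abs(phi - phi_exact)))    # spectrally accurate solution
```

The Chebyshev-Fourier annulus case in the abstract replaces the radial Fourier factor with a Chebyshev expansion, but the structure of the solve is the same.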
Constellation modulation - an approach to increase spectral efficiency.
Dash, Soumya Sunder; Pythoud, Frederic; Hillerkuss, David; Baeuerle, Benedikt; Josten, Arne; Leuchtmann, Pascal; Leuthold, Juerg
2017-07-10
Constellation modulation (CM) is introduced as a new degree of freedom to increase the spectral efficiency and to further approach the Shannon limit. Constellation modulation is the art of encoding information not only in the symbols within a constellation but also by selecting the constellation itself from a set of constellations that are switched from time to time. The set of constellations is not limited to partitions of a given constellation but can, e.g., be obtained from an existing constellation by applying geometrical transformations such as rotations, translations, scaling, or even more abstract transformations. The architecture of the transmitter and the receiver allows constellation modulation to be used on top of existing modulations with little penalty on the bit-error ratio (BER) or on the required signal-to-noise ratio (SNR). The spectral bandwidth used by this modulation scheme is identical to that of the original modulation. Simulations demonstrate a particular advantage of the scheme in low-SNR situations. For instance, it is demonstrated by simulation that spectral efficiency increases of up to 33% and 20% can be obtained at BERs of 10⁻³ and 2×10⁻², respectively, for a regular BPSK modulation format. Applying constellation modulation, we derive a most power-efficient 4D-CM-BPSK modulation format that provides a spectral efficiency of 0.7 bit/s/Hz for an SNR of 0.2 dB at a BER of 2×10⁻².
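To make the idea concrete, here is a toy construction of our own (not the authors' scheme): one extra bit per block is carried by choosing between a BPSK constellation and its 90°-rotated copy; the receiver first detects which constellation was used, then demodulates the symbols.

```python
import numpy as np

CONSTELLATIONS = [np.array([1.0 + 0j, -1.0 + 0j]),   # plain BPSK
                  np.array([0 + 1.0j, 0 - 1.0j])]    # rotated by 90 degrees

def cm_modulate(sym_bits, const_bit):
    # the chosen constellation index itself carries one extra bit per block
    return CONSTELLATIONS[const_bit][np.asarray(sym_bits)]

def cm_demodulate(rx):
    # pick the constellation whose summed nearest-point distance is smallest
    costs = [np.sum(np.min(np.abs(rx[:, None] - c[None, :]), axis=1))
             for c in CONSTELLATIONS]
    const_bit = int(np.argmin(costs))
    c = CONSTELLATIONS[const_bit]
    sym_bits = np.argmin(np.abs(rx[:, None] - c[None, :]), axis=1)
    return sym_bits.tolist(), const_bit

tx = cm_modulate([0, 1, 1, 0], const_bit=1)
noise = 0.05 * np.random.default_rng(0).standard_normal(4)
bits, extra = cm_demodulate(tx + noise)
print(bits, extra)   # -> [0, 1, 1, 0] 1
```

Five bits are recovered from four symbol slots here, at the cost of block-wise constellation detection; the paper's geometric-transformation sets generalize this toy rotation.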
Fourier spectral simulations for wake fields in conducting cavities
International Nuclear Information System (INIS)
Min, M.; Chin, Y.-H.; Fischer, P.F.; Chae, Y.-Chul; Kim, K.-J.
2007-01-01
We investigate Fourier spectral time-domain simulations applied to wake field calculations in two-dimensional cylindrical structures. The scheme involves second-order explicit leap-frogging in time and Fourier spectral approximation in space, which is obtained by simply replacing the spatial differentiation operator of the Yee scheme with the Fourier differentiation operator on nonstaggered grids. This is a first step toward investigating high-order computational techniques based on the Fourier spectral method, which is relatively simple to implement.
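A hedged one-dimensional analogue of the scheme (leap-frog in time, Fourier differentiation in space on a nonstaggered grid; the wave speed, grid and time step are invented) can be sketched as:

```python
import numpy as np

def step_wave(u_prev, u_curr, c, dt, L):
    # leap-frog in time; u_xx from Fourier differentiation (nonstaggered grid)
    k = 2.0 * np.pi * np.fft.fftfreq(u_curr.size, d=L / u_curr.size)
    u_xx = np.real(np.fft.ifft(-(k**2) * np.fft.fft(u_curr)))
    return 2.0 * u_curr - u_prev + (c * dt)**2 * u_xx

L, n, c, dt = 2.0 * np.pi, 64, 1.0, 1e-3
x = np.linspace(0.0, L, n, endpoint=False)
u_prev = np.sin(x)                    # exact standing wave at t = 0
u_curr = np.sin(x) * np.cos(c * dt)   # and at t = dt
for _ in range(999):                  # advance u_curr to t = 1.0
    u_prev, u_curr = u_curr, step_wave(u_prev, u_curr, c, dt, L)
err = np.max(np.abs(u_curr - np.sin(x) * np.cos(1.0)))
print(err)                            # small: only the O(dt^2) time error remains
```

The spatial error is at round-off level, so the remaining error is the familiar second-order leap-frog phase error, which is the trade-off the abstract describes.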
Prototype simulates remote sensing spectral measurements on fruits and vegetables
Hahn, Federico
1998-09-01
A prototype was designed to simulate spectral packinghouse measurements in order to simplify fruit and vegetable damage assessment. A computerized spectrometer is used together with lenses and an externally controlled illumination in order to have a remote sensing simulator. A laser is introduced between the spectrometer and the lenses in order to mark the zone where the measurement is being taken. This facilitates further correlation work and can assure that the physical and remote sensing measurements are taken in the same place. Tomato ripening and mango anthracnose spectral signatures are shown.
Enamel dose calculation by electron paramagnetic resonance spectral simulation technique
International Nuclear Information System (INIS)
Dong Guofu; Cong Jianbo; Guo Linchao; Ning Jing; Xian Hong; Wang Changzhen; Wu Ke
2011-01-01
Objective: To optimize enamel electron paramagnetic resonance (EPR) spectral processing by using an EPR spectral simulation method, so as to improve the accuracy of enamel EPR dosimetry and reduce artificial error. Methods: Multi-component superimposed EPR powder spectrum simulation software was developed to simulate EPR spectrum models of the background signal (BS) and the radiation-induced signal (RS) of irradiated enamel, respectively. RS was extracted from the multi-component superimposed spectrum of irradiated enamel and its amplitude was calculated. A dose-response curve was then established for calculating the doses of a group of enamel samples. The estimated doses were compared with those calculated by the traditional method. Results: BS was simulated as a powder spectrum of Gaussian line shape with spectrum parameters g=2.0035 and Hpp=0.65-1.1 mT; RS was also simulated as a powder spectrum, but with axisymmetric spectrum characteristics. The spectrum parameters of RS were g⊥=2.0018, g∥=1.9965, and Hpp=0.335-0.4 mT. The amplitude of RS had a linear response to radiation dose, with the regression equation y=240.74x+76 724 (R²=0.9947). The expected relative error of the dose estimates was 0.13. Conclusions: The EPR simulation method has improved the accuracy and reliability of enamel EPR dose estimation to some extent. (authors)
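The dose estimation implied by the quoted regression reduces to inverting the fitted line; a short sketch using only the slope and intercept given above (the test amplitude is invented):

```python
# dose response from the abstract: y = 240.74 * x + 76 724
# (y: RS amplitude in arbitrary units, x: dose)
SLOPE, INTERCEPT = 240.74, 76724.0

def dose_from_amplitude(amplitude):
    # invert the fitted line to recover the dose of an unknown sample
    return (amplitude - INTERCEPT) / SLOPE

# hypothetical sample whose RS amplitude corresponds to a dose of 5 units
print(round(dose_from_amplitude(INTERCEPT + SLOPE * 5.0), 6))   # -> 5.0
```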
Order and correlations in genomic DNA sequences. The spectral approach
International Nuclear Information System (INIS)
Lobzin, Vasilii V; Chechetkin, Vladimir R
2000-01-01
The structural analysis of genomic DNA sequences is discussed in the framework of the spectral approach, which is sufficiently universal due to the reciprocal correspondence and mutual complementarity of Fourier transform length scales. The spectral characteristics of random sequences of the same nucleotide composition possess the property of self-averaging for relatively short sequences of length M ≥ 100-300. Comparison with the characteristics of random sequences determines the statistical significance of the observed structural features. Apart from traditional applications to the search for hidden periodicities, spectral methods are also efficient in studying mutual correlations in DNA sequences. By combining spectra for structure factors and correlation functions, not only can integral correlations be estimated but their origin can also be identified. Using the structural spectral entropy approach, the regularity of a sequence can be quantitatively assessed. A brief introduction to the problem is also presented and other major methods of DNA sequence analysis are described. (reviews of topical problems)
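The search for hidden periodicities mentioned above can be illustrated with a toy sequence (invented; real genomic signals are far weaker): each nucleotide is mapped to an indicator sequence, and the summed Fourier power exposes the repeat period.

```python
import numpy as np

def dna_power_spectrum(seq):
    # sum the power spectra of the four nucleotide indicator sequences
    power = np.zeros(len(seq))
    for base in "ACGT":
        u = np.array([1.0 if s == base else 0.0 for s in seq])
        u -= u.mean()                   # remove the composition (DC) term
        power += np.abs(np.fft.fft(u))**2
    return power / len(seq)

seq = "ATG" * 100                       # artificial, perfectly period-3 repeat
p = dna_power_spectrum(seq)
peak = int(np.argmax(p[1 : len(seq) // 2 + 1])) + 1
print(peak, len(seq) / peak)            # strongest mode corresponds to period 3
```

A period-3 peak like this is the classic signature of protein-coding regions; the review's point is that comparing such peaks against random sequences of the same composition gives their statistical significance.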
A Bayesian approach to spectral quantitative photoacoustic tomography
International Nuclear Information System (INIS)
Pulkkinen, A; Kaipio, J P; Tarvainen, T; Cox, B T; Arridge, S R
2014-01-01
A Bayesian approach to the optical reconstruction problem associated with spectral quantitative photoacoustic tomography is presented. The approach is derived for commonly used spectral tissue models of optical absorption and scattering: the absorption is described as a weighted sum of absorption spectra of known chromophores (spatially dependent chromophore concentrations), while the scattering is described using Mie scattering theory, with the proportionality constant and spectral power law parameter both spatially-dependent. It is validated using two-dimensional test problems composed of three biologically relevant chromophores: fat, oxygenated blood and deoxygenated blood. Using this approach it is possible to estimate the Grüneisen parameter, the absolute chromophore concentrations, and the Mie scattering parameters associated with spectral photoacoustic tomography problems. In addition, the direct estimation of the spectral parameters is compared to estimates obtained by fitting the spectral parameters to estimates of absorption, scattering and Grüneisen parameter at the investigated wavelengths. It is shown with numerical examples that the direct estimation results in better accuracy of the estimated parameters. (papers)
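The spectral tissue model described above is, at its core, a linear mixture of known chromophore spectra; a hedged toy version with invented spectra and concentrations (not real chromophore data, and without the Bayesian machinery or the Mie scattering model) shows the unmixing step:

```python
import numpy as np

# rows: wavelengths, columns: chromophores ("fat", "HbO2", "Hb");
# all numbers are made up for the illustration
E = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.4],
              [0.1, 0.3, 0.9],
              [0.5, 0.5, 0.2]])
c_true = np.array([0.2, 1.5, 0.7])     # "concentrations"
mu_a = E @ c_true                      # absorption at the four wavelengths

# with noise-free data the concentrations follow from least squares
c_est, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
print(np.round(c_est, 6))
```

The paper replaces this naive inversion with a Bayesian estimate, which also yields uncertainty information and handles the nonlinear scattering and Grüneisen parameters.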
Sensitive detection of aerosol effect on simulated IASI spectral radiance
International Nuclear Information System (INIS)
Quan, X.; Huang, H.-L.; Zhang, L.; Weisz, E.; Cao, X.
2013-01-01
Guided by radiative transfer modeling of the effects of dust (aerosol) on the satellite thermal infrared radiance measured by many different imaging radiometers, in this article we present the aerosol-induced changes in the satellite radiative signal at the top of the atmosphere (TOA). The TOA radiance for the Infrared Atmospheric Sounding Interferometer (IASI) is simulated using the RTTOV fast radiative transfer model. The model computation uses representative geographical atmospheric models and typical default aerosol climatological models under clear-sky conditions. The radiative differences (in units of equivalent black-body brightness temperature differences (BTDs)) between radiances simulated without aerosol (aerosol-free) and with various aerosol models (aerosol-modified) are calculated over the whole IASI spectrum between 3.62 and 15.5 μm. BTDs are compared across 11 aerosol models in 5 classified atmospheric models. The results show that the Desert aerosol model has a more significant impact on the simulated IASI spectral radiances than the other aerosol models (Continental, Urban, Maritime types and so on) in Mid-latitude Summer, owing to the mineral aerosol components it contains. The BTDs reach up to 1 K at peak points. The atmospheric window region between 900 and 1100 cm⁻¹ (9.09-11.11 μm) is found to contain the largest aerosol-induced radiance differences. BTDs in the IASI spectral region between 645 and 1200 cm⁻¹ show the largest oscillation and cover the major part of the whole spectrum. The IASI window peak-point channels (such as 9.4 and 10.2 μm) are identified as the most sensitive ones in the simulated IASI radiance.
A singular-value decomposition approach to X-ray spectral estimation from attenuation data
International Nuclear Information System (INIS)
Tominaga, Shoji
1986-01-01
A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured with various filtrations. This estimation problem with noisy measurements is formulated as the problem of solving a system of linear equations with an ill-conditioned nature. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect of the filtrations at various energies, can be expanded into a summation of inherent component matrices, so that the spectral distributions can be represented as a linear combination of component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. Results of applying the approach to the spectral distributions emitted from a therapeutic X-ray generator are shown. Finally, some advantages of this approach are pointed out. (orig.)
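The essence of the approach, stabilizing an ill-conditioned linear system by keeping only the dominant singular components, can be sketched with synthetic data (invented matrix and spectrum, not the paper's response matrix):

```python
import numpy as np

def truncated_svd_solve(A, m, k):
    # solve A s = m keeping only the k largest singular components,
    # discarding the noise-amplifying small singular values
    U, sv, Vt = np.linalg.svd(A, full_matrices=False)
    inv = np.where(np.arange(sv.size) < k, 1.0 / sv, 0.0)
    return Vt.T @ (inv * (U.T @ m))

rng = np.random.default_rng(1)
A = rng.random((20, 10))
A[:, -1] = A[:, -2] + 1e-9 * rng.random(20)   # nearly dependent columns
s_true = rng.random(10)
m = A @ s_true + 1e-6 * rng.random(20)        # noisy "attenuation" data
s9 = truncated_svd_solve(A, m, k=9)
print(np.linalg.norm(A @ s9 - m) < 1e-3)      # data still well explained
```

The abstract's criterion function plays the role of choosing k; dropping too few components lets noise blow up the estimate, dropping too many discards real spectral structure.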
Spectrally balanced chromatic landing approach lighting system
Chase, W. D. (Inventor)
1981-01-01
Red warning lights delineate the runway approach with additional blue lights juxtaposed with the red lights such that the red lights are chromatically balanced. The red/blue point light sources result in the phenomenon that the red lights appear in front of the blue lights with about one and one-half times the diameter of the blue. To a pilot observing these lights along a glide path, those red lights directly below appear to be nearer than the blue lights. For those lights farther away seen in perspective at oblique angles, the red lights appear to be in a position closer to the pilot and hence appear to be above the corresponding blue lights. This produces a very pronounced three dimensional effect referred to as chromostereopsis which provides valuable visual cues to enable the pilot to perceive his actual position above the ground and the actual distance to the runway.
Approach to simulation effectiveness
CSIR Research Space (South Africa)
Goncalves, DPD
2006-07-01
Full Text Available The context and purpose of simulation are important in answering the question. If the simulation is viewed as a system, it follows that it has stakeholders and requirements originating from the creating system. An important result is that measures...
Simulated galaxy interactions as probes of merger spectral energy distributions
Energy Technology Data Exchange (ETDEWEB)
Lanz, Lauranne; Zezas, Andreas; Smith, Howard A.; Ashby, Matthew L. N.; Fazio, Giovanni G.; Hernquist, Lars [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Hayward, Christopher C. [Heidelberger Institut für Theoretische Studien, Schloss-Wolfsbrunnenweg 35, D-69118 Heidelberg (Germany); Brassington, Nicola, E-mail: llanz@ipac.caltech.edu [School of Physics, Astronomy and Mathematics, University of Hertfordshire, College Lane, Hatfield, AL10 9AB (United Kingdom)
2014-04-10
We present the first systematic comparison of ultraviolet-millimeter spectral energy distributions (SEDs) of observed and simulated interacting galaxies. Our sample is drawn from the Spitzer Interacting Galaxy Survey and probes a range of galaxy interaction parameters. We use 31 galaxies in 14 systems which have been observed with Herschel, Spitzer, GALEX, and 2MASS. We create a suite of GADGET-3 hydrodynamic simulations of isolated and interacting galaxies with stellar masses comparable to those in our sample of interacting galaxies. Photometry for the simulated systems is then calculated with the SUNRISE radiative transfer code for comparison with the observed systems. For most of the observed systems, one or more of the simulated SEDs match reasonably well. The best matches recover the infrared luminosity and the star formation rate of the observed systems, and the more massive systems preferentially match SEDs from simulations of more massive galaxies. The most morphologically distorted systems in our sample are best matched to the simulated SEDs that are close to coalescence, while less evolved systems match well with the SEDs over a wide range of interaction stages, suggesting that an SED alone is insufficient for identifying the interaction stage except during the most active phases in strongly interacting systems. This result is supported by our finding that the SEDs calculated for simulated systems vary little over the interaction sequence.
A spectral unaveraged algorithm for free electron laser simulations
International Nuclear Information System (INIS)
Andriyash, I.A.; Lehe, R.; Malka, V.
2015-01-01
We propose and discuss a numerical method to model electromagnetic emission from oscillating relativistic charged particles and its coherent amplification. The developed technique is well suited for free electron laser simulations, but it may also be useful for a wider range of physical problems involving resonant field–particle interactions. The algorithm integrates the unaveraged coupled equations for the particles and the electromagnetic fields in a discrete spectral domain. Using this algorithm, it is possible to perform full three-dimensional or axisymmetric simulations of short-wavelength amplification. In this paper we describe the method and its implementation, and we present examples of free electron laser simulations, comparing the results with those provided by commonly known free electron laser codes.
Gregg, Watson W.; Rousseaux, Cecile S.
2016-01-01
The importance of including directional and spectral light in simulations of ocean radiative transfer was investigated using a coupled biogeochemical-circulation-radiative model of the global oceans. The effort focused on phytoplankton abundances, nutrient concentrations and vertically-integrated net primary production. The question was approached by sequentially removing directional (i.e., direct vs. diffuse) and spectral irradiance and comparing results for the above variables to a fully directionally and spectrally resolved model. In each case the total irradiance was kept constant; only the pathways and spectral nature were changed. Assuming all irradiance was diffuse had a negligible effect on global ocean primary production. Global nitrate and total chlorophyll concentrations declined by about 20% each. The largest changes occurred in the tropics and sub-tropics rather than at high latitudes, where most of the irradiance is already diffuse. Disregarding spectral irradiance had effects that depended upon the choice of attenuation wavelength. The wavelength closest to the spectrally-resolved model, 500 nm, produced lower nitrate (19%) and chlorophyll (8%) and higher primary production (2%) than the spectral model. Phytoplankton relative abundances were very sensitive to the choice of non-spectral wavelength transmittance. The combined effects of neglecting both directional and spectral irradiance exacerbated the differences, despite using attenuation at 500 nm. Global nitrate decreased 33% and chlorophyll decreased 24%. Changes in phytoplankton community structure were considerable, representing a change from chlorophytes to cyanobacteria and coccolithophores. This suggested a shift in community function, from light-limitation to nutrient limitation: lower demands for nutrients from cyanobacteria and coccolithophores favored them over the more nutrient-demanding chlorophytes. Although diatoms have the highest nutrient demands in the model, their
Lowet, Eric; Roberts, Mark J.; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter
2016-01-01
Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information flow.
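The PLV estimate explored above can be sketched in a few lines (a hedged illustration with synthetic sinusoids; the SSD preprocessing step is omitted and the Hilbert transform is taken via FFT):

```python
import numpy as np

def analytic_signal(x):
    # FFT-based Hilbert transform: zero negative frequencies, double positives
    n = x.size
    h = np.zeros(n)
    h[0] = 1.0
    h[1 : n // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def plv(x, y):
    # phase-locking value: magnitude of the mean phase-difference vector
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 1000.0
t = np.arange(4000) / fs                      # 4 s at 1 kHz (invented)
locked = np.sin(2 * np.pi * 40 * t)
partner = np.sin(2 * np.pi * 40 * t - np.pi / 4)   # constant pi/4 lag
unlocked = np.sin(2 * np.pi * 47 * t)              # drifting phase difference
print(round(plv(locked, partner), 3), round(plv(locked, unlocked), 3))  # -> 1.0 0.0
```

A constant phase lag gives PLV = 1 regardless of its size, while a uniformly drifting phase difference averages to zero, which is the property that makes PLV robust across synchronization regimes.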
An Objective Approach to Identify Spectral Distinctiveness for Hearing Impairment
Directory of Open Access Journals (Sweden)
Yeou-Jiunn Chen
2013-01-01
Full Text Available To facilitate the process of developing speech perception, speech-language pathologists have to teach a subject with hearing loss the differences between two syllables by manually enhancing acoustic cues of speech. However, this process is time consuming and difficult. Thus, this study proposes an objective approach to automatically identify the regions of spectral distinctiveness between two syllables, which is used for speech-perception training. To accurately represent the characteristics of speech, mel-frequency cepstrum coefficients are selected as analytical parameters. The mismatch between two syllables in the time domain is handled by dynamic time warping. Further, a filter bank is adopted to estimate the components in different frequency bands, which are also represented as mel-frequency cepstrum coefficients. The spectral distinctiveness in different frequency bands is then easily estimated by using Euclidean metrics. Finally, a morphological gradient operator is applied to automatically identify the regions of spectral distinctiveness. To evaluate the proposed approach, the identified regions are manipulated and the manipulated syllables are measured by a closed-set speech-perception test. The experimental results demonstrated that the identified regions of spectral distinctiveness are very useful in speech perception, which can help speech-language pathologists in speech-perception training.
Bautista, Pinky A; Yagi, Yukako
2012-05-01
Hematoxylin and eosin (H&E) stain is currently the most popular stain for routine histopathology. Special and/or immunohistochemical (IHC) staining is often requested to further corroborate the initial diagnosis made on H&E-stained tissue sections. Digital simulation of staining (or digital staining) can be a very valuable tool to produce the desired stained images from the H&E-stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images that combines the effects of spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of the multispectral pixel by the weighted difference between the pixel's original and estimated spectrum, where the spectrum is estimated from an M-dimensional representation of the original spectrum. The enhanced spectrum is then transformed to the spectral configuration associated with its reaction to a specific stain by means of an N × N transformation matrix, which is derived by applying the least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E-stained multispectral image to its Masson's trichrome stained equivalent show the viability of the method.
Blood velocity estimation using ultrasound and spectral iterative adaptive approaches
DEFF Research Database (Denmark)
Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2011-01-01
This paper proposes two novel iterative data-adaptive spectral estimation techniques for blood velocity estimation using medical ultrasound scanners. The techniques make no assumption on the sampling pattern of the emissions or the depth samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the techniques are shown, using both simplified and more realistic Field II simulations as well as in vivo data, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30...
Numerical Methods for Stochastic Computations A Spectral Method Approach
Xiu, Dongbin
2010-01-01
The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory, and describes the basic theory of gPC methods...
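A minimal gPC illustration (our own sketch under simple assumptions, not from the book): the output u(ξ) = ξ² + 3 with ξ ~ N(0,1) is projected onto probabilists' Hermite polynomials by Gauss-Hermite quadrature, and the mean and variance follow directly from the coefficients.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def gpc_coefficients(f, order):
    # project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_n:
    # c_n = E[f(xi) He_n(xi)] / n!   (since E[He_n^2] = n!)
    x, w = hermegauss(order + 8)            # Gauss-Hermite nodes and weights
    w = w / np.sqrt(2.0 * np.pi)            # normalize to the Gaussian measure
    return np.array([np.sum(w * f(x) * hermeval(x, [0.0] * n + [1.0]))
                     / math.factorial(n) for n in range(order + 1)])

c = gpc_coefficients(lambda xi: xi**2 + 3.0, order=4)
mean = c[0]                                  # E[u] is the zeroth coefficient
var = sum(math.factorial(n) * c[n]**2 for n in range(1, c.size))
print(round(mean, 6), round(var, 6))
```

For this input, u = He₂(ξ) + 4·He₀(ξ), so the projection recovers mean 4 and variance 2 exactly; for non-polynomial outputs the expansion converges spectrally fast, which is the book's central theme.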
A multimodal spectral approach to characterize rhythm in natural speech.
Alexandrou, Anna Maria; Saarinen, Timo; Kujala, Jan; Salmelin, Riitta
2016-01-01
Human utterances demonstrate temporal patterning, also referred to as rhythm. While simple oromotor behaviors (e.g., chewing) feature a salient periodical structure, conversational speech displays a time-varying quasi-rhythmic pattern. Quantification of periodicity in speech is challenging. Unimodal spectral approaches have highlighted rhythmic aspects of speech. However, speech is a complex multimodal phenomenon that arises from the interplay of articulatory, respiratory, and vocal systems. The present study addressed the question of whether a multimodal spectral approach, in the form of coherence analysis between electromyographic (EMG) and acoustic signals, would allow one to characterize rhythm in natural speech more efficiently than a unimodal analysis. The main experimental task consisted of speech production at three speaking rates; a simple oromotor task served as control. The EMG-acoustic coherence emerged as a sensitive means of tracking speech rhythm, whereas spectral analysis of either EMG or acoustic amplitude envelope alone was less informative. Coherence metrics seem to distinguish and highlight rhythmic structure in natural speech.
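The coherence analysis between motor and acoustic signals can be mimicked with synthetic data sharing a common rhythm (a hedged sketch using `scipy.signal.coherence`; all signal parameters below are invented, not the study's recordings):

```python
import numpy as np
from scipy.signal import coherence

# Two signals sharing a common 4 Hz "speech rhythm" plus independent noise
fs = 1000.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
rhythm = np.sin(2 * np.pi * 4 * t)
emg = rhythm + 0.5 * rng.standard_normal(t.size)            # stand-in EMG
acoustic_env = rhythm + 0.5 * rng.standard_normal(t.size)   # stand-in envelope

f, Cxy = coherence(emg, acoustic_env, fs=fs, nperseg=4096)
peak = f[np.argmax(Cxy)]
print(round(peak, 2))  # coherence peaks near the shared 4 Hz rhythm
```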
Spectral Subtraction Approach for Interference Reduction of MIMO Channel Wireless Systems
Directory of Open Access Journals (Sweden)
Tomohiro Ono
2005-08-01
Full Text Available. In this paper, a generalized spectral subtraction approach for reducing additive impulsive noise, narrowband signals, white Gaussian noise, and DS-CDMA interference in MIMO-channel DS-CDMA wireless communication systems is investigated. Interference reduction or suppression is an essential problem in wireless mobile communication systems for improving the quality of communication. The spectral subtraction scheme is applied to interference reduction problems for noisy MIMO-channel systems. Interference in the space- and time-domain signals can be effectively suppressed by selecting threshold values, and the computational load of the FFT is not large. Further, channel fading effects are compensated by spectral modification within the spectral subtraction process. In the simulations, the effectiveness of the proposed methods for MIMO-channel DS-CDMA is demonstrated in comparison with conventional MIMO-channel DS-CDMA.
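A bare-bones magnitude spectral subtraction, the core operation the paper generalizes, might look like the following (an illustrative single-channel sketch, not the MIMO DS-CDMA scheme itself; the noise-floor threshold is an assumption):

```python
import numpy as np

def spectral_subtract(x, noise_floor):
    """Crude spectral subtraction: shrink each FFT bin's magnitude by an
    estimated noise level, keep the phase, and transform back."""
    X = np.fft.fft(x)
    mag_clean = np.maximum(np.abs(X) - noise_floor, 0.0)
    return np.real(np.fft.ifft(mag_clean * np.exp(1j * np.angle(X))))

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
clean = np.sin(2 * np.pi * 50 * t / n)
noisy = clean + 0.3 * rng.standard_normal(n)
# Threshold chosen near the expected per-bin noise magnitude
denoised = spectral_subtract(noisy, noise_floor=0.3 * np.sqrt(n))
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)
print(err_after < err_before)
```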
Energy Technology Data Exchange (ETDEWEB)
Seaman, C.H.
1981-01-15
A general expression has been derived to enable calculation of the calibration error resulting from simulator-solar AMX spectral mismatch and from reference cell-test cell spectral mismatch. The information required includes the relative spectral response of the reference cell, the relative spectral response of the cell under test, and the relative spectral irradiance of the simulator (over the spectral range defined by cell response). The spectral irradiance of the solar AMX is assumed to be known.
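The quantities listed above are exactly those entering the textbook spectral mismatch factor; a sketch of that standard expression (assumed here rather than quoted from the report; the spectra below are made up):

```python
import numpy as np

def mismatch_factor(sr_ref, sr_test, e_sim, e_ref, wl):
    """Classic spectral mismatch factor between a simulator spectrum e_sim
    and a reference spectrum e_ref, given the relative spectral responses
    of reference and test cells sampled on the uniform grid wl.
    A value of 1 means no mismatch-induced calibration error."""
    dl = wl[1] - wl[0]                      # uniform spacing (cancels in ratio)
    integ = lambda y: np.sum(y) * dl
    return (integ(sr_test * e_sim) * integ(sr_ref * e_ref)) / \
           (integ(sr_test * e_ref) * integ(sr_ref * e_sim))

wl = np.linspace(400, 1100, 200)
sr_a = np.exp(-((wl - 800) / 200) ** 2)     # made-up relative spectral responses
sr_b = np.exp(-((wl - 650) / 150) ** 2)
e_ref = np.ones_like(wl)                    # flat stand-in reference spectrum
e_sim = np.exp(-((wl - 900) / 300) ** 2)    # simulator spectrum differing from it
print(mismatch_factor(sr_a, sr_b, e_ref, e_ref, wl))  # -> 1.0 when spectra match
print(mismatch_factor(sr_a, sr_b, e_sim, e_ref, wl))  # deviates from 1 otherwise
```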
Rapid simulation of spatial epidemics: a spectral method.
Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J
2015-04-07
Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
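The FFT-based rate recalculation at the heart of the method can be sketched in a few lines: the force of infection over a grid is the circular convolution of the infection "image" with the isotropic kernel (a minimal illustration with an assumed exponential kernel):

```python
import numpy as np

n = 64
infected = np.zeros((n, n))
infected[32, 32] = 1.0                      # one infectious site
y, x = np.ogrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = np.exp(-np.hypot(x, y) / 5.0)      # isotropic transmission kernel
kernel = np.fft.ifftshift(kernel)           # centre the kernel at index (0, 0)

# Force of infection = convolution(infection image, kernel), via FFT
foi = np.real(np.fft.ifft2(np.fft.fft2(infected) * np.fft.fft2(kernel)))
# Direct check at one point: distance from (32, 32) to (32, 40) is 8
print(np.isclose(foi[32, 40], np.exp(-8 / 5.0)))
```

For many infectious sites the cost stays at one pair of FFTs per update, which is the source of the method's speed-up over pairwise summation.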
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm, in which the markers are extracted from the classification maps obtained by both an SVM and the watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data set is used. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
A new dynamical downscaling approach with GCM bias corrections and spectral nudging
Xu, Zhongfeng; Yang, Zong-Liang
2015-04-01
To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31-year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.
Spectral optimization simulation of white light based on the photopic eye-sensitivity curve
Energy Technology Data Exchange (ETDEWEB)
Dai, Qi, E-mail: qidai@tongji.edu.cn [College of Architecture and Urban Planning, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Institute for Advanced Study, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Key Laboratory of Ecology and Energy-saving Study of Dense Habitat (Tongji University), Ministry of Education, 1239 Siping Road, Shanghai 200092 (China); Hao, Luoxi; Lin, Yi; Cui, Zhe [College of Architecture and Urban Planning, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Key Laboratory of Ecology and Energy-saving Study of Dense Habitat (Tongji University), Ministry of Education, 1239 Siping Road, Shanghai 200092 (China)
2016-02-07
Spectral optimization simulation of white light is studied to boost maximum attainable luminous efficacy of radiation at high color-rendering index (CRI) and various color temperatures. The photopic eye-sensitivity curve V(λ) is utilized as the dominant portion of white light spectra. Emission spectra of a blue InGaN light-emitting diode (LED) and a red AlInGaP LED are added to the spectrum of V(λ) to match white color coordinates. It is demonstrated that at the condition of color temperature from 2500 K to 6500 K and CRI above 90, such white sources can achieve spectral efficacy of 330–390 lm/W, which is higher than the previously reported theoretical maximum values. We show that this eye-sensitivity-based approach also has advantages on component energy conversion efficiency compared with previously reported optimization solutions.
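The headline quantity, luminous efficacy of radiation, is straightforward to compute for any spectrum (a sketch using a Gaussian stand-in for the CIE V(λ) curve rather than the tabulated data; per the paper's idea, the source spectrum is shaped like V(λ) itself):

```python
import numpy as np

def luminous_efficacy(spd, v):
    """Luminous efficacy of radiation in lm/W: 683 * int(V * S) / int(S),
    with both spectra sampled on the same uniform wavelength grid
    (the grid spacing cancels in the ratio)."""
    return 683.0 * np.sum(v * spd) / np.sum(spd)

wl = np.arange(380.0, 781.0, 1.0)
# Crude Gaussian stand-in for the photopic curve V(lambda), peaking at 555 nm
v = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)
ler = luminous_efficacy(v, v)        # a V(lambda)-shaped source spectrum
print(ler)  # 683/sqrt(2) ~ 483 lm/W for this Gaussian stand-in
```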
Cai, Yaomin; Guo, Zhixiong
2018-04-20
The Monte Carlo model was developed to simulate the collimated solar irradiation transfer and energy harvest in a hollow louver made of silica glass and filled with water. The full solar spectrum from the air mass 1.5 database was adopted and divided into various discrete bands for spectral calculations. The band-averaged spectral properties for the silica glass and water were obtained. Ray tracing was employed to find the solar energy harvested by the louver. Computational efficiency and accuracy were examined through intensive comparisons of different band partition approaches, various photon numbers, and element divisions. The influence of irradiation direction on the solar energy harvest efficiency was scrutinized. It was found that within a 15° polar angle of incidence, the harvested solar energy in the louver was high, and the total absorption efficiency reached 61.2% under normal incidence for the current louver geometry.
Local and Global Gestalt Laws: A Neurally Based Spectral Approach.
Favali, Marta; Citti, Giovanna; Sarti, Alessandro
2017-02-01
This letter presents a mathematical model of figure-ground articulation that takes into account both local and global gestalt laws and is compatible with the functional architecture of the primary visual cortex (V1). The local gestalt law of good continuation is described by means of suitable connectivity kernels that are derived from Lie group theory and quantitatively compared with long-range connectivity in V1. Global gestalt constraints are then introduced in terms of spectral analysis of a connectivity matrix derived from these kernels. This analysis performs grouping of local features and individuates perceptual units with the highest salience. Numerical simulations are performed, and results are obtained by applying the technique to a number of stimuli.
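The spectral-grouping step, extracting the most salient perceptual unit from the eigenvectors of a connectivity matrix, can be sketched generically (Gaussian affinities on toy points; these are not the Lie-group kernels of the letter):

```python
import numpy as np

# A tight cluster of 10 points (the salient unit) plus a loose background
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0, 0.2, (10, 2)),
                 rng.normal(3, 1.0, (5, 2))])
d2 = np.sum((pts[:, None] - pts[None, :]) ** 2, axis=-1)
A = np.exp(-d2 / 0.5)                       # Gaussian connectivity (affinity)
vals, vecs = np.linalg.eigh(A)
lead = np.abs(vecs[:, -1])                  # eigenvector of largest eigenvalue
group = np.argsort(lead)[-10:]              # its 10 strongest components
print(sorted(group.tolist()))               # picks out the tight cluster
```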
Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators
Curtis, H. B.
1976-01-01
Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short arc xenon lamp AMO sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. Three types of solar cells studied were a silicon cell, a cadmium sulfide cell and a gallium arsenide cell.
Road simulation for four-wheel vehicle whole input power spectral density
Wang, Jiangbo; Qiang, Baomin
2017-05-01
Since the vibration of a running vehicle comes mainly from the road and influences vehicle ride performance, simulation of the road roughness power spectral density is of great significance for analyzing the parameters of the automobile suspension vibration system and evaluating ride comfort. Firstly, based on the mathematical model of road roughness power spectral density, this paper establishes the integral-white-noise method for random road generation. Then, in the MATLAB/Simulink environment, following the usual progression of suspension research from the simple two-degree-of-freedom single-wheel vehicle model to complex multi-degree-of-freedom vehicle models, a simple single-excitation input simulation model is built. Finally, a spectrum matrix is used to build a whole-vehicle excitation input simulation model. This simulation method rests on reliable and accurate mathematical theory and can be applied to random road simulation of any specified spectrum, providing a pavement excitation model and a foundation for vehicle ride performance research and vibration simulation.
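The integral-white-noise road model mentioned above is commonly written as a first-order filter driven by white noise; a hedged discrete-time sketch (the parameter values and the specific filter form are illustrative, not taken from the paper):

```python
import numpy as np

# z'(t) = -2*pi*f0*v*z(t) + 2*pi*sqrt(G0*v)*w(t): white noise shaped by a
# first-order filter reproduces the ~1/f^2 road roughness spectrum.
rng = np.random.default_rng(4)
dt, n = 0.001, 200_000
f0, v, G0 = 0.01, 20.0, 64e-6        # cutoff (1/m), speed (m/s), roughness coeff
z = np.zeros(n)
w = rng.standard_normal(n) / np.sqrt(dt)   # unit-intensity white noise
for k in range(n - 1):                     # Euler-Maruyama integration
    z[k + 1] = z[k] + dt * (-2 * np.pi * f0 * v * z[k]
                            + 2 * np.pi * np.sqrt(G0 * v) * w[k])
print(round(z.std(), 3))  # sample std near the analytic stationary value ~0.14
```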
A spectral X-ray CT simulation study for quantitative determination of iron
Su, Ting; Kaftandjian, Valérie; Duvauchelle, Philippe; Zhu, Yuemin
2018-06-01
Iron is an essential element in the human body and disorders in iron such as iron deficiency or overload can cause serious diseases. This paper aims to explore the ability of spectral X-ray CT to quantitatively separate iron from calcium and potassium and to investigate the influence of different acquisition parameters on material decomposition performance. We simulated spectral X-ray CT imaging of a PMMA phantom filled with iron, calcium, and potassium solutions at various concentrations (15-200 mg/cc). Different acquisition parameters were considered, such as the number of energy bins (6, 10, 15, 20, 30, 60) and exposure factor per projection (0.025, 0.1, 1, 10, 100 mA s). Based on the simulation data, we investigated the performance of two regularized material decomposition approaches: projection domain method and image domain method. It was found that the former method discriminated iron from calcium, potassium and water in all cases and tended to benefit from lower number of energy bins for lower exposure factor acquisition. The latter method succeeded in iron determination only when the number of energy bins equals 60, and in this case, the contrast-to-noise ratios of the decomposed iron images are higher than those obtained using the projection domain method. The results demonstrate that both methods are able to discriminate and quantify iron from calcium, potassium and water under certain conditions. Their performances vary with the acquisition parameters of spectral CT. One can use one method or the other to benefit better performance according to the data available.
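The image-domain decomposition amounts, per voxel, to a small linear inverse problem: the attenuation measured in each energy bin is a mix of known material attenuation curves. A sketch with invented attenuation numbers (not the simulation's actual coefficients):

```python
import numpy as np

# Per-bin mass attenuation curves for three materials (made-up numbers,
# decreasing with energy as attenuation generally does)
mu = np.array([
    [0.50, 0.38, 0.30, 0.25, 0.21, 0.19],   # "iron"
    [0.35, 0.27, 0.22, 0.19, 0.17, 0.16],   # "calcium"
    [0.30, 0.24, 0.20, 0.17, 0.15, 0.14],   # "potassium"
]).T                                        # shape (bins, materials)

conc_true = np.array([50.0, 100.0, 20.0])   # concentrations in mg/cc
voxel = mu @ conc_true                      # noise-free measured attenuation
conc, *_ = np.linalg.lstsq(mu, voxel, rcond=None)
print(np.allclose(conc, conc_true))         # exact on noise-free data
```

With noisy bins and nearly collinear curves (as for calcium vs. potassium here), this system becomes ill-conditioned, which is why the abstract's regularized methods and bin/exposure trade-offs matter.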
ANALYSIS OF SPECTRAL CHARACTERISTICS AMONG DIFFERENT SENSORS BY USE OF SIMULATED RS IMAGES
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
This research, using an RS image-simulation method, simulated apparent-reflectance images at sensor level and ground-reflectance images for the corresponding bands of SPOT-HRV, CBERS-CCD, Landsat-TM, and NOAA14-AVHRR. These images were used to analyze the differences among sensors caused by spectral sensitivity and atmospheric impacts. The differences were analyzed in terms of the Normalized Difference Vegetation Index (NDVI). The results showed that differences in sensors' spectral characteristics cause changes in their NDVI and reflectance; when data from multiple sensors are used in digital analysis, this error should be taken into account. Atmospheric effects make NDVI smaller, and atmospheric correction tends to increase NDVI values. The reflectances and NDVIs of different sensors can be used to analyze the differences among sensors' features. The spectral analysis method based on simulated RS images can provide a new way to design the spectral characteristics of new sensors.
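NDVI itself is a one-line computation, which makes the sensor-to-sensor differences easy to reproduce in miniature (the reflectance values below are invented for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Two simulated sensors with slightly different band responses see slightly
# different reflectances for the same target, hence different NDVI
red_a, nir_a = np.array([0.05]), np.array([0.40])   # "sensor A" reflectances
red_b, nir_b = np.array([0.06]), np.array([0.38])   # "sensor B", shifted bands
print(ndvi(nir_a, red_a)[0], ndvi(nir_b, red_b)[0])
```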
Impact of spectral nudging on regional climate simulation over CORDEX East Asia using WRF
Tang, Jianping; Wang, Shuyu; Niu, Xiaorui; Hui, Pinhong; Zong, Peishu; Wang, Xueyuan
2017-04-01
In this study, the impact of the spectral nudging method on regional climate simulation over the Coordinated Regional Climate Downscaling Experiment East Asia (CORDEX-EA) region is investigated using the Weather Research and Forecasting (WRF) model. Driven by the ERA-Interim reanalysis, five continuous simulations covering 1989-2007 are conducted with WRF, four of which adopt interior spectral nudging with different wavenumbers, nudging variables and nudging coefficients. Model validation shows that WRF is able to simulate the spatial distributions and temporal variations of the surface climate (air temperature and precipitation) over the CORDEX-EA domain. By comparison, the spectral nudging technique is effective in improving the model's skill in the following aspects: (1) the simulated biases and root mean square errors of annual mean temperature and precipitation are clearly reduced; the SN3-UVT experiment (spectral nudging with wavenumber 3 in both zonal and meridional directions applied to U, V and T) and the SN6 experiment (wavenumber 6 in both directions applied to U and V) give the best simulations for temperature and precipitation, respectively, and the inter-annual and seasonal variances produced by the SN experiments are also closer to the ERA-Interim reference; (2) the application of spectral nudging in WRF is helpful for simulating extreme temperature and precipitation, and the SN3-UVT simulation shows a clear advantage over the other simulations in depicting both the spatial distributions and inter-annual variances of temperature and precipitation extremes. With spectral nudging, WRF is able to preserve the variability of the large-scale climate information, and therefore adjusts the temperature and precipitation variabilities toward the observation.
Spectral mismatch and solar simulator quality factor in advanced LED solar simulators
Scherff, Maximilian L. D.; Nutter, Jason; Fuss-Kailuweit, Peter; Suthues, Jörn; Brammer, Torsten
2017-08-01
Solar cell simulators based on light-emitting diodes (LEDs) have the potential to achieve a large market share in the coming years. As advantages, they can provide a spectrum that is stable over both short and long timescales and fits very well to the global AM1.5g reference spectrum. This guarantees correct measurements during the flashes and throughout the light engines' life span, respectively. Furthermore, calibration with a solar cell type of different spectral response (SR), or the production of solar cells with varying SR between two calibrations, does not affect the correctness of the measurement result. A high-quality 21-channel LED solar cell simulator spectrum is compared with a former study of a standard modified xenon-spectrum light source. It is shown that the spectrum of the 21-channel LED light source performs best in all examined cases.
International Nuclear Information System (INIS)
Tang, Hong; Lin, Jian-Zhong
2013-01-01
An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction for varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on principal component analysis (PCA) of the first-derivative spectral extinction: by calculating the contribution rate of the first-derivative spectral extinction, spectra with more significant features can be selected as input data, while those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data. -- Highlights: ► Improved ADA is presented for calculating the extinction efficiency of spheroids. ► Selection principle for spectral extinction data is developed based on PCA. ► Improved Tikhonov iteration method is proposed to retrieve the spheroid PSD.
Calculation of isotope selective excitation of uranium isotopes using spectral simulation method
International Nuclear Information System (INIS)
Al-Hassanieh, O.
2009-06-01
Isotope ratio enhancement factor and isotope selectivity of ²³⁵U in five excitation schemes (I: 0→10069 cm⁻¹→IP; II: 0→10081 cm⁻¹→IP; III: 0→25349 cm⁻¹→IP; IV: 0→28650 cm⁻¹→IP; V: 0→16900 cm⁻¹→34659 cm⁻¹→IP) were computed by a spectral simulation approach. The effect of laser bandwidth and Doppler width on the isotope ratio enhancement factor and isotope selectivity of ²³⁵U has been studied. Photoionization scheme V gives the highest isotope ratio enhancement factor. The main factors affecting the separation possibility are the isotope shift and the relative intensity of the transitions between hyperfine levels. The isotope ratio enhancement factor decreases exponentially with increasing Doppler width and laser bandwidth, where the effect of the Doppler width is much greater than that of the laser bandwidth. (author)
Parsani, Matteo
2011-09-01
The main goal of this paper is to develop an efficient numerical algorithm to compute the radiated far-field noise produced by an unsteady flow field from bodies in arbitrary motion. The method computes a turbulent flow field in the near field using a high-order spectral difference method coupled with a large-eddy simulation approach. The unsteady equations are solved by advancing in time using a second-order backward difference formula scheme. The nonlinear algebraic system arising from the time discretization is solved with the nonlinear lower-upper symmetric Gauss-Seidel algorithm. In the second step, the method calculates the far-field sound pressure based on the acoustic source information provided by the first-step simulation. The method is based on the Ffowcs Williams-Hawkings approach, which provides noise contributions for monopole, dipole and quadrupole acoustic sources. This paper focuses on the validation and assessment of this hybrid approach using different test cases: a laminar flow over a two-dimensional (2D) open cavity at Re = 1.5 × 10³ and M = 0.15, and a laminar flow past a 2D square cylinder at Re = 200 and M = 0.5. In order to show the application of the numerical method to industrial cases and to assess its capability for sound field simulation, a three-dimensional turbulent flow in a muffler at Re = 4.665 × 10⁴ and M = 0.05 has been chosen as a third test case. The flow results show good agreement with numerical and experimental reference solutions. Comparison of the computed noise results with reference solutions also shows that the numerical approach predicts noise accurately. © 2011 IMACS.
Spectral Element Method for the Simulation of Unsteady Compressible Flows
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of 2 eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB per-core memory. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.
Numerical Simulations of Kinetic Alfvén Waves to Study Spectral ...
Indian Academy of Sciences (India)
Numerical Simulations of Kinetic Alfvén Waves to Study Spectral Index in Solar Wind Turbulence and Particle Heating. R. P. Sharma & H. D. Singh, Center for Energy Studies, Indian Institute of Technology, Delhi 110 016, India. e-mail: rpsharma@ces.iitd.ernet.in. Abstract: We present numerical simulations of the ...
Spectral unmixing of urban land cover using a generic library approach
Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben
2016-10-01
Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-) automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
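Once a library has been pruned, the subpixel step itself is a linear-unmixing problem; a generic nonnegative-least-squares baseline (synthetic endmembers, not the paper's hybrid algorithm) can be sketched as:

```python
import numpy as np
from scipy.optimize import nnls

# Linear spectral unmixing: a pixel's spectrum is modelled as a nonnegative
# mix of library endmember spectra (columns of E)
bands = 20
rng = np.random.default_rng(5)
E = rng.random((bands, 3))             # 3 endmember spectra (columns)
frac_true = np.array([0.6, 0.3, 0.1])  # true sub-pixel cover fractions
pixel = E @ frac_true                  # noise-free mixed pixel

frac, residual = nnls(E, pixel)        # nonnegative least-squares fractions
print(np.allclose(frac, frac_true))    # exact recovery on noise-free data
```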
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted mainly by AMD graphic cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated a code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
Pseudo-spectral 3D simulations of streamers
A. Luque (Alejandro); U. M. Ebert (Ute); C. Montijn (Carolynne-Sireeh); W. Hundsdorfer (Willem); J. Schmidt; M. Simek; S. Pekarek; V. Prukner
2007-01-01
A three-dimensional code for the simulation of streamers is introduced. The code is based on a fluid model for oxygen-nitrogen mixtures that includes drift, diffusion and attachment of electrons and creation of new charge carriers through impact ionization and photo-ionization.
International Nuclear Information System (INIS)
Birge, Jonathan R.; Kaertner, Franz X.
2008-01-01
We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty
A brute-force spectral approach for wave estimation using measured vessel motions
DEFF Research Database (Denmark)
Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.
2018-01-01
The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct - brute-force - approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing to applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave...
Effective approach to spectroscopy and spectral analysis techniques using Matlab
Li, Xiang; Lv, Yong
2017-08-01
With the development of electronic information, computers and networks, modern education technology has entered a new era, which has had a great impact on the teaching process. Spectroscopy and Spectral Analysis is an elective course for Optoelectronic Information Science and Engineering. The teaching objective of this course is to master the basic concepts and principles of spectroscopy and the basic technical means of spectral analysis and testing, and then to let students use the principles and technology of spectroscopy to study the structure and state of materials as the technology develops. MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language developed by MathWorks; it allows matrix manipulations and plotting of functions and data. Based on teaching practice, this paper summarizes the application of Matlab to the teaching of spectroscopy, which is well suited to current multimedia-assisted teaching.
A time-spectral approach to numerical weather prediction
Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai
2018-05-01
Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency, and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals are typically two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
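The idea of a time-spectral method with Chebyshev representations can be seen in a minimal sketch (not the GWRM itself): solve dy/dt = -y over a single large time interval by Chebyshev collocation, using the standard Chebyshev differentiation matrix. The node count and test equation are illustrative choices.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points (cf. Trefethen's construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return D, x

N = 16
D, x = cheb(N)
t = (x + 1.0) / 2.0                      # map [-1, 1] -> time interval [0, 1]
Dt = 2.0 * D                             # chain rule: d/dt = 2 d/dx

# Collocation for dy/dt = -y, y(0) = 1; t = 0 is the last node (x = -1).
A = Dt + np.eye(N + 1)
A[-1, :] = 0.0
A[-1, -1] = 1.0
b = np.zeros(N + 1)
b[-1] = 1.0
y = np.linalg.solve(A, b)

err = np.max(np.abs(y - np.exp(-t)))
print(f"max error vs exp(-t): {err:.2e}")   # near machine precision for N = 16
```

One solve over the whole interval replaces many small finite-difference steps, which is the efficiency argument made in the abstract; the solution also comes out as a polynomial (Chebyshev) representation rather than point values alone.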
Color film spectral properties test experiment for target simulation
Liu, Xinyue; Ming, Xing; Fan, Da; Guo, Wenji
2017-04-01
In hardware-in-the-loop testing of an aviation spectral camera, liquid crystal light valves and digital micro-mirror devices cannot simulate the spectral characteristics of a landmark. A test system based on color film is proposed for testing the spectral camera, and the spectral characteristics of the color film were tested in this paper. The experiment shows that differences exist between the landmark and film spectral curves. However, the peak of the spectral curve changes according to the color, and the curve is similar to that of standard color targets. Therefore, if the error between the landmark and the film is calibrated and compensated, the film could be used in hardware-in-the-loop tests of the aviation spectral camera.
Paul, Subir; Nagesh Kumar, D.
2018-04-01
Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of a linear and parametric dependency measure, to take care of both linear and nonlinear inter-band dependency in the spectral segmentation of the HS bands. Morphological profiles are then created from the segmented spectral features to assimilate spatial information into the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with Gaussian kernel and Random Forest (RF), are used for classification of the three most popular HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
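A histogram-based mutual information estimate of the kind that can drive such non-parametric inter-band dependency measurement might look like the following sketch, on synthetic "bands" rather than real HS data; unlike a correlation coefficient, it responds to nonlinear dependence as well as linear.

```python
import numpy as np

def band_mutual_information(x, y, bins=32):
    """Histogram-based mutual information (nats) between two spectral bands:
    a nonparametric measure of both linear and nonlinear dependency."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.normal(size=20000)
linear = a + 0.1 * rng.normal(size=a.size)           # linearly dependent band
nonlin = np.abs(a) + 0.1 * rng.normal(size=a.size)   # nonlinearly dependent band
indep = rng.normal(size=a.size)                      # independent band

print(band_mutual_information(a, linear) > band_mutual_information(a, indep))  # True
print(band_mutual_information(a, nonlin) > band_mutual_information(a, indep))  # True
```

The nonlinear case is where MI earns its keep: the correlation between `a` and `nonlin` is near zero, yet MI correctly flags them as strongly dependent, which is the motivation for MI-based rather than correlation-based spectral segmentation.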
Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang
2016-02-01
Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique for detecting bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the product functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal, as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting fault-related impulses.
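The demodulation pipeline described, bandpass filtering around a resonance followed by envelope spectrum analysis, can be sketched as below. The signal model, fault frequency, and the Gaussian (Morlet-magnitude-like) filter are illustrative assumptions; the paper's adaptive parameter selection and spectral subtraction steps are omitted.

```python
import numpy as np

fs, T = 20000, 2.0
t = np.arange(int(fs * T)) / fs
f_fault, f_res = 87.0, 3000.0   # assumed fault rate and structural resonance

# Simulated bearing signal: decaying resonance bursts at the fault rate + noise.
sig = np.zeros_like(t)
for t0 in np.arange(0, T, 1.0 / f_fault):
    m = t >= t0
    sig[m] += np.exp(-800.0 * (t[m] - t0)) * np.sin(2 * np.pi * f_res * (t[m] - t0))
rng = np.random.default_rng(1)
sig += 0.5 * rng.normal(size=sig.size)

# Morlet-like bandpass: Gaussian window in the frequency domain around f_res.
f = np.fft.rfftfreq(sig.size, 1.0 / fs)
S = np.fft.rfft(sig) * np.exp(-0.5 * ((f - f_res) / 500.0) ** 2)
filtered = np.fft.irfft(S, n=sig.size)

# Envelope via the analytic signal (one-sided spectrum), then envelope spectrum.
A = np.fft.fft(filtered)
A[1:sig.size // 2] *= 2.0
A[sig.size // 2 + 1:] = 0.0
env = np.abs(np.fft.ifft(A))
env -= env.mean()
E = np.abs(np.fft.rfft(env))
f_env = np.fft.rfftfreq(env.size, 1.0 / fs)

band = (f_env > 10) & (f_env < 500)
peak = f_env[band][np.argmax(E[band])]
print(f"dominant envelope frequency: {peak:.1f} Hz")  # recovers the fault rate
```

The raw spectrum is dominated by the 3 kHz resonance and noise; only after bandpass filtering and envelope demodulation does the fault characteristic frequency appear as the dominant line, which is the diagnostic logic the abstract describes.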
Simulation and Analysis of Spectral Response Function and Bandwidth of Spectrometer
Directory of Open Access Journals (Sweden)
Zhenyu Gao
2016-01-01
Full Text Available A simulation method for acquiring a spectrometer's Spectral Response Function (SRF) based on the Huygens Point Spread Function (PSF) is suggested. Taking into account the effects of optical aberrations and diffraction, the method can obtain the fine SRF curve and the corresponding spectral bandwidth at any nominal wavelength as early as the design phase. A prism monochromator is used to illustrate the simulation procedure. For comparison, a geometrical ray-tracing method is also provided, with bandwidth deviations varying from 5% at 250 nm to 25% at 2400 nm. Further comparison with reported experiments shows that the areas of the SRF profiles agree to about 1%. However, the weak scattered background light on the level of 10^-4 to 10^-5 observed in experiment could not be covered by this simulation. This simulation method is a useful tool for forecasting the performance of a spectrometer under design.
Ching-Teng Lee; Ming-Chin Wu; Shyh-Chin Chen
2005-01-01
The National Centers for Environmental Prediction (NCEP) regional spectral model (RSM) version 97 was used to investigate the regional summertime climate over Taiwan and adjacent areas for June-July-August of 1990 through 2000. The simulated sea-level-pressure and wind fields of RSM1 with 50-km grid space are similar to the reanalysis, but the strength of the...
Impacts of spectral nudging on the simulation of present-day rainfall patterns over southern Africa
CSIR Research Space (South Africa)
Muthige, Mavhungu S
2016-10-01
Full Text Available ...on the simulation of rainfall patterns in Southern Africa. We use the Conformal-Cubic Atmospheric Model (CCAM) as RCM to downscale ERA-Interim reanalysis data to a resolution of 50 km in the horizontal over the globe. A scale-selective filter (spectral nudging...
Directory of Open Access Journals (Sweden)
Jinxing Liang
2016-01-01
Full Text Available The construction of a spectral discoloration model, based on an aging test and a simulated degradation experiment, is proposed to detect the degree of aging of red lead pigment in ancient murals and to reproduce spectral data supporting digital restoration of the murals. The degradation process of red lead pigment under the aging test conditions was revealed by X-ray diffraction, scanning electron microscopy, and spectrophotometry. The simulated degradation experiment was carried out by proportionally mixing red lead and lead dioxide with reference to the results of the aging test. The experimental results indicated that pure red lead gradually turned into black lead dioxide, and the number of tiny particles in the aged samples increased as aging progressed. Both the chroma and lightness of the red lead pigment decreased with discoloration, while its hue remained essentially unchanged. In addition, the spectral reflectance curves of the aged samples started rising at about 550 nm, with the inflection point moving slightly from about 570 nm to 550 nm. The spectral reflectance of the samples in the long- and short-wavelength regions was fitted well with logarithmic and linear functions, respectively. The spectral discoloration model was established, and measurements of real aged red lead pigment in the Dunhuang murals verified its effectiveness.
Importance of Resolving the Spectral Support of Beam-plasma Instabilities in Simulations
Energy Technology Data Exchange (ETDEWEB)
Shalaby, Mohamad; Broderick, Avery E. [Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Chang, Philip [Department of Physics, University of Wisconsin-Milwaukee, 1900 E. Kenwood Boulevard, Milwaukee, WI 53211 (United States); Pfrommer, Christoph [Heidelberg Institute for Theoretical Studies, Schloss-Wolfsbrunnenweg 35, D-69118 Heidelberg (Germany); Lamberts, Astrid [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Puchwein, Ewald, E-mail: mshalaby@live.ca [Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA (United Kingdom)
2017-10-20
Many astrophysical plasmas are prone to beam-plasma instabilities. For relativistic and dilute beams, the spectral support of the beam-plasma instabilities is narrow, i.e., the linearly unstable modes that grow with rates comparable to the maximum growth rate occupy a narrow range of wavenumbers. This places stringent requirements on the box sizes when simulating the evolution of the instabilities. We identify the implied lower limits on the box size imposed by the longitudinal beam-plasma instability, i.e., typically the most stringent condition required to correctly capture the linear evolution of the instabilities in multidimensional simulations. We find that sizes many orders of magnitude larger than the resonant wavelength are typically required. Using one-dimensional particle-in-cell simulations, we show that failure to sufficiently resolve the spectral support of the longitudinal instability yields slower growth and lower levels of saturation, potentially leading to erroneous physical conclusions.
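The box-size requirement is simple arithmetic: a periodic box of length L only supports modes spaced Δk_box = 2π/L, so placing several modes inside an unstable band of width Δk forces L upward in inverse proportion to the band's relative width. A sketch with made-up illustrative numbers (not values from the paper):

```python
import numpy as np

lam_res = 1.0          # resonant wavelength (arbitrary units); assumed
rel_width = 1e-4       # Δk / k_res: relative spectral width of the unstable band; assumed
modes_in_band = 10     # modes wanted inside the band to capture its growth; assumed

k_res = 2 * np.pi / lam_res
delta_k = rel_width * k_res                    # absolute width of the unstable band
L_min = modes_in_band * 2 * np.pi / delta_k    # box giving mode spacing Δk / modes_in_band
print(f"L_min / lambda_res = {L_min / lam_res:.0f}")  # 100000
```

With a band only 10^-4 of the resonant wavenumber wide, the box must be ~10^5 resonant wavelengths long, which is the "many orders of magnitude" statement in the abstract made concrete.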
Spectrally-balanced chromatic approach-lighting system
Chase, W. D.
1977-01-01
Approach lighting system employing combinations of red and blue lights reduces problem of color-based optical illusions. System exploits inherent chromatic aberration of eye to create three-dimensional effect, giving pilot visual clues of position.
Guided-wave approaches to spectrally selective energy absorption
Stegeman, G. I.; Burke, J. J.
1987-01-01
Results of experiments designed to demonstrate spectrally selective absorption in dielectric waveguides on semiconductor substrates are reported. These experiments were conducted with three waveguides formed by sputtering films of PSK2 glass onto silicon-oxide layers grown on silicon substrates. The three waveguide samples were studied at 633 and 532 nm. The samples differed only in the thickness of the silicon-oxide layer: 256 nm, 506 nm, and 740 nm. Agreement between theoretical predictions and measurements of the propagation constants (mode angles) of the six or seven modes supported by these samples was excellent. However, the loss measurements were inconclusive because of high scattering losses in the structures fabricated (in excess of 10 dB/cm). Theoretical calculations indicated that the power distribution among all the modes supported by these structures reaches its steady-state value after a propagation length of only 1 mm. Accordingly, the measured loss rates were found to be almost independent of which mode was initially excited. The excellent agreement between theory and experiment suggests that measurements on low-loss waveguides would confirm the predicted loss rates.
Electromagnetic microinstabilities in tokamak plasmas using a global spectral approach
Energy Technology Data Exchange (ETDEWEB)
Falchetto, G. L
2002-03-01
Electromagnetic microinstabilities in tokamak plasmas are studied by means of a linear global eigenvalue numerical code. The code is the electromagnetic extension of an existing electrostatic global gyrokinetic spectral toroidal code, called GLOGYSTO. Ion dynamics is described by the gyrokinetic equation, so that ion finite Larmor radius effects are taken into account to all orders. Non-adiabatic electrons are included in the model, with passing particles described by the drift-kinetic equation and trapped particles through the bounce-averaged drift-kinetic equation. A low-frequency electromagnetic perturbation is applied to a low (but finite) β plasma (where the parameter β is the ratio of plasma pressure to magnetic pressure); thus, the parallel perturbations of the magnetic field are neglected. The system is closed by the quasi-neutrality equation and the parallel component of Ampere's law. The formulation is applied to a large aspect ratio toroidal configuration with circular shifted surfaces. Such a simple configuration enables one to derive the gyrocenter trajectories analytically. The system is solved in Fourier space, taking advantage of a decomposition adapted to the toroidal geometry. The major contributions of this thesis are as follows. The electromagnetic effects on toroidal Ion Temperature Gradient driven (ITG) modes are studied. The stabilization of these modes with increasing β, as predicted in previous work, is confirmed. The inclusion of trapped electron dynamics enables the study of its coupling to the ITG modes and of Trapped Electron Modes (TEM). The effects of finite β are considered together with those of different magnetic shear profiles and of the Shafranov shift. The threshold for the destabilization of an electromagnetic mode is identified. Moreover, the global formulation yields for the first time the radial structure of this so-called Alfvenic Ion Temperature Gradient (AITG) mode. The stability of the
Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations
Choi, Suk-Jin; Lee, Dong-Kyou
2016-06-01
This study investigated simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were unreasonably simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. In comparison with the experiment using the spectral nudging method, the strengthened wind speed was mainly modulated by large-scale flow greater than approximately 1000 km in the model domain. The spurious formation and undesirable tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between large-scale flow in the model domain and large-scale boundary forcing modified by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. This reveals that, in downscaling from large-scale fields for regional climate simulations, the scale interaction between model-generated regional features and forced large-scale fields should be considered, and spectral nudging is a desirable method for such downscaling.
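The essence of spectral nudging, relaxing only the large-scale part of the model state toward the driving field while leaving smaller scales free to develop, can be sketched in 1D. The cutoff scale and relaxation coefficient below are illustrative assumptions; real RCMs apply this to selected variables and vertical levels in 2D.

```python
import numpy as np

def spectral_nudge(model, driving, dx, cutoff_km=1000.0, alpha=0.1):
    """One nudging step: relax only wavelengths longer than cutoff_km
    toward the driving (e.g. reanalysis) field; small scales stay free."""
    k = np.fft.rfftfreq(model.size, d=dx)        # spatial frequency, cycles per km
    large_scale = k < 1.0 / cutoff_km            # modes with wavelength > cutoff
    M, D = np.fft.rfft(model), np.fft.rfft(driving)
    M[large_scale] += alpha * (D[large_scale] - M[large_scale])
    return np.fft.irfft(M, n=model.size)

dx = 50.0                                        # 50 km grid spacing (as in the CCAM record above)
x = np.arange(256) * dx
driving = np.sin(2 * np.pi * x / 6400.0)         # planetary-scale wave in the driving data
rng = np.random.default_rng(2)
model = driving + 0.8 * np.sin(2 * np.pi * x / 6400.0 + 1.0) + 0.3 * rng.normal(size=x.size)

for _ in range(50):                              # repeated steps pull large scales into line
    model = spectral_nudge(model, driving, dx)
# Large scales now track the driving field; grid-scale detail survives.
```

After the loop the phase-shifted planetary wave error has been nudged away, while the grid-scale variability the regional model "adds" is untouched — the scale-selective behavior the two abstracts above rely on.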
A domain decomposition method for pseudo-spectral electromagnetic simulations of plasmas
International Nuclear Information System (INIS)
Vay, Jean-Luc; Haber, Irving; Godfrey, Brendan B.
2013-01-01
Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates the solution analytically over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization owing to the global communications associated with global FFTs over the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses, and on Particle-In-Cell simulations of wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell's equations and the finite speed of light to limit the communication of data within guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases that have been presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
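The analytic-in-time integration such solvers exploit is easiest to see in 1D: for advection at speed c, each Fourier mode simply rotates in phase, so a single step of any size is exact. This is a schematic numpy sketch of that property, not the paper's Maxwell solver.

```python
import numpy as np

# Pseudo-spectral step for 1D advection u_t = -c u_x:
# in Fourier space, u_k(t + dt) = u_k(t) * exp(-i c k dt), exact for any dt.
n, L, c = 256, 2 * np.pi, 1.0
x = np.arange(n) * L / n
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi

u0 = np.exp(-20 * (x - np.pi) ** 2)   # smooth Gaussian pulse
dt = 1.5                              # far beyond any explicit CFL limit

u_hat = np.fft.fft(u0) * np.exp(-1j * c * k * dt)   # analytic phase advance per mode
u = np.real(np.fft.ifft(u_hat))

# Exact solution: the pulse translated by c*dt with periodic wrap-around.
d = (x - c * dt - np.pi + L / 2) % L - L / 2
exact = np.exp(-20 * d ** 2)
print(f"max error after one giant step: {np.max(np.abs(u - exact)):.1e}")
```

There is no time-step restriction and essentially no dispersion error, which is the "extraordinary precision" the abstract refers to; the parallelization difficulty arises because the FFT above couples every grid point to every other.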
A new approach to passivity preserving model reduction : the dominant spectral zero method
Ionutiu, R.; Rommes, J.; Antoulas, A.C.; Roos, J.; Costa, L.R.J.
2010-01-01
A new model reduction method for circuit simulation is presented, which preserves passivity by interpolating dominant spectral zeros. These are computed as poles of an associated Hamiltonian system, using an iterative solver: the subspace accelerated dominant pole algorithm (SADPA). Based on a
A sparse-mode spectral method for the simulation of turbulent flows
International Nuclear Information System (INIS)
Meneguzzi, M.; Politano, H.; Pouquet, A.; Zolver, M.
1996-01-01
We propose a new algorithm belonging to the family of sparse-mode spectral methods to simulate turbulent flows. In this method the number of retained Fourier modes increases with wavenumber k more slowly than k^(D-1) in dimension D, while retaining the advantage of the fast Fourier transform. Examples of applications of the algorithm are given for the one-dimensional Burgers' equation and two-dimensional incompressible MHD flows.
Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme
2018-06-01
Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The contribution of this paper is twofold. First, it proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which in the context of multi-contrast MRI data acquisition allows the imaging sequence parameters to be set appropriately. Second, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. The resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis on the proving of a Danish pastry.
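A single-voxel sketch of the proportion-estimation idea follows: the multi-contrast signal is modeled as a mixture of pure-tissue signatures, and proportions are recovered by nonnegative least squares with a soft sum-to-one constraint. The signatures and noise level are invented for illustration, and the paper's spatial regularisation term is omitted.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical signatures: one column per tissue, one row per acquisition contrast.
A = np.array([[1.00, 0.20, 0.55],
              [0.30, 0.90, 0.60],
              [0.10, 0.40, 0.95],
              [0.70, 0.10, 0.30]])
s_true = np.array([0.2, 0.5, 0.3])          # true partial-volume proportions
rng = np.random.default_rng(3)
y = A @ s_true + 0.005 * rng.normal(size=4)  # noisy voxel signal

# Enforce sum-to-one softly by appending a heavily weighted row of ones;
# nnls handles the nonnegativity of the proportions.
w = 10.0
A_aug = np.vstack([A, w * np.ones((1, 3))])
y_aug = np.append(y, w * 1.0)
s_hat, _ = nnls(A_aug, y_aug)
print(np.round(s_hat, 2))                    # close to [0.2, 0.5, 0.3]
```

The theoretical analysis mentioned in the abstract concerns exactly the conditioning of the matrix `A`: sequence parameters that make tissue signatures nearly collinear make this inversion noise-sensitive.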
specsim: A Fortran-77 program for conditional spectral simulation in 3D
Yao, Tingting
1998-12-01
A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
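The unconditional core of the Fourier integral method, drawing Fourier coefficients from a target covariance spectrum and inverse-transforming, can be sketched in 1D with numpy. This uses the common random-phase simplification and omits the program's iterative conditioning to local data.

```python
import numpy as np

def spectral_simulation_1d(n, dx, corr_len, rng):
    """Unconditional spectral simulation: amplitudes from a target (here
    Gaussian-covariance) spectrum, random phases, inverse FFT to a field."""
    k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi
    spectrum = np.exp(-((k * corr_len) ** 2) / 4)   # spectral density of a Gaussian covariance
    phase = rng.uniform(0, 2 * np.pi, size=k.size)
    coeff = np.sqrt(spectrum) * np.exp(1j * phase)
    coeff[0] = 0.0                                  # zero-mean field
    field = np.fft.irfft(coeff, n=n)
    return field / field.std()                      # normalize to unit variance

rng = np.random.default_rng(4)
z = spectral_simulation_1d(4096, 1.0, corr_len=20.0, rng=rng)

# The realization inherits the prescribed correlation structure:
lag1 = np.corrcoef(z[:-1], z[1:])[0, 1]
lag200 = np.corrcoef(z[:-200], z[200:])[0, 1]
print(f"lag-1 corr ≈ {lag1:.2f}, lag-200 corr ≈ {lag200:.2f}")
```

Conditioning, as specsim does it, then amounts to iteratively adjusting the phases so the realization honors the data values at sampled locations while keeping this amplitude spectrum.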
Spectral Bio-indicator Simulations for Tracking Photosynthetic Activities in a Corn Field
Cheng, Yen-Ben; Middleton, Elizabeth M.; Huemmrich, K. Fred; Zhang, Qingyuan; Corp, Lawrence; Campbell, Petya; Kustas, William
2011-01-01
Accurate assessment of vegetation canopy optical properties plays a critical role in monitoring natural and managed ecosystems under environmental change. In this context, radiative transfer (RT) models simulating vegetation canopy reflectance have been demonstrated to be a powerful tool for understanding and estimating spectral bio-indicators. In this study, two narrow-band spectroradiometers were used to acquire observations over corn canopies for two summers. These in situ spectral data were then used to validate a two-layer Markov chain-based canopy reflectance model (ACRM) for simulating the Photochemical Reflectance Index (PRI), which has been widely used in recent studies of vegetation photosynthetic light use efficiency (LUE). The in situ PRI derived from narrow-band hyperspectral reflectance exhibited clear responses to: 1) viewing geometry, which alters the observed light environment; and 2) seasonal variation corresponding to the growth stage. The RT model successfully simulated the responses to variable viewing geometry. The best simulations were obtained when the model was run in two-layer mode with sunlit leaves as the upper layer and shaded leaves as the lower layer. Simulated PRI values yielded much better correlations with in situ observations when the cornfield was dominated by green foliage during the early growth, vegetative and reproductive stages (r = 0.78 to 0.86) than in the later senescent stage (r = 0.65). Further sensitivity analyses showed the important influences of leaf area index (LAI) and the sunlit/shaded ratio on PRI observations.
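The PRI itself is a simple two-band index, PRI = (R531 - R570) / (R531 + R570). A sketch of computing it from a sampled reflectance curve follows; the reflectance curve below is a toy Gaussian stand-in, not corn canopy data.

```python
import numpy as np

def pri(wavelengths, reflectance):
    """Photochemical Reflectance Index from a hyperspectral reflectance curve,
    using the sampled bands nearest 531 nm and 570 nm."""
    r531 = reflectance[np.argmin(np.abs(wavelengths - 531))]
    r570 = reflectance[np.argmin(np.abs(wavelengths - 570))]
    return (r531 - r570) / (r531 + r570)

wl = np.arange(400, 801, 1.0)   # 1 nm sampling over 400-800 nm
# Toy curve: weak green reflectance peak near 550 nm on a flat baseline.
refl = 0.05 + 0.04 * np.exp(-((wl - 550) ** 2) / (2 * 30.0 ** 2))
print(f"PRI = {pri(wl, refl):.3f}")   # small positive value for this toy curve
```

Because both bands sit close together spectrally, PRI is sensitive to the subtle 531 nm xanthophyll signal rather than broad canopy brightness, which is also why viewing geometry and the sunlit/shaded leaf fractions influence it so strongly.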
Energy Technology Data Exchange (ETDEWEB)
Choi, Yong Hee; Joo, Han Gyu, E-mail: joohan@snu.ac.kr
2013-10-15
Highlights: • A multiscale defect simulation system tailored for neutron damage estimation is introduced. • The new recoil spectrum code can use the most recent ENDF-B/VII nuclear data. • The high energy cascades are broken into subcascades using the INCAS model. • OKMC simulation provides data for shear stress estimation using a dislocation dynamics formula. • Demonstration is made with a fusion blanket design having different spectral shifters. -- Abstract: A multiscale material defect simulation established to evaluate neutron-induced damage to metals is applied to an estimation of material degradation in helium-cooled molten lithium blankets, in which four different spectral shifter materials are examined as a means of maximizing the tritium breeding ratio through proper shaping of the neutron spectrum. The multiscale system consists of a Monte Carlo neutron transport code, a recoil spectrum generation code, a molecular dynamics code, a high energy cascade breakup model, an object kinetic Monte Carlo (OKMC) code, and a simple formula as the shear stress estimator. The average recoil energy of the primary knock-on atoms, the total concentration of defects, the average defect sizes, and the increase in shear stress after a certain irradiation time are calculated for each spectral shifter. Among the four proposed materials, B4C, Be, graphite and TiC, B4C shows the best shielding performance in terms of neutron radiation hardening. The result for the increase in shear stress after 100 days of irradiation indicates that the increased shear stress is 1.5 GPa for B4C, which is about 40% less than that of the worst one, the graphite spectral shifter. The other damage indicators show consistent trends.
Premixed Turbulent Flames and Spectral Approach Flammes turbulentes de prémélange Approche spectrale
Directory of Open Access Journals (Sweden)
Mathieu J.
2006-11-01
Full Text Available Many recent papers address the scientific and technical behaviour of flames developing in a turbulent medium. On the whole the problem is very complex: the chemical reaction develops inside a turbulent flow, which requires a double scaling, since characteristic times and characteristic lengths have to be defined for both the flame and the turbulent field. To support these comparisons, a spectral analysis of the turbulent field is proposed, widely backed by previous experimental data. The flame can be acted upon by an external turbulent field, which supposes the flame to be thicker than the smallest turbulent structures associated with the Kolmogorov scale. With increasing Reynolds number, turbulent structures penetrate the flame front and can disturb the preheat zone or even the chemical zone. The passage from a flame-front regime to a chemical reaction developing in a volume is thereby emphasized. As the reaction rate decreases, the domain affected by the reaction increases: chemical reactions generate a segregation process, whereas the chemical species are mixed by the turbulent motion. In premixed combustion engines a large range of operating points can be defined; the diagram usually used is that of Barrère and Borghi. Several modeling methods should probably be developed according to the positions of the operating points in the diagram. Modeling methods are not presented herein; however, the existence of typical structures connected with the architecture of the combustion chamber could be examined in a subsequent paper. The flame front can be subjected to distorting effects due to isolated vortices or to a sequence of vortices, the latter case having been touched upon above. Using a spectral approach, no discrimination has to be made regarding the sizes of these vortices, which could lead to new modeling methods if restricted shapes of vortices are accepted. By using a spectral method
Reconstruction of solar spectral surface UV irradiances using radiative transfer simulations.
Lindfors, Anders; Heikkilä, Anu; Kaurola, Jussi; Koskela, Tapani; Lakkala, Kaisa
2009-01-01
UV radiation exerts several effects concerning life on Earth, and spectral information on the prevailing UV radiation conditions is needed in order to study each of these effects. In this paper, we present a method for reconstruction of solar spectral UV irradiances at the Earth's surface. The method, which is a further development of an earlier published method for reconstruction of erythemally weighted UV, relies on radiative transfer simulations, and takes as input (1) the effective cloud optical depth as inferred from pyranometer measurements of global radiation (300-3000 nm); (2) the total ozone column; (3) the surface albedo as estimated from measurements of snow depth; (4) the total water vapor column; and (5) the altitude of the location. Reconstructed daily cumulative spectral irradiances at Jokioinen and Sodankylä in Finland are, in general, in good agreement with measurements. The mean percentage difference, for instance, is mostly within ±8%, and the root mean square of the percentage difference is around 10% or below for wavelengths over 310 nm and daily minimum solar zenith angles (SZA) less than 70°. In this study, we used pseudospherical radiative transfer simulations, which were shown to improve the performance of our method under large SZA (low Sun).
Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J
2017-05-01
The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
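The differential quadrature technique mentioned above amounts to collocating the cable equations with a global differentiation matrix. As a minimal sketch (not the paper's code; the toy problem and boundary data are our own choices, picked to have a closed-form solution), the standard Chebyshev differentiation matrix can solve a steady-state cable-type equation u'' - u = 0 with spectral accuracy:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal entries via the negative-sum trick
    return D, x

# Toy steady-state cable-type problem: u'' - u = 0 on [-1, 1],
# Dirichlet data u(-1) = u(1) = cosh(1); exact solution u(x) = cosh(x).
N = 16
D, x = cheb(N)
A = D @ D - np.eye(N + 1)                # collocated operator d^2/dx^2 - 1
b = np.zeros(N + 1)
for k in (0, N):                         # enforce the boundary conditions
    A[k, :] = 0.0
    A[k, k] = 1.0
    b[k] = np.cosh(1.0)
u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.cosh(x)))     # spectral accuracy: tiny already at N = 16
```

A finite-difference scheme needs hundreds of points for comparable accuracy; this is the efficiency argument for spectral collocation on neural cable models.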
Simulating return signals of a spaceborne high-spectral resolution lidar channel at 532 nm
Xiao, Yu; Binglong, Chen; Min, Min; Xingying, Zhang; Lilin, Yao; Yiming, Zhao; Lidong, Wang; Fu, Wang; Xiaobo, Deng
2018-06-01
A high spectral resolution lidar (HSRL) system employs a narrow spectral filter to separate the particulate (cloud/aerosol) and molecular scattering components in lidar return signals, which improves the quality of the retrieved cloud/aerosol optical properties. To better develop a future spaceborne HSRL system, a novel simulation technique was developed to simulate spaceborne HSRL return signals at 532 nm using the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/aerosol extinction coefficient product and numerical weather prediction data. To validate the simulated data, a mathematical particulate extinction coefficient retrieval method for spaceborne HSRL return signals is described here. We compare particulate extinction coefficient profiles from the CALIPSO operational product with those retrieved from the simulated spaceborne HSRL data, and the two agree well. Further uncertainty analysis shows that the relative uncertainties are acceptable for retrieving the optical properties of clouds and aerosols. These results indicate that the return signals of the spaceborne HSRL molecular channel at 532 nm will be suitable for developing operational algorithms supporting a future spaceborne HSRL system.
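The core of such a simulator is the single-scattering lidar equation for the molecular channel: backscatter attenuated by the two-way transmission from the satellite down to each bin. The sketch below is illustrative only; the profile shapes, the satellite altitude, and all constants are our assumptions, not values from the paper:

```python
import numpy as np

# Illustrative molecular-channel return for a downward-looking lidar.
z = np.linspace(0.0, 20.0, 201)                    # altitude grid [km]
dz = z[1] - z[0]
beta_mol = 1.5e-3 * np.exp(-z / 8.0)               # molecular backscatter [1/km/sr]
sigma_mol = (8.0 * np.pi / 3.0) * beta_mol         # molecular extinction [1/km]
sigma_par = 0.1 * np.exp(-((z - 3.0) / 1.0) ** 2)  # an aerosol layer near 3 km [1/km]

# Two-way transmission from the top of the grid down to each altitude.
tau = np.cumsum((sigma_mol + sigma_par)[::-1]) * dz  # optical depth, top downward
T2 = np.exp(-2.0 * tau)[::-1]                        # back on the ascending-z grid

sat_alt = 700.0                                    # assumed satellite altitude [km]
R = sat_alt - z                                    # range to each bin [km]
P_mol = beta_mol * T2 / R ** 2                     # molecular return (arbitrary units)
```

In a real HSRL simulator the particulate extinction would come from the CALIPSO product and the molecular profile from the weather-model density, but the signal model has this same structure.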
Simulating charge transport to understand the spectral response of Swept Charge Devices
Athiray, P. S.; Sreekumar, P.; Narendranath, S.; Gow, J. P. D.
2015-11-01
Context. Swept Charge Devices (SCD) are novel X-ray detectors optimized for improved spectral performance without any demand for active cooling. The Chandrayaan-1 X-ray Spectrometer (C1XS) experiment onboard the Chandrayaan-1 spacecraft used an array of SCDs to map the global surface elemental abundances on the Moon using the X-ray fluorescence (XRF) technique. The successful demonstration of SCDs in C1XS spurred an enhanced version of the spectrometer on Chandrayaan-2 using the next-generation SCD sensors. Aims: The objective of this paper is to demonstrate validation of a physical model developed to simulate X-ray photon interaction and charge transportation in a SCD. The model helps to understand and identify the origin of individual components that collectively contribute to the energy-dependent spectral response of the SCD. Furthermore, the model provides completeness to various calibration tasks, such as generating spectral matrices (RMFs - redistribution matrix files), estimating efficiency, optimizing event selection logic, and maximizing event recovery to improve photon-collection efficiency in SCDs. Methods: Charge generation and transportation in the SCD at different layers related to channel stops, field zones, and field-free zones due to photon interaction were computed using standard drift and diffusion equations. Charge collected in the buried channel due to photon interaction in different volumes of the detector was computed by assuming a Gaussian radial profile of the charge cloud. The collected charge was processed further to simulate both diagonal clocking read-out, which is a novel design exclusive for SCDs, and event selection logic to construct the energy spectrum. Results: We compare simulation results of the SCD CCD54 with measurements obtained during the ground calibration of C1XS and clearly demonstrate that our model reproduces all the major spectral features seen in calibration data. We also describe our understanding of interactions at
Simulation and Non-Simulation Based Human Reliability Analysis Approaches
Energy Technology Data Exchange (ETDEWEB)
Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Shirley, Rachel Elizabeth [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2014-12-01
Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.
A general spectral method for the numerical simulation of one-dimensional interacting fermions
Clason, Christian; von Winckel, Gregory
2012-08-01
This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach.
New version program summary
Program title: assembleFermiMatrix
Catalogue identifier: AEKO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 332
No. of bytes in distributed program, including test data, etc.: 5418
Distribution format: tar.gz
Programming language: MATLAB/GNU Octave, Python
Computer: Any architecture supported by MATLAB, GNU Octave or Python
Operating system: Any supported by MATLAB, GNU Octave or Python
RAM: Depends on the data
Classification: 4.3, 2.2
External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+
Catalogue identifier of previous version: AEKO_v1_0
Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405
Does the new version supersede the previous version?: Yes
Nature of problem: The direct numerical
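The antisymmetric-basis idea can be sketched in a few lines for two spinless fermions in an infinite well on [0, 1] (ħ = m = 1): expand in antisymmetrized products of sine modes and assemble the Hamiltonian. This toy is not the distributed program; the interaction form and all parameters are our own choices, and g = 0 is kept so the ground-state energy is exactly (1 + 4)π²/2:

```python
import numpy as np
from itertools import combinations

M = 8                                              # single-particle sine modes
E1 = 0.5 * (np.pi * np.arange(1, M + 1)) ** 2      # energies n^2 pi^2 / 2
pairs = list(combinations(range(M), 2))            # antisymmetric basis, m < n

Nx = 201
x = np.linspace(0.0, 1.0, Nx)
dx = x[1] - x[0]
w = np.full(Nx, dx); w[0] = w[-1] = 0.5 * dx       # trapezoid quadrature weights
phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, M + 1), x))

g = 0.0                                            # pair-interaction strength (g = 0: exact)
X1, X2 = np.meshgrid(x, x, indexing="ij")
V = g * np.exp(-((X1 - X2) ** 2) / 0.01)           # assumed soft Gaussian repulsion
W2 = np.outer(w, w)

# Antisymmetrized ("Slater determinant") basis functions on the 2D grid.
slater = [(np.outer(phi[m], phi[n]) - np.outer(phi[n], phi[m])) / np.sqrt(2.0)
          for (m, n) in pairs]

H = np.zeros((len(pairs), len(pairs)))
for a, (m, n) in enumerate(pairs):
    H[a, a] += E1[m] + E1[n]                       # kinetic term: diagonal in this basis
    for b in range(len(pairs)):
        H[a, b] += np.sum(slater[a] * V * slater[b] * W2)   # potential matrix element

E0 = np.linalg.eigvalsh(H)[0]                      # for g = 0: (1 + 4) pi^2 / 2
```

Building the basis from m < n pairs enforces the Pauli principle at the discretization level, which is the point of the paper's adapted Galerkin method; turning on g > 0 adds the interaction via quadrature.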
Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems
Zuchowski, Loïc; Brun, Michael; De Martin, Florent
2018-05-01
The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in a soil/structure interaction problem. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.
Spectral indices of cardiovascular adaptations to short-term simulated microgravity exposure
Patwardhan, A. R.; Evans, J. M.; Berk, M.; Grande, K. J.; Charles, J. B.; Knapp, C. F.
1995-01-01
We investigated the effects of exposure to microgravity on the baseline autonomic balance in cardiovascular regulation using spectral analysis of cardiovascular variables measured during supine rest. Heart rate, arterial pressure, radial flow, thoracic fluid impedance and central venous pressure were recorded from nine volunteers before and after simulated microgravity, produced by 20 hours of 6 degrees head down bedrest plus furosemide. Spectral powers increased after simulated microgravity in the low frequency region (centered at about 0.03 Hz) in arterial pressure, heart rate and radial flow, and decreased in the respiratory frequency region (centered at about 0.25 Hz) in heart rate. Reduced heart rate power in the respiratory frequency region indicates reduced parasympathetic influence on the heart. A concurrent increase in the low frequency power in arterial pressure, heart rate, and radial flow indicates increased sympathetic influence. These results suggest that the baseline autonomic balance in cardiovascular regulation is shifted towards increased sympathetic and decreased parasympathetic influence after exposure to short-term simulated microgravity.
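The spectral indices described here are band powers: the PSD integrated over a low-frequency (~0.03 Hz) band and a respiratory (~0.25 Hz) band. A minimal periodogram version, using a synthetic "heart rate" that stands in for the recorded variables (band edges and amplitudes are illustrative, not the study's values):

```python
import numpy as np

fs = 4.0                                      # assumed resampled rate [Hz]
N = 4000                                      # 1000 s record
t = np.arange(N) / fs
hr = 70.0 + 1.0 * np.sin(2 * np.pi * 0.03 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t)

x = hr - hr.mean()
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * N)  # one-sided periodogram [units^2/Hz]
psd[1:-1] *= 2.0                              # fold in negative frequencies
f = np.fft.rfftfreq(N, d=1.0 / fs)
df = f[1] - f[0]

def band_power(lo, hi):
    sel = (f >= lo) & (f < hi)
    return psd[sel].sum() * df

lf = band_power(0.01, 0.06)                   # low-frequency power  (~0.5 for the amp-1 sine)
hf = band_power(0.15, 0.35)                   # respiratory power    (~0.125 for the amp-0.5 sine)
```

A shift toward sympathetic dominance, as reported in the abstract, would show up as lf rising while hf falls; in practice a Welch estimate with averaging would replace the raw periodogram.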
Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.
2017-10-01
Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each spatial pixel gathers the spectral reflectance information of the scene. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
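The three-stage chain can be rendered in plain numpy on a tiny synthetic cube. This is an illustration of the pipeline's logic, not the RVC-CAL implementation: synthetic labels with one injected error stand in for the SVM stage, and all sizes and parameters are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, B = 8, 8, 20                                   # tiny hyperspectral cube
cube = np.empty((H, W, B))
cube[:, :4, :] = 1.0 + 0.05 * rng.standard_normal((H, 4, B))   # class-0 spectra
cube[:, 4:, :] = 3.0 + 0.05 * rng.standard_normal((H, 4, B))   # class-1 spectra

# Stage 1: PCA -> one-band representation (first principal-component score).
X = cube.reshape(-1, B)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
one_band = (Xc @ Vt[0]).reshape(H, W)

# Stage 2: pixel-wise labels with one injected error (stands in for the SVM).
labels = np.repeat((np.arange(W) >= 4).astype(int)[None, :], H, axis=0)
labels[3, 1] = 1                                     # a misclassified pixel

# Stage 3: KNN filter on (row, col, scaled spectral score), majority vote.
rows, cols = np.mgrid[0:H, 0:W]
feats = np.stack([rows, cols, one_band / one_band.std()], axis=-1).reshape(-1, 3)
flat = labels.ravel()
K = 5
filtered = np.empty_like(flat)
for i in range(len(feats)):
    d = np.linalg.norm(feats - feats[i], axis=1)
    nn = np.argsort(d)[:K]                           # K nearest, pixel itself included
    filtered[i] = np.bincount(flat[nn]).argmax()
filtered = filtered.reshape(H, W)                    # the isolated error is voted away
```

Because the KNN distance mixes spatial coordinates with the PCA score, an isolated label error surrounded by spatially and spectrally similar pixels is corrected, which is exactly the refinement role the abstract assigns to the KNN stage.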
International Nuclear Information System (INIS)
Majaron, B; Milanic, M
2010-01-01
Pulsed photothermal profiling involves reconstruction of temperature depth profile induced in a layered sample by single-pulse laser exposure, based on transient change in mid-infrared (IR) emission from its surface. Earlier studies have indicated that in watery tissues, featuring a pronounced spectral variation of mid-IR absorption coefficient, analysis of broadband radiometric signals within the customary monochromatic approximation adversely affects profiling accuracy. We present here an experimental comparison of pulsed photothermal profiling in layered agar gel samples utilizing a spectrally composite kernel matrix vs. the customary approach. By utilizing a custom reconstruction code, the augmented approach reduces broadening of individual temperature peaks to 14% of the absorber depth, in contrast to 21% obtained with the customary approach.
An abstract approach to some spectral problems of direct sum differential operators
Directory of Open Access Journals (Sweden)
Maksim S. Sokolov
2003-07-01
Full Text Available In this paper, we study the common spectral properties of abstract self-adjoint direct sum operators, considered in a direct sum Hilbert space. Applications of such operators arise in the modelling of processes of multi-particle quantum mechanics, quantum field theory and, specifically, in multi-interval boundary problems of differential equations. We show that a direct sum operator does not depend in a straightforward manner on the separate operators involved. That is, given a set of self-adjoint operators forming a direct sum operator, we show how the spectral representation for this operator depends on the spectral representations for the individual operators (the coordinate operators) involved in forming this sum operator. In particular it is shown that this problem is not immediately solved by taking a direct sum of the spectral properties of the coordinate operators. Primarily, these results are to be applied to operators generated by a multi-interval quasi-differential system studied in the earlier works of Ashurov, Everitt, Gesztesy, Kirsch, Markus and Zettl. The abstract approach in this paper indicates the need for further development of spectral theory for direct sum differential operators.
Multiscale simulation approach for battery production systems
Schönemann, Malte
2017-01-01
Addressing the challenge of improving battery quality while reducing high costs and environmental impacts of the production, this book presents a multiscale simulation approach for battery production systems along with a software environment and an application procedure. Battery systems are among the most important technologies of the 21st century since they are enablers for the market success of electric vehicles and stationary energy storage solutions. However, the performance of batteries so far has limited possible applications. Addressing this challenge requires an interdisciplinary understanding of dynamic cause-effect relationships between processes, equipment, materials, and environmental conditions. The approach in this book supports the integrated evaluation of improvement measures and is usable for different planning horizons. It is applied to an exemplary battery cell production and module assembly in order to demonstrate the effectiveness and potential benefits of the simulation.
Nonstationary signals phase-energy approach-theory and simulations
Klein, R; Braun, S; 10.1006/mssp.2001.1398
2001-01-01
Modern time-frequency methods are intended to deal with a variety of nonstationary signals. One specific class, prevalent in the area of rotating machines, is that of harmonic signals of varying frequency and amplitude. This paper presents a new adaptive phase-energy (APE) approach for time-frequency representation of varying harmonic signals. It is based on the concept of phase (frequency) paths and the instantaneous power spectral density (PSD). It is this path which represents the dynamic behaviour of the system generating the observed signal. The proposed method utilises dynamic filters based on an extended Nyquist theorem, enabling extraction of signal components with optimal signal-to-noise ratio. The APE detects the most energetic harmonic components (frequency paths) in the analysed signal. Tests on simulated signals show the superiority of the APE in resolution and resolving power as compared to the STFT and wavelet packet decomposition. The dynamic filters also enable the reconstruction of the ...
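The "frequency path" notion can be illustrated by tracking the most energetic bin of a short-time DFT over a swept harmonic. Note this shows only the path concept; the APE method itself uses adaptive dynamic filters, which are not reproduced here, and the sweep parameters are our own:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
f_inst = 50.0 + 100.0 * t                        # linear sweep, 50 -> 250 Hz
x = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)   # phase = running integral of f_inst

frame, hop = 256, 128
win = np.hanning(frame)
freqs = np.fft.rfftfreq(frame, 1.0 / fs)
path = []
for start in range(0, len(x) - frame, hop):
    spec = np.abs(np.fft.rfft(x[start:start + frame] * win))
    path.append(freqs[np.argmax(spec)])          # most energetic bin in this frame
path = np.array(path)                            # rises with the sweep
```

The fixed-resolution STFT quantizes the path to bin spacing fs/frame; the APE's dynamic filters are precisely a way to follow such a path with better frequency resolution and signal-to-noise ratio.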
Direct numerical simulation of the Rayleigh-Taylor instability with the spectral element method
International Nuclear Information System (INIS)
Zhang Xu; Tan Duowang
2009-01-01
A novel method is proposed to simulate Rayleigh-Taylor instabilities using a specially-developed unsteady three-dimensional high-order spectral element method code. The numerical model used consists of Navier-Stokes equations and a transport-diffusive equation. The code is first validated with the results of linear stability perturbation theory. Then several characteristics of the Rayleigh-Taylor instabilities are studied using this three-dimensional unsteady code, including instantaneous turbulent structures and statistical turbulent mixing heights under different initial wave numbers. These results indicate that turbulent structures of Rayleigh-Taylor instabilities are strongly dependent on the initial conditions. The results also suggest that a high-order numerical method should provide the capability of simulating small scale fluctuations of Rayleigh-Taylor instabilities of turbulent flows. (authors)
Detailed spectral simulations in support of PBFA-Z dynamic hohlraum Z-pinch experiments
International Nuclear Information System (INIS)
MacFarlane, J.J.; Wang, P.; Derzon, M.S.; Haill, A.; Nash, T.J.; Peterson, D.L.
1997-01-01
In PBFA-Z dynamic hohlraum Z-pinch experiments, 16-18 MA of current is delivered to a load comprised of a tungsten wire array surrounding a low-density cylindrical CH foam. The magnetic field accelerates the W plasma radially inward at velocities of ∼40-60 cm/μs. The W plasma impacts the foam, generating a radiation field with high radiation temperature T_R which diffuses into the foam. The authors are investigating several types of spectral diagnostics which can be used to characterize the time-dependent conditions in the foam. In addition, they are examining the potential ramifications of axial jetting on the interpretation of axial x-ray diagnostics. In the analysis, results from 2-D radiation-magnetohydrodynamics simulations are post-processed using a hybrid spectral analysis code in which low-Z material is treated using a detailed collisional-radiative atomic model, while high-Z material is modeled using LTE UTA (unresolved transition array) opacities. They will present results from recent simulations and discuss ramifications for x-ray diagnostics
Novel Simulation Approaches for Smart Grids
Directory of Open Access Journals (Sweden)
Eleftherios Tsampasis
2016-06-01
Full Text Available The complexity of the power grid, in conjunction with the ever increasing demand for electricity, creates the need for efficient analysis and control of the power system. The evolution of the legacy system towards the new smart grid intensifies this need due to the large number of sensors and actuators that must be monitored and controlled, the new types of distributed energy sources that need to be integrated and the new types of loads that must be supported. At the same time, integration of human-activity awareness into the smart grid is emerging and this will allow the system to monitor, share and manage information and actions on the business, as well as the real world. In this context, modeling and simulation is an invaluable tool for system behavior analysis, energy consumption estimation and future state prediction. In this paper, we review current smart grid simulators and approaches for building and user behavior modeling, and present a federated smart grid simulation framework, in which building, control and user behavior modeling and simulation are decoupled from power or network simulators and implemented as discrete components. This framework enables evaluation of the interactions between the communication infrastructure and the power system taking into account the human activities, which are at the focus of emerging energy-related applications that aim to shape user behavior. Validation of the key functionality of the proposed framework is also presented.
Hirose, Misa; Toyota, Saori; Tsumura, Norimichi
2018-02-01
In this research, we evaluate the visibility of age spots and freckles while varying the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image with varied blood volume. We acquire the concentration distributions of melanin, hemoglobin and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of pigmentation using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that the visibility became lower as the blood volume increased. However, the facial color images show that a specific blood volume reduces the visibility of the actual pigmentation.
Co-clustering Analysis of Weblogs Using Bipartite Spectral Projection Approach
DEFF Research Database (Denmark)
Xu, Guandong; Zong, Yu; Dolog, Peter
2010-01-01
Web clustering is an approach for aggregating Web objects into various groups according to underlying relationships among them. Finding co-clusters of Web objects is an interesting topic in the context of Web usage mining, which is able to capture the underlying user navigational interest and content preference simultaneously. In this paper we present an algorithm using bipartite spectral clustering to co-cluster Web users and pages. The usage data of users visiting Web sites is modeled as a bipartite graph, and spectral clustering is then applied to the graph representation of the usage data. The proposed approach is evaluated by experiments performed on real datasets, and the impact of using various clustering algorithms is also investigated. Experimental results have demonstrated that the employed method can effectively reveal the subset aggregates of Web users and pages which ...
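Bipartite spectral co-clustering in the spirit of Dhillon's algorithm can be sketched on a toy user-page visit matrix: degree-normalize the matrix, take its SVD, and partition users and pages on the second singular vectors (a sign split stands in for k-means here; the data and all parameters are synthetic, not the paper's):

```python
import numpy as np

A = np.full((6, 8), 0.05)                  # weak background visits (6 users, 8 pages)
A[:3, :4] += 1.0                           # user group 1 visits page group 1
A[3:, 4:] += 1.0                           # user group 2 visits page group 2

# Normalized matrix D1^{-1/2} A D2^{-1/2}, as in spectral co-clustering.
D1 = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
D2 = np.diag(1.0 / np.sqrt(A.sum(axis=0)))
U, s, Vt = np.linalg.svd(D1 @ A @ D2)

user_cluster = (U[:, 1] > 0).astype(int)   # second left singular vector splits users
page_cluster = (Vt[1] > 0).astype(int)     # second right singular vector splits pages
```

Because users and pages are embedded by the same singular pair, the two partitions come out aligned: each user group is grouped with the pages it predominantly visits, which is the co-clustering property the abstract exploits.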
Directory of Open Access Journals (Sweden)
A. Ehrlich
2008-12-01
Full Text Available Arctic boundary-layer clouds were investigated with remote sensing and in situ instruments during the Arctic Study of Tropospheric Aerosol, Clouds and Radiation (ASTAR) campaign in March and April 2007. The clouds formed in a cold air outbreak over the open Greenland Sea. Besides the predominant mixed-phase clouds, pure liquid water and ice clouds were observed. Utilizing measurements of solar radiation reflected by the clouds, three methods to retrieve the thermodynamic phase of the cloud are introduced and compared. Two ice indices I_{S} and I_{P} were obtained by analyzing the spectral pattern of the cloud top reflectance in the near infrared (1500-1800 nm) wavelength range, which is characterized by ice and water absorption. While I_{S} analyzes the spectral slope of the reflectance in this wavelength range, I_{P} utilizes a principal component analysis (PCA) of the spectral reflectance. A third ice index I_{A} is based on the different side scattering of spherical liquid water particles and nonspherical ice crystals, which was recorded in simultaneous measurements of spectral cloud albedo and reflectance.
Radiative transfer simulations show that I_{S}, I_{P} and I_{A} range from 5 to 80, from 0 to 8, and from 1 to 1.25, respectively, with the lowest values indicating pure liquid water clouds and the highest values pure ice clouds. The spectral slope ice index I_{S} and the PCA ice index I_{P} are found to be strongly sensitive to the effective diameter of the ice crystals present in the cloud. Therefore, the identification of mixed-phase clouds requires a priori knowledge of the ice crystal dimension. The reflectance-albedo ice index I_{A} is mainly dominated by the uppermost cloud layer (τ < 1.5). Therefore, typical boundary-layer mixed-phase clouds with a liquid cloud top layer will
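A slope-type index like I_{S} boils down to a least-squares spectral slope over the 1500-1800 nm window, where ice absorbs more strongly than liquid water so an ice-cloud reflectance declines faster. The index definition and the two synthetic spectra below are illustrative only, not the paper's exact I_{S}:

```python
import numpy as np

wl = np.linspace(1500.0, 1800.0, 61)                 # wavelength grid [nm]
refl_water = 0.55 - 2.0e-4 * (wl - 1500.0)           # gentle spectral decline (water-like)
refl_ice = 0.50 - 6.0e-4 * (wl - 1500.0)             # steeper decline (ice-like)

def slope_index(wl, refl):
    slope = np.polyfit(wl, refl, 1)[0]               # fitted slope [1/nm]
    return -1e3 * slope / refl.mean()                # normalized; larger for ice

I_water = slope_index(wl, refl_water)
I_ice = slope_index(wl, refl_ice)                    # I_ice > I_water
```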
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectrum exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
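The two-step recipe reads directly into code: (1) color white Gaussian noise in the Fourier domain to a target spectrum, (2) map the colored Gaussian marginal through the Gaussian CDF and the inverse CDF of the desired distribution (exponential here). The spectrum shape and all parameters below are our own choices, not the paper's examples:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(42)
n = 1 << 14
f = np.fft.rfftfreq(n)                        # normalized frequency (fs = 1)
S = 1.0 / (1.0 + (f / 0.02) ** 2)             # assumed Lorentzian-like target spectrum

# Step 1: color a white Gaussian sample set to the target spectrum.
w = rng.standard_normal(n)
X = np.fft.irfft(np.fft.rfft(w) * np.sqrt(S), n)
X = (X - X.mean()) / X.std()                  # standardized colored Gaussian

# Step 2: inverse-transform the Gaussian marginal to exponential(1).
norm_cdf = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / np.sqrt(2.0))))
U = np.clip(norm_cdf(X), 1e-12, 1.0 - 1e-12)  # uniform marginal, still correlated
Y = -np.log(1.0 - U)                          # exponential marginal, colored
```

As the abstract notes, the nonlinear marginal transform distorts the power spectrum somewhat; Y keeps strong positive correlation at short lags but its spectrum only approximates S.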
A variational approach to nucleation simulation.
Piaggi, Pablo M; Valsson, Omar; Parrinello, Michele
2016-12-22
We study by computer simulation the nucleation of a supersaturated Lennard-Jones vapor into the liquid phase. The large free energy barrier to the transition makes the time scale of this process impossible to study by ordinary molecular dynamics simulations. Therefore we use a recently developed enhanced sampling method [Valsson and Parrinello, Phys. Rev. Lett. 113, 090601 (2014)] based on the variational determination of a bias potential. We differ from previous applications of this method in that the bias is constructed on the basis of the physical model provided by the classical theory of nucleation. We examine the technical problems associated with this approach. Our results are very satisfactory and will pave the way for calculating nucleation rates in many systems.
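The classical-nucleation-theory model that informs the bias is a one-line free energy: ΔG(n) = -n·Δμ + s·n^(2/3), whose maximum at the critical cluster size n* = (2s/(3Δμ))³ is the barrier. The reduced-unit parameter values below are illustrative:

```python
import numpy as np

dmu = 0.5                                  # per-particle free-energy gain of the new phase
s = 3.0                                    # effective surface-energy prefactor
n = np.linspace(1.0, 200.0, 19901)         # cluster-size grid, step 0.01
dG = -n * dmu + s * n ** (2.0 / 3.0)       # CNT free energy of an n-particle cluster

n_star = (2.0 * s / (3.0 * dmu)) ** 3      # analytic critical size: 64.0 here
barrier = dG.max()                         # barrier height: -32 + 3*16 = 16 here
```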
Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam
2017-03-01
While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, the non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on the relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where the relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of the sensor and its relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by a spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of the relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract the spatial distribution of analyte in heterogeneous systems. The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shifts in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging.
A numerical spectral approach to solve the dislocation density transport equation
International Nuclear Information System (INIS)
Djaka, K S; Taupin, V; Berbenni, S; Fressengeas, C
2015-01-01
A numerical spectral approach is developed to solve, in a fast, stable and accurate fashion, the quasi-linear hyperbolic transport equation governing the spatio-temporal evolution of the dislocation density tensor in the mechanics of dislocation fields. The approach relies on using the Fast Fourier Transform algorithm. Low-pass spectral filters are employed to control both the high-frequency Gibbs oscillations inherent to the Fourier method and the fast-growing numerical instabilities resulting from the hyperbolic nature of the transport equation. The numerical scheme is validated by comparison with an exact solution in the 1D case corresponding to dislocation dipole annihilation. The expansion and annihilation of dislocation loops in 2D and 3D settings are also produced and compared with finite element approximations. The spectral solutions are shown to be stable, more accurate for low Courant numbers and much less computationally expensive than the finite element technique based on an explicit Galerkin-least squares scheme. (paper)
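The FFT-plus-filter ingredients described above can be sketched for the simplest 1D hyperbolic transport problem, linear advection; the filter order and strength below are assumed values, not those of the paper:

```python
import numpy as np

def advect_spectral(rho0, v, dt, steps, L=2*np.pi, p=8, alpha=36.0):
    """Pseudo-spectral solution of d(rho)/dt + v d(rho)/dx = 0 on a
    periodic domain, with an exponential low-pass filter damping the
    Gibbs oscillations (illustrative sketch, not the authors' code)."""
    n = rho0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    kmax = np.abs(k).max()
    filt = np.exp(-alpha * (np.abs(k) / kmax)**p)  # low-pass spectral filter
    rho_hat = np.fft.fft(rho0)
    for _ in range(steps):
        # each Fourier mode is advected exactly over one step, then filtered
        rho_hat = rho_hat * np.exp(-1j * k * v * dt) * filt
    return np.real(np.fft.ifft(rho_hat))
```

For a smooth initial profile the result stays close to the exact translated solution, since the filter only touches the highest modes.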
Simulations of inspiraling and merging double neutron stars using the Spectral Einstein Code
Haas, Roland; Ott, Christian D.; Szilagyi, Bela; Kaplan, Jeffrey D.; Lippuner, Jonas; Scheel, Mark A.; Barkett, Kevin; Muhlberger, Curran D.; Dietrich, Tim; Duez, Matthew D.; Foucart, Francois; Pfeiffer, Harald P.; Kidder, Lawrence E.; Teukolsky, Saul A.
2016-06-01
We present results on the inspiral, merger, and postmerger evolution of a neutron star-neutron star (NSNS) system. Our results are obtained using the hybrid pseudospectral-finite volume Spectral Einstein Code (SpEC). To test our numerical methods, we evolve an equal-mass system for ≈22 orbits before merger. This waveform is the longest waveform obtained from fully general-relativistic simulations for NSNSs to date. Such long (and accurate) numerical waveforms are required to further improve semianalytical models used in gravitational wave data analysis, for example, the effective one body models. We discuss in detail the improvements to SpEC's ability to simulate NSNS mergers, in particular mesh refined grids to better resolve the merger and postmerger phases. We provide a set of consistency checks and compare our results to NSNS merger simulations with the independent bam code. We find agreement between them, which increases confidence in results obtained with either code. This work paves the way for future studies using long waveforms and more complex microphysical descriptions of neutron star matter in SpEC.
Monte Carlo simulation of the spectral response of beta-particle emitters in LSC systems
International Nuclear Information System (INIS)
Ortiz, F.; Los Arcos, J.M.; Grau, A.; Rodriguez, L.
1992-01-01
This paper presents a new method to evaluate the counting efficiency and the effective spectra at the output of any dynodic stage, for any pure beta-particle emitter, measured in a liquid scintillation counting system with two photomultipliers working in sum-coincidence mode. The process is carried out by a Monte Carlo simulation procedure that gives the electron distribution, and consequently the counting efficiency, at any dynode, in response to the beta particles emitted, as a function of the figure of merit of the system and the dynodic gains. The spectral outputs for ³H and ¹⁴C have been computed and compared with experimental data obtained with two sets of quenched radioactive standards of these nuclides. (orig.)
Spectrally constrained NIR tomography for breast imaging: simulations and clinical results
Srinivasan, Subhadra; Pogue, Brian W.; Jiang, Shudong; Dehghani, Hamid; Paulsen, Keith D.
2005-04-01
A multi-spectral direct chromophore and scattering reconstruction for frequency-domain NIR tomography has been implemented using constraints from the known molar spectra of the chromophores and a Mie theory approximation for scattering. This was tested in a tumor-simulating phantom containing an inclusion with higher hemoglobin, lower oxygenation and contrast in scatter. The recovered images were quantitatively accurate and showed substantial improvement over existing methods; in addition, the results were robust when tested with up to 5% noise in amplitude and phase measurements. When applied to a clinical subject with fibrocystic disease, the tumor was visible in hemoglobin and water, but no decrease in oxygenation was observed, making oxygen saturation a potential diagnostic indicator.
Simulation for spectral response of solar-blind AlGaN based p-i-n photodiodes
Xue, Shiwei; Xu, Jintong; Li, Xiangyang
2015-04-01
In this article, we introduce how to build a physical model based on the device structure and parameters. Simulations of the spectral characteristics of solar-blind AlGaN based p-i-n photodiodes were conducted using Silvaco TCAD, with the device structure and parameters comprehensively considered. The simulations accounted for the effects of polarization, the Urbach tail, mobility, saturated velocities and carrier lifetime in the AlGaN device. In particular, we focused on how the concentration-dependent Shockley-Read-Hall (SRH) recombination model affects the simulation results. Through simulation, we analyzed the effects of TAUN0 and TAUP0 on the spectral response and obtained the values of TAUN0 and TAUP0 that bring the results into agreement with test results. We then adjusted their values so that the simulation results, especially below 255 nm, performed better. In conclusion, the spectral response between 200 nm and 320 nm of solar-blind AlGaN based p-i-n photodiodes was simulated and compared with test results. We also found that TAUN0 and TAUP0 have a large impact on the spectral response of the AlGaN material.
SOA approach to battle command: simulation interoperability
Mayott, Gregory; Self, Mid; Miller, Gordon J.; McDonnell, Joseph S.
2010-04-01
NVESD is developing a Sensor Data and Management Services (SDMS) Service Oriented Architecture (SOA) that provides an innovative approach to achieve seamless application functionality across simulation and battle command systems. In 2010, CERDEC will conduct a SDMS Battle Command demonstration that will highlight the SDMS SOA capability to couple simulation applications to existing Battle Command systems. The demonstration will leverage RDECOM MATREX simulation tools and TRADOC Maneuver Support Battle Laboratory Virtual Base Defense Operations Center facilities. The battle command systems are those specific to the operation of a base defense operations center in support of force protection missions. The SDMS SOA consists of four components that will be discussed. An Asset Management Service (AMS) will automatically discover the existence, state, and interface definition required to interact with a named asset (sensor or a sensor platform, a process such as level-1 fusion, or an interface to a sensor or other network endpoint). A Streaming Video Service (SVS) will automatically discover the existence, state, and interfaces required to interact with a named video stream, and abstract the consumers of the video stream from the originating device. A Task Manager Service (TMS) will be used to automatically discover the existence of a named mission task, and will interpret, translate and transmit a mission command for the blue force unit(s) described in a mission order. JC3IEDM data objects and a software development kit (SDK) will be utilized as the basic data object definition for the implemented web services.
Measurement of high-temperature spectral emissivity using integral blackbody approach
Pan, Yijie; Dong, Wei; Lin, Hong; Yuan, Zundong; Bloembergen, Pieter
2016-11-01
Spectral emissivity is one of the most critical thermophysical properties of a material for heat design and analysis. In traditional radiation thermometry in particular, normal spectral emissivity is very important. We developed a prototype instrument based upon an integral blackbody method to measure a material's spectral emissivity at elevated temperatures. An optimized commercial variable-high-temperature blackbody, a high-speed linear actuator, a linear pyrometer, and an in-house designed synchronization circuit were used to implement the system. A sample was placed in a crucible at the bottom of the blackbody furnace, by which the sample and the tube formed a simulated reference blackbody with an effective total emissivity greater than 0.985. During the measurement, a pneumatic cylinder pushed a graphite rod and then the sample crucible to the cold opening within hundreds of microseconds. The linear pyrometer was used to monitor the brightness temperature of the sample surface, and the corresponding opto-converted voltage was fed to and recorded by a digital multimeter. To evaluate the temperature drop of the sample during the pushing process, a physical model was proposed. The tube was discretized into several isothermal cylindrical rings, and the temperature of each ring was measured. View factors between the sample and the rings were utilized. Then, the actual surface temperature of the sample at the end opening was obtained. Taking advantage of the measured voltage signal and the calculated actual temperature, the normal spectral emissivity at that temperature was obtained. A graphite sample at 1300 °C was measured to prove the validity of the method.
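The final step of such a measurement, converting a pyrometer brightness temperature and the sample's actual temperature into a normal spectral emissivity, follows from the ratio of Planck radiances. A hedged sketch (the wavelength and temperatures below are illustrative, not the instrument's calibration values):

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def planck_radiance(lam, T):
    # spectral radiance up to a constant factor (c1 cancels in the ratio)
    return 1.0 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

def normal_spectral_emissivity(lam, T_brightness, T_true):
    """Emissivity from the definition of brightness temperature:
    eps * B(lam, T_true) = B(lam, T_brightness)."""
    return planck_radiance(lam, T_brightness) / planck_radiance(lam, T_true)
```

A blackbody (brightness temperature equal to the true temperature) gives emissivity 1; a sample whose brightness temperature falls below its true temperature gives a value between 0 and 1.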
Coherent Structures and Spectral Energy Transfer in Turbulent Plasma: A Space-Filter Approach
Camporeale, E.; Sorriso-Valvo, L.; Califano, F.; Retinò, A.
2018-03-01
Plasma turbulence at scales of the order of the ion inertial length is mediated by several mechanisms, including linear wave damping, magnetic reconnection, the formation and dissipation of thin current sheets, and stochastic heating. It is now understood that the presence of localized coherent structures enhances the dissipation channels and the kinetic features of the plasma. However, no formal way of quantifying the relationship between scale-to-scale energy transfer and the presence of spatial structures has been presented so far. In this Letter we quantify such a relationship by analyzing the results of a two-dimensional high-resolution Hall magnetohydrodynamic simulation. In particular, we employ the technique of space filtering to derive a spectral energy flux term which defines, in any point of the computational domain, the signed flux of spectral energy across a given wave number. The characterization of coherent structures is performed by means of a traditional two-dimensional wavelet transformation. By studying the correlation between the spectral energy flux and the wavelet amplitude, we demonstrate the strong relationship between scale-to-scale transfer and coherent structures. Furthermore, by conditioning one quantity with respect to the other, we are able for the first time to quantify the inhomogeneity of the turbulence cascade induced by topological structures in the magnetic field. Taking into account the low space-filling factor of coherent structures (i.e., they cover a small portion of space), it emerges that 80% of the spectral energy transfer (both in the direct and inverse cascade directions) is localized in about 50% of space, and 50% of the energy transfer is localized in only 25% of space.
A new computationally-efficient computer program for simulating spectral gamma-ray logs
International Nuclear Information System (INIS)
Conaway, J.G.
1995-01-01
Several techniques to improve the accuracy of radionuclide concentration estimates as a function of depth from gamma-ray logs have appeared in the literature. Much of that work was driven by interest in uranium as an economic mineral. More recently, the problem of mapping and monitoring artificial gamma-emitting contaminants in the ground has rekindled interest in improving the accuracy of radioelement concentration estimates from gamma-ray logs. We are looking at new approaches to accomplishing such improvements. The first step in this effort has been to develop a new computational model of a spectral gamma-ray logging sonde in a borehole environment. The model supports attenuation in any combination of materials arranged in 2-D cylindrical geometry, including any combination of attenuating materials in the borehole, formation, and logging sonde. The model can also handle any distribution of sources in the formation. The model considers unscattered radiation only, as represented by the background-corrected area under a given spectral photopeak as a function of depth. Benchmark calculations using the standard Monte Carlo model MCNP show excellent agreement in total gamma flux estimates, with a computation time of about 0.01% of that required for the MCNP calculations. This model lacks the flexibility of MCNP, although for this application a great deal can be accomplished without that flexibility.
Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS
Willison, A.; Bedard, D.
This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogenous surface materials. It represents the overall optical reflectance of objects as a sBRDF, a spectrometric quantity, obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters, and integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured of the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. It is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate simulated quantities. The sBRDF was chosen to represent material reflectance as it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all possible illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. These look-up tables are referenced when calculating the overall sBRDF of objects.
CSIR Research Space (South Africa)
Cho, Moses A
2010-11-01
Full Text Available sensing. The objectives of this paper were to (i) evaluate the classification performance of a multiple-endmember spectral angle mapper (SAM) classification approach (conventionally known as the nearest neighbour) in discriminating ten common African...
A New High-Resolution Spectral Approach to Noninvasively Evaluate Wall Deformations in Arteries
Bazan, Ivonne; Negreira, Carlos; Ramos, Antonio; Brum, Javier; Ramirez, Alfredo
2014-01-01
By locally measuring changes on arterial wall thickness as a function of pressure, the related Young modulus can be evaluated. This physical magnitude has shown to be an important predictive factor for cardiovascular diseases. For evaluating those changes, imaging segmentation or time correlations of ultrasonic echoes, coming from wall interfaces, are usually employed. In this paper, an alternative low-cost technique is proposed to locally evaluate variations on arterial walls, which are dynamically measured with an improved high-resolution calculation of power spectral densities in echo-traces of the wall interfaces, by using a parametric autoregressive processing. Certain wall deformations are finely detected by evaluating the echo overtone peaks with power spectral estimations that implement the Burg and Yule-Walker algorithms. Results of this spectral approach are compared with a classical cross-correlation operator, in a tube phantom and "in vitro" carotid tissue. A circulating loop, mimicking heart periods and blood pressure changes, is employed to dynamically inspect each sample with a broadband ultrasonic probe, acquiring multiple A-Scans which are windowed to isolate echo-trace packets coming from distinct walls. Then the new technique and the cross-correlation operator are applied to evaluate changing parietal deformations from the detection of displacements registered on the wall faces under a periodic regime. PMID:24688596
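A minimal Yule-Walker autoregressive PSD estimator of the kind named in the abstract can be sketched as follows (an illustrative implementation, not the authors' processing chain; Burg's method would differ only in how the AR coefficients are fit):

```python
import numpy as np

def yule_walker_psd(x, order, nfreq=512):
    """Parametric AR power spectral density via the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    n = x.size
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # solve the Toeplitz system R a = r[1:] for the AR coefficients
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - np.dot(a, r[1:])          # driving-noise variance
    w = np.linspace(0.0, np.pi, nfreq)        # normalized angular frequency
    denom = np.abs(1.0 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a)**2
    return w, sigma2 / denom
```

On a noisy sinusoid, a low-order AR fit places a sharp spectral peak at the sinusoid's frequency, which is what makes this estimator attractive for tracking small spectral shifts.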
Hyper-Spectral Networking Concept of Operations and Future Air Traffic Management Simulations
Davis, Paul; Boisvert, Benjamin
2017-01-01
The NASA-sponsored Hyper-Spectral Communications and Networking for Air Traffic Management (ATM) (HSCNA) project is conducting research to improve the operational efficiency of the future National Airspace System (NAS) through diverse and secure multi-band, multi-mode, and millimeter-wave (mmWave) wireless links. Worldwide growth of air transportation and the coming of unmanned aircraft systems (UAS) will increase air traffic density and complexity. Safe coordination of aircraft will require more capable technologies for communications, navigation, and surveillance (CNS). The HSCNA project will provide a foundation for technology and operational concepts to accommodate a significantly greater number of networked aircraft. This paper describes two of the HSCNA project's technical challenges. The first technical challenge is to develop a multi-band networking concept of operations (ConOps) for use in multiple phases of flight and all communication link types. This ConOps will integrate the advanced technologies explored by the HSCNA project and future operational concepts into a harmonized vision of future NAS communications and networking. The second technical challenge discussed is to conduct simulations of future ATM operations using multi-band/multi-mode networking and technologies. Large-scale simulations will assess the impact, compared to today's system, of the new and integrated networks and technologies under future air traffic demand.
Statistical learning method in regression analysis of simulated positron spectral data
International Nuclear Information System (INIS)
Avdic, S. Dz.
2005-01-01
Positron lifetime spectroscopy is a non-destructive tool for detection of radiation-induced defects in nuclear reactor materials. This work concerns the applicability of the support vector machines method for input data compression in the neural network analysis of positron lifetime spectra. It has been demonstrated that the SVM technique can be successfully applied to regression analysis of positron spectra. A substantial data compression, to about 50% and 8% of the whole training set with two and three spectral components respectively, has been achieved while retaining a high accuracy of the spectrum approximation. However, some parameters in the SVM approach, such as the insensitivity zone ε and the penalty parameter C, have to be chosen carefully to obtain good performance. (author)
Masè, Michela; Cristoforetti, Alessandro; Avogaro, Laura; Tessarolo, Francesco; Piccoli, Federico; Caola, Iole; Pederzolli, Carlo; Graffigna, Angelo; Ravelli, Flavia
2015-01-01
The assessment of collagen structure in cardiac pathology, such as atrial fibrillation (AF), is essential for a complete understanding of the disease. This paper introduces a novel methodology for the quantitative description of collagen network properties, based on the combination of nonlinear optical microscopy with a spectral approach to image processing and analysis. Second-harmonic generation (SHG) microscopy was applied to atrial tissue samples from cardiac surgery patients, providing label-free, selective visualization of the collagen structure. The spectral analysis framework, based on the 2D-FFT, was applied to the SHG images, yielding a multiparametric description of collagen fiber orientation (angle and anisotropy indexes) and texture scale (dominant wavelength and peak dispersion indexes). The proof-of-concept application of the methodology showed the capability of our approach to detect and quantify differences in the structural properties of the collagen network in AF versus sinus rhythm patients. These results suggest the potential of our approach in the assessment of collagen properties in cardiac pathologies related to a fibrotic structural component.
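The 2D-FFT texture indexes described above can be illustrated with a toy estimator of the dominant wavelength and orientation from an image's power spectrum (a sketch only; the paper's exact index definitions may differ, and the image is assumed to have a dominant nonzero spatial frequency):

```python
import numpy as np

def spectral_texture_params(img):
    """Dominant wavelength (pixels) and orientation (degrees, mod 180)
    of the strongest spatial frequency in a 2D power spectrum."""
    # remove the mean so the DC component does not dominate the spectrum
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    p = np.abs(f)**2
    ny, nx = img.shape
    ky, kx = np.indices(p.shape)
    ky = ky - ny // 2                    # integer cycles across the image
    kx = kx - nx // 2
    i = np.argmax(p)                     # strongest spectral peak
    kyi, kxi = ky.flat[i], kx.flat[i]
    freq = np.hypot(kxi / nx, kyi / ny)  # cycles per pixel
    wavelength = 1.0 / freq
    angle = np.degrees(np.arctan2(kyi, kxi)) % 180.0
    return wavelength, angle
```

On a synthetic image of vertical stripes with an 8-pixel period, the estimator recovers a wavelength of 8 pixels and a wave vector along the x axis.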
International Nuclear Information System (INIS)
Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.
2013-01-01
Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials.
Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan
2004-07-01
It is well known that regional climate simulations are sensitive to the size and position of the domain chosen for calculations. Here we study the physical mechanisms of this sensitivity. We conducted simulations with the Regional Atmospheric Modeling System (RAMS) for June 2000 over North America at 50 km horizontal resolution using a 7500 km × 5400 km grid and NCEP/NCAR reanalysis as boundary conditions. The position of the domain was displaced in several directions, always maintaining the U.S. in the interior, out of the buffer zone along the lateral boundaries. Circulation biases developed a large scale structure, organized by the Rocky Mountains, resulting from a systematic shifting of the synoptic wave trains that crossed the domain. The distortion of the large-scale circulation was produced by interaction of the modeled flow with the lateral boundaries of the nested domain and varied when the position of the grid was altered. This changed the large-scale environment among the different simulations and translated into diverse conditions for the development of the mesoscale processes that produce most of precipitation for the Great Plains in the summer season. As a consequence, precipitation results varied, sometimes greatly, among the experiments with the different grid positions. To eliminate the dependence of results on the position of the domain, we used spectral nudging of waves longer than 2500 km above the boundary layer. Moisture was not nudged at any level. This constrained the synoptic scales to follow reanalysis while allowing the model to develop the small-scale dynamics responsible for the rainfall. Nudging of the large scales successfully eliminated the variation of precipitation results when the grid was moved. We suggest that this technique is necessary for all downscaling studies with regional models with domain sizes of a few thousand kilometers and larger embedded in global models.
International Nuclear Information System (INIS)
Nahavandi, N.; Minuchehr, A.; Zolfaghari, A.; Abbasi, M.
2015-01-01
Highlights: • A powerful hp-SEM refinement approach for the P_N neutron transport equation is presented. • The method provides great geometrical flexibility and lower computational cost. • Arbitrarily high-order and non-uniform meshes can be used. • Both a posteriori and a priori local error estimation approaches have been employed. • Highly accurate results are compared against other common adaptive and uniform grids. - Abstract: In this work we present the adaptive hp-SEM approach, which is obtained from the incorporation of the Spectral Element Method (SEM) and adaptive hp refinement. The SEM nodal discretization and hp-adaptive grid refinement for the even-parity Boltzmann neutron transport equation create a powerful grid-refinement approach with highly accurate solutions. In this regard a computer code has been developed to solve the multi-group neutron transport equation in one-dimensional geometry using even-parity transport theory. The spatial dependence of the flux has been developed via the SEM with Lobatto orthogonal polynomials. Two common error estimation approaches, a posteriori and a priori, have been implemented. The incorporation of the SEM nodal discretization method and adaptive hp grid refinement leads to highly accurate solutions. The efficiency of coarser meshes and the significant reduction of program runtime, in comparison with other common refinement methods and uniform meshing approaches, are tested on several well-known transport benchmarks.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
A spectral k-means approach to bright-field cell image segmentation.
Bradbury, Laura; Wan, Justin W L
2010-01-01
Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to complete due to the complex nature of the cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and using the k-means algorithm. We illustrate the effectiveness of the method by segmentation results of C2C12 (muscle) cells in bright-field images.
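The spectral + k-means combination can be sketched on a tiny grayscale image as follows (a toy illustration with assumed affinity parameters, not the paper's graph construction; the dense affinity matrix restricts this to very small images):

```python
import numpy as np

def spectral_segment(img, k=2, sigma_i=0.1, sigma_x=4.0, iters=50):
    """Toy spectral clustering of a small grayscale image: build a pixel
    affinity graph, embed via the smallest eigenvectors of the normalized
    Laplacian, then cluster the embedding with a simple k-means."""
    h, w = img.shape
    yy, xx = np.indices((h, w))
    feat = img.ravel().astype(float)
    pos = np.stack([yy.ravel(), xx.ravel()], 1).astype(float)
    # affinity: high when pixels are close in intensity AND in space
    d2i = (feat[:, None] - feat[None, :])**2
    d2x = ((pos[:, None, :] - pos[None, :, :])**2).sum(-1)
    A = np.exp(-d2i / sigma_i**2) * np.exp(-d2x / sigma_x**2)
    deg = A.sum(1)
    L = np.eye(h * w) - A / np.sqrt(np.outer(deg, deg))  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)              # ascending eigenvalues
    emb = vecs[:, :k]                        # spectral embedding
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    # k-means with deterministic farthest-point initialization
    centers = [emb[0]]
    for _ in range(1, k):
        d2 = np.min(((emb[:, None, :] - np.array(centers)[None])**2).sum(-1), 1)
        centers.append(emb[int(np.argmax(d2))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((emb[:, None, :] - centers[None])**2).sum(-1), 1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = emb[labels == j].mean(0)
    return labels.reshape(h, w)
```

On an image with two flat intensity regions, the two smallest eigenvectors separate the regions almost perfectly, so the k-means step recovers the two segments.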
Colin, Jeanne; Déqué, Michel; Radu, Raluca; Somot, Samuel
2010-10-01
We assess the impact of two sources of uncertainties in a limited area model (LAM) on the representation of intense precipitation: the size of the domain of integration and the use of the spectral nudging technique (driving of the large-scale within the domain of integration). We work in a perfect-model approach where the LAM is driven by a general circulation model (GCM) run at the same resolution and sharing the same physics and dynamics as the LAM. A set of three 50 km resolution simulations run over Western Europe with the LAM ALADIN-Climate and the GCM ARPEGE-Climate are performed to address this issue. Results are consistent with previous studies regarding the seasonal-mean fields. Furthermore, they show that neither the use of the spectral nudging nor the choice of a small domain are detrimental to the modelling of heavy precipitation in the present experiment.
A. Garba, Aminata
2017-01-01
This paper presents a new approach to optical Code Division Multiple Access (CDMA) network transmission using alternated amplitude sequences and energy differentiation at the transmitters to allow concurrent and secure transmission of several signals. The proposed system uses error-control encoding and soft-decision demodulation to reduce multi-user interference at the receivers. The designs of the proposed alternated amplitude sequences, the OCDMA energy modulators, and the soft-decision, single-user demodulators are also presented. Simulation results show that the proposed scheme achieves spectral efficiencies higher than several reported results for optical CDMA and much higher than the Gaussian CDMA capacity limit.
Berezin, K. V.; Shagautdinova, I. T.; Chernavina, M. L.; Novoselova, A. V.; Dvoretskii, K. N.; Likhter, A. M.
2017-09-01
The experimental vibrational IR spectra of the outer part of lemon peel are recorded in the range of 3800-650 cm-1. The effect of artificial and natural dehydration of the peel on its vibrational spectrum is studied. It is shown that the colored outer layer of lemon peel does not have a noticeable effect on the vibrational spectrum. Upon 28-day storage of a lemon under natural laboratory conditions, only sequential dehydration processes are reflected in the vibrational spectrum of the peel. Within the framework of the theoretical DFT/B3LYP/6-31G(d) method, a model of a plant cell wall is developed, consisting of a number of polymeric molecules of dietary fibers such as cellulose, hemicellulose, pectin, lignin, some polyphenolic compounds (hesperetin glycoside-flavonoid), and a free water cluster. Using a supermolecular approach, the spectral properties of the lemon peel cell wall were simulated, and a detailed theoretical interpretation of the recorded vibrational spectrum is given.
A New Spectral Shape-Based Record Selection Approach Using Np and Genetic Algorithms
Directory of Open Access Journals (Sweden)
Edén Bojórquez
2013-01-01
With the aim of improving code-based real-record selection criteria, an approach based on a proxy parameter of spectral shape, named Np, is analyzed. The procedure pursues several objectives aimed at minimizing the record-to-record variability of the ground motions selected for seismic structural assessment. In order to select the best set of ground motion records to be used as input for nonlinear dynamic analysis, an optimization approach is applied using genetic algorithms focused on finding the set of records most compatible with a target spectrum and target Np values. The results of the new Np-based approach suggest that the real accelerograms obtained with this procedure reduce the scatter of the response spectra compared with the traditional approach; furthermore, the mean spectrum of the set of records is very similar to the target seismic design spectrum in the period range of interest, and at the same time, similar Np values are obtained for the selected records and the target spectrum.
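As an illustration of the kind of search described above, here is a minimal, self-contained sketch of subset selection by a genetic algorithm. The `np_proxy` shape measure, the fitness weighting, and the synthetic record library are all simplifying assumptions, not the actual Np definition or ground motion data used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)

def np_proxy(spec, i_t1=0):
    """Np-style spectral-shape proxy: average ordinate over T1..TN
    divided by the ordinate at T1 (a simplified stand-in for Np)."""
    return spec[i_t1:].mean() / spec[i_t1]

def fitness(subset, specs, target, w=1.0):
    """Lower is better: misfit of the set's mean spectrum to the target
    plus a penalty on the Np mismatch."""
    mean_spec = specs[subset].mean(axis=0)
    shape_err = ((mean_spec - target) ** 2).mean()
    return shape_err + w * (np_proxy(mean_spec) - np_proxy(target)) ** 2

def ga_select(specs, target, n_sel=5, pop=40, gens=60, p_mut=0.3):
    """Genetic algorithm over fixed-size subsets of candidate records."""
    n = len(specs)
    population = [rng.choice(n, n_sel, replace=False) for _ in range(pop)]
    for _ in range(gens):
        order = np.argsort([fitness(s, specs, target) for s in population])
        keep = [population[i] for i in order[:pop // 2]]    # elitist survival
        children = []
        while len(keep) + len(children) < pop:
            a, b = rng.choice(len(keep), 2, replace=False)
            genes = np.union1d(keep[a], keep[b])            # crossover: pool parents
            child = rng.choice(genes, n_sel, replace=False)
            if rng.random() < p_mut:                        # mutation: swap one index
                new = rng.integers(n)
                if new not in child:
                    child[rng.integers(n_sel)] = new
            children.append(child)
        population = keep + children
    scores = [fitness(s, specs, target) for s in population]
    return population[int(np.argmin(scores))]

# synthetic library: records 0-4 match the target exactly, the rest are scaled off
target = np.linspace(1.0, 0.2, 10)
specs = np.vstack([target] * 5 + [2.0 * target] * 15)
best = ga_select(specs, target)
```

With real accelerograms, `specs` would hold scaled response spectra and the fitness would weigh spectral compatibility against the Np mismatch exactly as the paper's multi-objective scheme prescribes.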
Directory of Open Access Journals (Sweden)
Maria Mallén-Alberdi
2016-03-01
Impedance-based biosensors for bacterial detection offer a rapid and cost-effective alternative to conventional techniques that are time-consuming and require specialized equipment and trained users. In this work, a new bacteria detection scheme is presented based on impedance measurements with antibody-modified polysilicon interdigitated electrodes (IDEs, 3 μm pitch). The detection approach takes advantage of the E. coli structure which, in electrical terms, consists of two insulating cell membranes that separate a conductive cytoplasmic medium and a more conductive periplasm. Impedance detection of bacteria is usually analyzed using electrical equivalent circuit models, which show limitations for the interpretation of such a complex cell structure. Here, a differential impedance spectrum representation is used to study the unique fingerprint that arises when bacteria attach to the surface of the IDEs. That fingerprint shows the dual electrical behavior, insulating and conductive, at different frequency ranges. In parallel, finite-element simulations of this system using a three-shell bacteria model are performed to explain these phenomena. Overall, a new approach to detect bacteria is proposed that also makes it possible to differentiate viable bacteria from other components non-specifically attached to the IDE surface simply by detecting their spectral fingerprints. Keywords: Impedance spectroscopy, Bacterial detection, Interdigitated electrodes, Label-free detection, Immuno-detection, E. coli O157:H7
International Nuclear Information System (INIS)
Koch, Stephan
2009-01-01
This thesis is concerned with the numerical simulation of electromagnetic fields in the quasi-static approximation, which is applicable in many practical cases. The main emphasis is put on higher-order finite element methods. Quasi-static applications can be found, e.g., in accelerator physics in terms of the design of magnets required for beam guidance, in power engineering, as well as in high-voltage engineering. Especially during the first design and optimization phase of the respective devices, numerical models offer a cheap alternative to the often costly assembly of prototypes. However, large differences in the magnitude of the material parameters and the geometric dimensions, as well as in the time-scales of the electromagnetic phenomena involved, lead to an unacceptably long simulation time or to an inadequately large memory requirement. Under certain circumstances, the simulation itself, and in turn the desired design improvement, may even become impossible. In the context of this thesis, two strategies aiming at the extension of the range of application of numerical simulations based on the finite element method are pursued. The first strategy consists in parallelizing existing methods such that the computation can be distributed over several computers or cores of a processor. As a consequence, it becomes feasible to simulate a larger range of devices featuring more degrees of freedom in the numerical model than before. This is illustrated for the calculation of the electromagnetic fields, in particular of the eddy-current losses, inside a superconducting dipole magnet developed at the GSI Helmholtzzentrum fuer Schwerionenforschung as a part of the FAIR project. As the second strategy to improve the efficiency of numerical simulations, a hybrid discretization scheme exploiting certain geometrical symmetries is established. Using this method, a significant reduction of the numerical effort, in terms of the degrees of freedom required for a given accuracy, is achieved.
Farhat, A.; Menif, M.; Rezig, H.
2013-09-01
This paper analyses the spectral efficiency of an Optical Code Division Multiple Access (OCDMA) system using the Importance Sampling (IS) technique. We consider three configurations of the OCDMA system, namely Direct Sequence (DS), Spectral Amplitude Coding (SAC) and Fast Frequency Hopping (FFH), that exploit Fiber Bragg Grating (FBG) based encoders/decoders. We evaluate the spectral efficiency of the considered system by taking into account the effect of different families of unipolar codes for both coherent and incoherent sources. The results show that the spectral efficiency of the OCDMA system with a coherent source is higher than in the incoherent case. We also demonstrate that DS-OCDMA outperforms the other two configurations in terms of spectral efficiency under all conditions.
Directory of Open Access Journals (Sweden)
Anne Clasen
2015-11-01
Forest biochemical and biophysical variables and their spatial and temporal distribution are essential inputs to process-oriented ecosystem models. To provide this information, imaging spectroscopy appears to be a promising tool. In this context, the present study investigates the potential of spectral unmixing to derive sub-pixel crown component fractions in a temperate deciduous forest ecosystem. However, the high proportion of foliage in this complex vegetation structure leads to saturation effects when broadband vegetation indices are applied. This study illustrates that multiple endmember spectral mixture analysis (MESMA) can contribute to overcoming this challenge. Reference fractional abundances, as well as spectral measurements of the canopy components, could be precisely determined from a crane measurement platform situated in a deciduous forest in north-east Germany. In contrast to most other studies, which only use leaf and soil endmembers, this experimental setup allowed the inclusion of a bark endmember for the unmixing of components within the canopy. This study demonstrates that the inclusion of additional endmembers markedly improves the accuracy: a mean absolute error of 7.9% was achieved for the fractional occurrence of the leaf endmember and 5.9% for the bark endmember. In order to evaluate the results of this field-based study for airborne and satellite-based remote sensing applications, a transfer to Airborne Imaging Spectrometer for Applications (AISA) and simulated Environmental Mapping and Analysis Program (EnMAP) and Sentinel-2 imagery was carried out. All sensors were capable of unmixing crown components, with mean absolute errors ranging between 3% and 21%.
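The linear mixing model underlying MESMA-style unmixing can be sketched as a constrained least-squares problem: each pixel spectrum is a fraction-weighted sum of endmember spectra. The endmember spectra and fractions below are made-up illustrative numbers, and the sum-to-one handling (a heavily weighted extra equation plus clipping) is a simplification of a fully constrained solver.

```python
import numpy as np

def unmix(pixel, endmembers, weight=100.0):
    """Linear spectral unmixing: least squares with a sum-to-one row
    appended at a large weight; fractions are then clipped to [0, 1].
    endmembers: (n_bands, n_em) matrix, one column per endmember spectrum."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)                 # enforce fractions summing to 1
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(f, 0.0, 1.0)

# toy 4-band endmember library: leaf, bark, soil (hypothetical reflectances)
E = np.array([[0.05, 0.30, 0.20],
              [0.45, 0.25, 0.25],
              [0.50, 0.35, 0.30],
              [0.30, 0.40, 0.35]])
truth = np.array([0.6, 0.3, 0.1])                # 60% leaf, 30% bark, 10% soil
pixel = E @ truth                                # noiseless synthetic mixed pixel
frac = unmix(pixel, E)
```

MESMA proper additionally iterates over many candidate endmember combinations per pixel and keeps the best-fitting model; the per-model inversion, however, is exactly this least-squares step.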
Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach
Amin, Osama
2015-04-23
In this paper, we consider the resource allocation problem for the energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or on the achievable rate, we propose a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than conventional approaches and better suited to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simpler problem that maximizes the achievable rate/SE and minimizes the total power consumption. We then apply the generalized resource allocation framework for the EE-SE trade-off to optimally allocate subcarrier power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and to study the effects of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
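A minimal sketch of the scalarised trade-off described above: a weighted combination of achievable rate and total power is maximised over per-subcarrier powers, which yields a water-filling-type solution. The channel gains, trade-off weight and budget below are illustrative assumptions; the paper's exact formulation, with circuit power and imperfect channel estimation, is richer.

```python
import numpy as np

def ee_se_tradeoff_alloc(g, alpha, p_max):
    """Scalarised EE-SE trade-off: maximise
        alpha * sum(log2(1 + g_i p_i)) - (1 - alpha) * sum(p_i)
    over p_i >= 0 with sum(p_i) <= p_max, for 0 < alpha < 1.
    Stationarity gives a common water level mu = alpha / ((1 - alpha) ln 2);
    if the power budget is active, the level is lowered to meet it."""
    mu = alpha / ((1.0 - alpha) * np.log(2.0))
    p = np.maximum(mu - 1.0 / g, 0.0)
    if p.sum() > p_max:                          # budget active: classic waterfilling
        lo, hi = 0.0, mu
        for _ in range(100):                     # bisection on the water level
            mid = 0.5 * (lo + hi)
            tot = np.maximum(mid - 1.0 / g, 0.0).sum()
            lo, hi = (mid, hi) if tot < p_max else (lo, mid)
        p = np.maximum(lo - 1.0 / g, 0.0)
    return p

g = np.array([4.0, 1.0, 0.25])                   # per-subcarrier channel-to-noise ratios
p = ee_se_tradeoff_alloc(g, alpha=0.6, p_max=2.0)
```

Sweeping `alpha` toward 1 recovers pure SE maximisation (rate-optimal waterfilling), while smaller values progressively penalise transmit power, tracing out the EE-SE trade-off curve.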
Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach
Amin, Osama; Bedeer, Ebrahim; Ahmed, Mohamed; Dobre, Octavia
2015-01-01
In this paper, we consider the resource allocation problem for the energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or on the achievable rate, we propose a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than conventional approaches and better suited to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simpler problem that maximizes the achievable rate/SE and minimizes the total power consumption. We then apply the generalized resource allocation framework for the EE-SE trade-off to optimally allocate subcarrier power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and to study the effects of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
Modular Modelling and Simulation Approach - Applied to Refrigeration Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær; Stoustrup, Jakob
2008-01-01
This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...
Rudianto, Indra; Sudarmaji
2018-04-01
We present an implementation of the spectral-element method for the simulation of two-dimensional elastic wave propagation in fully heterogeneous media. We have incorporated most realistic geological features in the model, including surface topography, curved layer interfaces, and 2-D wave-speed heterogeneity. To accommodate such complexity, we use an unstructured quadrilateral meshing technique. The simulation was performed on a GPU cluster consisting of 24-core Intel Xeon CPUs and 4 NVIDIA Quadro graphics cards, using a combined CUDA and MPI implementation. We speed up the computation by a factor of about 5 compared to MPI alone, and by a factor of about 40 compared to a serial implementation.
SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH
International Nuclear Information System (INIS)
Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook
2012-01-01
The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
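For independent Gaussian errors, the Fisher Matrix forecast described above reduces to F_ij = Σ_k (∂m_k/∂θ_i)(∂m_k/∂θ_j)/σ_k², with predicted 1-sigma parameter errors given by the square roots of the diagonal of F⁻¹. Below is a minimal numerical sketch using a made-up two-parameter "SED" model and hypothetical bands, not the paper's stellar population models.

```python
import numpy as np

def fisher_matrix(model, theta, x, sigma, eps=1e-6):
    """Gaussian-likelihood Fisher matrix for a model m(x; theta) observed at
    points x with independent errors sigma:
        F_ij = sum_k (dm_k/dtheta_i)(dm_k/dtheta_j) / sigma_k**2."""
    theta = np.asarray(theta, float)
    grads = []
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        grads.append((model(x, tp) - model(x, tm)) / (2 * eps))   # central difference
    grads = np.array(grads)
    return (grads[:, None, :] * grads[None, :, :] / sigma**2).sum(axis=2)

# toy "SED": amplitude A and exponential decline scale tau, in 5 hypothetical bands
def sed(lam, th):
    A, tau = th
    return A * np.exp(-lam / tau)

lam = np.array([0.4, 0.6, 0.8, 1.2, 2.2])       # wavelengths (illustrative)
sigma = 0.05 * np.ones_like(lam)                # 5% flux errors in each band
F = fisher_matrix(sed, [1.0, 1.0], lam, sigma)
forecast = np.sqrt(np.diag(np.linalg.inv(F)))   # forecast 1-sigma errors on (A, tau)
```

Survey design then amounts to varying the band set `lam` and errors `sigma` and watching how `forecast` responds, which is precisely the quick optimisation loop the FM enables before any PDF-based fitting is run.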
Simulation: a new approach to teaching ethics.
Buxton, Margaret; Phillippi, Julia C; Collins, Michelle R
2015-01-01
The importance of ethical conduct in health care was acknowledged as early as the fifth century in the Hippocratic Oath and continues to be an essential element of clinical practice. Providers face ethical dilemmas that are complex and unfold over time, testing both practitioners' knowledge and communication skills. Students learning to be health care providers need to develop the knowledge and skills necessary to negotiate complex situations involving ethical conflict. Simulation has been shown to be an effective learning environment for students to learn and practice complex and overlapping skills sets. However, there is little guidance in the literature on constructing effective simulation environments to assist students in applying ethical concepts. This article describes realistic simulations with trained, standardized patients that present ethical problems to graduate-level nurse-midwifery students. Student interactions with the standardized patients were monitored by faculty and peers, and group debriefing was used to help explore students' emotions and reactions. Student feedback postsimulation was exceedingly positive. This simulation could be easily adapted for use by health care education programs to assist students in developing competency with ethics. © 2014 by the American College of Nurse-Midwives.
Parallel spectral methods and applications to simulations of compressible mixing layers
Male, Jean-Michel; Fezoui, Loula
1993-01-01
Solving the Navier-Stokes equations with spectral methods for compressible flows can be quite demanding in computation time. We therefore study the parallelization of such an algorithm and its implementation on a massively parallel machine, the Connection Machine CM-2. The spectral method adapts well to the requirements of massive parallelism, but one of the basic tools of this method, the fast Fourier transform (when it must be applied along the two dime...
Common modelling approaches for training simulators for nuclear power plants
International Nuclear Information System (INIS)
1990-02-01
Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: "To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements". Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, and the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of the models and hence the type of computer complex required, and the criteria for fidelity and verification, and that they were therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to considering only a few aspects of training simulators. This report reflects these limitations and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs
Energy Technology Data Exchange (ETDEWEB)
Ewall-Wice, Aaron; Hewitt, Jacqueline; Neben, Abraham R. [MIT Kavli Institute for Cosmological Physics, Cambridge, MA, 02139 (United States); Bradley, Richard; Dickenson, Roger; Doolittle, Phillip; Egan, Dennis; Hedrick, Mike; Klima, Patricia [National Radio Astronomy Observatory, Charlottesville, VA (United States); Deboer, David; Parsons, Aaron; Ali, Zaki S.; Cheng, Carina; Patra, Nipanjana; Dillon, Joshua S. [Department of Astronomy, University of California, Berkeley, CA (United States); Aguirre, James [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA (United States); Bowman, Judd; Thyagarajan, Nithyanandan [Arizona State University, School of Earth and Space Exploration, Tempe, AZ 85287 (United States); Venter, Mariet [Department of Electrical and Electronic Engineering, Stellenbosch University, Stellenbosch, SA (South Africa); Acedo, Eloy de Lera [Cavendish Laboratory, University of Cambridge, Cambridge (United Kingdom); and others
2016-11-10
We use time-domain electromagnetic simulations to determine the spectral characteristics of the Hydrogen Epoch of Reionization Array (HERA) antenna. These simulations are part of a multi-faceted campaign to determine the effectiveness of the dish's design for obtaining a detection of redshifted 21 cm emission from the epoch of reionization. Our simulations show the existence of reflections between HERA's suspended feed and its parabolic dish reflector that fall below -40 dB at 150 ns and, for reasonable impedance matches, have a negligible impact on HERA's ability to constrain EoR parameters. It follows that despite the reflections they introduce, dishes are effective for increasing the sensitivity of EoR experiments at a relatively low cost. We find that electromagnetic resonances in the HERA feed's cylindrical skirt, which is intended to reduce cross coupling and beam ellipticity, introduce significant power at large delays (-40 dB at 200 ns), which can lead to some loss of measurable Fourier modes and a modest reduction in sensitivity. Even in the presence of this structure, we find that the spectral response of the antenna is sufficiently smooth for delay filtering to contain foreground emission at line-of-sight wave numbers below k_∥ ≲ 0.2 h Mpc^-1, in the region where the current PAPER experiment operates. Incorporating these results into a Fisher Matrix analysis, we find that the spectral structure observed in our simulations has only a small effect on the tight constraints HERA can achieve on parameters associated with the astrophysics of reionization.
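The delay-spectrum picture used in this analysis can be reproduced in miniature: a reflection delayed by τ in an otherwise smooth bandpass appears, after a windowed Fourier transform over frequency, as a peak at delay τ whose height matches the reflection amplitude. The band, delay and -40 dB level below are taken from the abstract, but the one-reflection response model itself is a toy assumption.

```python
import numpy as np

# Hypothetical smooth bandpass plus one weak reflection delayed by tau.
# In the delay transform (FFT over frequency), the reflection shows up as
# a peak at tau whose height sets the foreground-leakage floor.
freqs = np.linspace(100e6, 200e6, 1024)          # 100-200 MHz band
tau = 150e-9                                     # 150 ns reflection delay
refl = 10 ** (-40 / 20)                          # -40 dB reflection amplitude
gain = 1.0 + refl * np.exp(2j * np.pi * freqs * tau)

window = np.blackman(len(freqs))                 # taper to suppress FFT sidelobes
delay_spec = np.fft.fftshift(np.fft.fft(gain * window))
delays = np.fft.fftshift(np.fft.fftfreq(len(freqs), d=freqs[1] - freqs[0]))
power_db = 20 * np.log10(np.abs(delay_spec) / np.abs(delay_spec).max())
```

The main lobe sits at zero delay and the reflection appears near 150 ns at roughly -40 dB, illustrating why reflections at large delays matter: they scatter smooth-spectrum foreground power into the delay (k_∥) modes where the 21 cm signal is sought.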
International Nuclear Information System (INIS)
Ewall-Wice, Aaron; Hewitt, Jacqueline; Neben, Abraham R.; Bradley, Richard; Dickenson, Roger; Doolittle, Phillip; Egan, Dennis; Hedrick, Mike; Klima, Patricia; Deboer, David; Parsons, Aaron; Ali, Zaki S.; Cheng, Carina; Patra, Nipanjana; Dillon, Joshua S.; Aguirre, James; Bowman, Judd; Thyagarajan, Nithyanandan; Venter, Mariet; Acedo, Eloy de Lera
2016-01-01
We use time-domain electromagnetic simulations to determine the spectral characteristics of the Hydrogen Epoch of Reionization Array (HERA) antenna. These simulations are part of a multi-faceted campaign to determine the effectiveness of the dish's design for obtaining a detection of redshifted 21 cm emission from the epoch of reionization. Our simulations show the existence of reflections between HERA's suspended feed and its parabolic dish reflector that fall below -40 dB at 150 ns and, for reasonable impedance matches, have a negligible impact on HERA's ability to constrain EoR parameters. It follows that despite the reflections they introduce, dishes are effective for increasing the sensitivity of EoR experiments at a relatively low cost. We find that electromagnetic resonances in the HERA feed's cylindrical skirt, which is intended to reduce cross coupling and beam ellipticity, introduce significant power at large delays (-40 dB at 200 ns), which can lead to some loss of measurable Fourier modes and a modest reduction in sensitivity. Even in the presence of this structure, we find that the spectral response of the antenna is sufficiently smooth for delay filtering to contain foreground emission at line-of-sight wave numbers below k_∥ ≲ 0.2 h Mpc^-1, in the region where the current PAPER experiment operates. Incorporating these results into a Fisher Matrix analysis, we find that the spectral structure observed in our simulations has only a small effect on the tight constraints HERA can achieve on parameters associated with the astrophysics of reionization.
DEFF Research Database (Denmark)
Rogge, Derek; Bachmann, Martin; Rivard, Benoit
2014-01-01
Spectral decorrelation (transformation) methods have long been used in remote sensing. Transformation of the image data onto eigenvectors that comprise physically meaningful spectral properties (signal) can be used to reduce the dimensionality of hyperspectral images, as the number of spectrally distinct signal sources composing a given hyperspectral scene is generally much smaller than the number of spectral bands. Determining eigenvectors dominated by signal variance as opposed to noise is a difficult task. Problems also arise in using these transformations on large images, multiple flight… …and spectral subsampling to the data, which is accomplished by deriving a limited set of eigenvectors for spatially contiguous subsets. These subset eigenvectors are compiled together to form a new noise-reduced data set, which is subsequently used to derive a set of global orthogonal eigenvectors. Data from…
Approaching Sentient Building Performance Simulation Systems
DEFF Research Database (Denmark)
Negendahl, Kristoffer; Perkov, Thomas; Heller, Alfred
2014-01-01
Sentient BPS systems can combine one or more high-precision BPS and provide near-instantaneous performance feedback directly in the design tool, thus providing speed and precision of building performance in the early design stages. Sentient BPS systems essentially combine: 1) design tools, 2) parametric tools, 3) BPS tools, 4) dynamic databases, 5) interpolation techniques and 6) prediction techniques as a fast and valid simulation system in the early design stage…
A HyperSpectral Imaging (HSI) approach for bio-digestate real time monitoring
Bonifazi, Giuseppe; Fabbri, Andrea; Serranti, Silvia
2014-05-01
One of the key issues in developing Good Agricultural Practices (GAP) is the optimal utilisation of fertilisers and herbicides to reduce the impact of nitrates on soils and the environment. In traditional agricultural practice, these substances were provided to the soil through the use of chemical products (inorganic/organic fertilisers, soil improvers/conditioners, etc.), usually associated with several major environmental problems, such as water pollution and contamination, fertiliser dependency, soil acidification, trace mineral depletion, over-fertilisation, high energy consumption, contribution to climate change, impacts on mycorrhizas, and lack of long-term sustainability. For this reason, the agricultural market is increasingly interested in the utilisation of organic fertilisers and soil improvers. Among organic fertilisers, there is an emerging interest in digestate, a by-product of anaerobic digestion (AD) processes. Several studies confirm the good properties of digestate when used as an organic fertiliser and soil improver/conditioner. Digestate is, in fact, somewhat similar to compost: AD converts a major part of the organic nitrogen to ammonia, which is then directly available to plants as nitrogen. In this paper, new analytical tools based on HyperSpectral Imaging (HSI) sensing devices, and related detection architectures, are presented and discussed in order to define and apply simple-to-use, reliable, robust and low-cost strategies for implementing innovative smart detection engines for digestate characterisation and monitoring. This approach aims to utilise this "waste product" as a valuable organic fertiliser and soil conditioner, in a reduced-impact and ad hoc soil fertilisation perspective. Furthermore, the possibility of simultaneously using the HSI approach to perform a real-time physical-chemical characterisation of agricultural soils (i.e. detection of nitrogen, phosphorus, etc.) could…
Digital simulation of an arbitrary stationary stochastic process by spectral representation
DEFF Research Database (Denmark)
Yura, Harold T.; Hanson, Steen Grüner
2011-01-01
of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited...
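The two-step recipe in this abstract (colour white Gaussian noise to the desired spectral distribution, then map it memorylessly into the desired probability distribution) can be sketched as follows. The Lorentzian spectrum and exponential marginal are illustrative choices, and, as such memoryless schemes go, the final transform slightly distorts the imposed spectrum.

```python
import numpy as np
from math import erf

def colored_nongaussian(n, psd, target_ppf, rng):
    """Sample a stationary sequence with (approximately) the one-sided
    spectral shape `psd` (length n//2 + 1) and the marginal distribution
    given by `target_ppf` (inverse CDF).  White Gaussian noise is coloured
    in the Fourier domain, renormalised to a unit-variance Gaussian, then
    mapped memorylessly through Phi -> target_ppf."""
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white) * np.sqrt(psd)         # impose spectral shape
    colored = np.fft.irfft(spec, n)
    colored = (colored - colored.mean()) / colored.std()
    # Gaussian CDF of each sample, then inverse CDF of the target marginal
    u = 0.5 * (1.0 + np.vectorize(erf)(colored / np.sqrt(2.0)))
    return target_ppf(u)

rng = np.random.default_rng(0)
n = 4096
f = np.fft.rfftfreq(n)
psd = 1.0 / (1.0 + (f / 0.05) ** 2)                  # low-pass (Lorentzian) spectrum
x = colored_nongaussian(n, psd, lambda u: -np.log(1.0 - u), rng)  # exponential marginal
```

The result is a correlated sequence whose one-point statistics are exponential (non-negative, mean near 1) while its correlation is inherited from the imposed low-pass spectrum.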
Residents' perceptions of simulation as a clinical learning approach.
Walsh, Catharine M; Garg, Ankit; Ng, Stella L; Goyal, Fenny; Grover, Samir C
2017-02-01
Simulation is increasingly being integrated into medical education; however, there is little research into trainees' perceptions of this learning modality. We elicited trainees' perceptions of simulation-based learning, to inform how simulation is developed and applied to support training. We conducted an instrumental qualitative case study entailing 36 semi-structured one-hour interviews with 12 residents enrolled in an introductory simulation-based course. Trainees were interviewed at three time points: pre-course, post-course, and 4-6 weeks later. Interview transcripts were analyzed using a qualitative descriptive analytic approach. Residents' perceptions of simulation included: 1) simulation serves pragmatic purposes; 2) simulation provides a safe space; 3) simulation presents perils and pitfalls; and 4) optimal design for simulation: integration and tension. Key findings included residents' markedly narrow perception of simulation's capacity to support non-technical skills development or its use beyond introductory learning. Trainees' learning expectations of simulation were restricted. Educators should critically attend to the way they present simulation to learners as, based on theories of problem-framing, trainees' a priori perceptions may delimit the focus of their learning experiences. If they view simulation as merely a replica of real cases for the purpose of practicing basic skills, they may fail to benefit from the full scope of learning opportunities afforded by simulation.
Divide and conquer approach to quantum Hamiltonian simulation
Hadfield, Stuart; Papageorgiou, Anargyros
2018-04-01
We show a divide and conquer approach for simulating quantum mechanical systems on quantum computers. We can obtain fast simulation algorithms using Hamiltonian structure. Considering a sum of Hamiltonians we split them into groups, simulate each group separately, and combine the partial results. Simulation is customized to take advantage of the properties of each group, and hence yield refined bounds to the overall simulation cost. We illustrate our results using the electronic structure problem of quantum chemistry, where we obtain significantly improved cost estimates under very mild assumptions.
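A first-order Trotter product is the simplest instance of this divide-and-conquer idea: split H into groups, exponentiate each group separately for a short time, and interleave. The two-qubit Hamiltonian below is an illustrative toy, not the electronic-structure application from the paper.

```python
import numpy as np

def expm_herm(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def trotter(groups, t, steps):
    """First-order Trotter product: simulate each group of H = sum(groups)
    for t/steps and interleave -- the divide-and-conquer idea in miniature.
    The error per unit time shrinks like ||[A, B]|| t / steps."""
    dim = groups[0].shape[0]
    step = np.eye(dim, dtype=complex)
    for Hg in groups:
        step = expm_herm(Hg, t / steps) @ step
    return np.linalg.matrix_power(step, steps)

# two non-commuting groups: a transverse field on qubit 0 and a ZZ coupling
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
A = np.kron(X, I2)
B = np.kron(Z, Z)

U_exact = expm_herm(A + B, 1.0)
U_trot = trotter([A, B], 1.0, steps=200)
err = np.linalg.norm(U_exact - U_trot, 2)
```

The paper's point is that the grouping is a design choice: simulating each group with a method tailored to its structure (here, trivially, exact exponentiation of commuting-within-group terms) refines the overall cost bound beyond a one-size-fits-all product formula.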
Simulation approach towards energy flexible manufacturing systems
Beier, Jan
2017-01-01
This authored monograph provides in-depth analysis and methods for aligning the electricity demand of manufacturing systems with variable renewable energy (VRE) supply. The book addresses both long-term system changes and real-time manufacturing execution and control, and the author presents a concept with different options for improved energy flexibility, including battery, compressed-air and embodied-energy storage. The reader will also find a detailed application procedure as well as an implementation in a prototype simulation software. The book concludes with two case studies. The target audience primarily comprises research experts in the field of green manufacturing systems.
Hidden Statistics Approach to Quantum Simulations
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large masses of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (which has been overlooked in previous treatments of the Madelung equation). The role of the…
International Nuclear Information System (INIS)
Meng, L.
2012-01-01
Improving the knowledge of the spectral and temporal properties of plasma-based XUV lasers is an important issue for the ongoing development of these sources towards significantly higher peak power. The spectral properties of the XUV laser line actually control several physical quantities that are important for applications, such as the minimum duration that can be achieved (Fourier-transform limit). The shortest duration experimentally achieved to date is ∼1 picosecond. The demonstrated technique of seeding XUV laser plasmas with a coherent femtosecond pulse of high-order harmonic radiation opens new and promising prospects to reduce the duration to a few hundred femtoseconds, provided that the gain bandwidth can be kept large enough. XUV lasers pumped by collisional excitation of Ni-like and Ne-like ions have been developed worldwide in hot plasmas created either by fast electrical discharge or by various types of high-power lasers. This leads to a variety of XUV laser sources with distinct output properties, but also markedly different plasma parameters (density, temperature) in the amplification zone. Hence different spectral properties are expected. The purpose of our work was then to investigate the spectral behaviour of the different types of existing collisional excitation XUV lasers, and to evaluate their potential to support amplification of pulses with duration below 1 ps in a seeded mode. The spectral characterization of plasma-based XUV lasers is challenging because the extremely narrow bandwidth (typically Δλ/λ ∼ 10⁻⁵) lies beyond the resolution limit of existing spectrometers in this spectral range. In our work the narrow linewidth was resolved using a wavefront-division interferometer specifically designed to measure temporal coherence, from which the spectral linewidth is inferred. We have characterized three types of collisional XUV lasers, developed in three different laboratories: transient pumping in Ni-like Mo, capillary discharge pumping in Ne
OPEN SOURCE APPROACH TO URBAN GROWTH SIMULATION
Directory of Open Access Journals (Sweden)
A. Petrasova
2016-06-01
Spatial patterns of land use change due to urbanization and its impact on the landscape are the subject of ongoing research. Urban growth scenario simulation is a powerful tool for exploring these impacts and empowering planners to make informed decisions. We present FUTURES (FUTure Urban – Regional Environment Simulation), a patch-based, stochastic, multi-level land change modeling framework, as a case showing how a once closed and inaccessible model benefited from integration with open source GIS. We describe our motivation for releasing this project as open source and the advantages of integrating it with GRASS GIS, a free, libre and open source GIS and research platform for the geospatial domain. GRASS GIS provides efficient libraries for FUTURES model development as well as standard GIS tools and a graphical user interface for model users. Releasing FUTURES as a GRASS GIS add-on simplifies the distribution of FUTURES across all main operating systems and ensures the maintainability of our project in the future. We describe the integration of FUTURES into GRASS GIS and demonstrate its usage on a case study in Asheville, North Carolina. The developed dataset and tutorial for this case study enable researchers to experiment with the model, explore its potential, or even modify the model for their applications.
Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer
2018-03-01
Spectral computed tomography is an emerging imaging method that uses recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements at isolated high-energy ranges, in which the dominant interaction between the x rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering becomes the prevailing interaction (>50 keV).
International Nuclear Information System (INIS)
Wilson, R.D.; Conaway, J.G.
1991-01-01
We have developed Monte Carlo and discrete ordinates simulation models for the large-detector spectral gamma-ray (SGR) logging tool in use at the Nevada Test Site. Application of the simulation models produced spectra for source layers on the borehole wall, either from potassium-bearing mudcakes or from plate-out of radon daughter products. Simulations show that the shape and magnitude of gamma-ray spectra from sources distributed on the borehole wall depend on radial position within the air-filled borehole as well as on hole diameter. No such dependence is observed for sources uniformly distributed in the formation. In addition, sources on the borehole wall produce anisotropic angular fluxes at the higher scattered energies and at the source energy. These differences in borehole effects and in angular flux are important to the process of correcting SGR logs for the presence of potassium mudcakes; they also suggest a technique for distinguishing between spectral contributions from formation sources and sources on the borehole wall. These results imply the existence of a standoff effect not present for spectra measured in air-filled boreholes from formation sources. 5 refs., 11 figs
Simulation Approach to Mission Risk and Reliability Analysis, Phase I
National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...
Residents’ perceptions of simulation as a clinical learning approach
Walsh, Catharine M.; Garg, Ankit; Ng, Stella L.; Goyal, Fenny; Grover, Samir C.
2017-01-01
Background Simulation is increasingly being integrated into medical education; however, there is little research into trainees’ perceptions of this learning modality. We elicited trainees’ perceptions of simulation-based learning, to inform how simulation is developed and applied to support training. Methods We conducted an instrumental qualitative case study entailing 36 semi-structured one-hour interviews with 12 residents enrolled in an introductory simulation-based course. Trainees were interviewed at three time points: pre-course, post-course, and 4–6 weeks later. Interview transcripts were analyzed using a qualitative descriptive analytic approach. Results Residents’ perceptions of simulation included: 1) simulation serves pragmatic purposes; 2) simulation provides a safe space; 3) simulation presents perils and pitfalls; and 4) optimal design for simulation: integration and tension. Key findings included residents’ markedly narrow perception of simulation’s capacity to support non-technical skills development or its use beyond introductory learning. Conclusion Trainees’ learning expectations of simulation were restricted. Educators should critically attend to the way they present simulation to learners as, based on theories of problem-framing, trainees’ a priori perceptions may delimit the focus of their learning experiences. If they view simulation as merely a replica of real cases for the purpose of practicing basic skills, they may fail to benefit from the full scope of learning opportunities afforded by simulation. PMID:28344719
Toward a More Just Approach to Poverty Simulations
Browne, Laurie P.; Roll, Susan
2016-01-01
Poverty simulations are a promising approach to engaging college students in learning about poverty because they provide direct experience with this critical social issue. Much of the extant scholarship on simulations describe them as experiential learning; however, it appears that educators do not examine biases, assumptions, and traditions of…
Magnetite thin films: A simulational approach
International Nuclear Information System (INIS)
Mazo-Zuluaga, J.; Restrepo, J.
2006-01-01
In the present work, the magnetic properties of magnetite thin films are studied by means of the Monte Carlo method and the Ising model. We simulate L×L×d magnetite thin films (d being the film thickness and L the transversal linear dimension) with periodic boundary conditions along the transversal directions and free boundary conditions along the d direction. In our model, both the three-dimensional inverse spinel structure and the interaction scheme involving tetrahedral and octahedral sites are considered in a realistic way. Results reveal a power-law dependence of the critical temperature on the film thickness, characterized by an exponent ν=0.81 and governed by finite-size scaling theory. Estimates for the critical exponents of the magnetization and the specific heat are finally presented and discussed.
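The finite-size Monte Carlo procedure described above can be illustrated with a minimal sketch: a toy nearest-neighbor Ising film with the same boundary conditions (periodic in-plane, free across the thickness), not the realistic inverse-spinel Hamiltonian of the paper. All names and parameter values are illustrative.

```python
import math
import random

def metropolis_ising_film(L=8, d=4, T=4.0, sweeps=200, seed=1):
    """Metropolis sketch for an L x L x d Ising film:
    periodic boundaries along x and y, free boundaries along z."""
    rng = random.Random(seed)
    spins = [[[1 for _ in range(d)] for _ in range(L)] for _ in range(L)]

    def local_field(x, y, z):
        h = spins[(x + 1) % L][y][z] + spins[(x - 1) % L][y][z]
        h += spins[x][(y + 1) % L][z] + spins[x][(y - 1) % L][z]
        if z + 1 < d:        # free surface: no neighbor beyond the film
            h += spins[x][y][z + 1]
        if z - 1 >= 0:
            h += spins[x][y][z - 1]
        return h

    for _ in range(sweeps):
        for _ in range(L * L * d):
            x, y, z = rng.randrange(L), rng.randrange(L), rng.randrange(d)
            dE = 2.0 * spins[x][y][z] * local_field(x, y, z)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[x][y][z] = -spins[x][y][z]

    m = sum(s for plane in spins for row in plane for s in row)
    return abs(m) / (L * L * d)   # magnetization per spin

# Well below the ordering temperature the film stays nearly saturated;
# well above it the magnetization collapses.
m_cold = metropolis_ising_film(T=1.0)
m_hot = metropolis_ising_film(T=10.0)
```

Sweeping T over a range of film thicknesses d and locating the peak of the specific heat would reproduce, in miniature, the Tc(d) analysis of the abstract.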
An integrated approach to fingerprint indexing using spectral clustering based on minutiae points
CSIR Research Space (South Africa)
Mngenge, NA
2015-07-01
…this problem by constructing a rotational, scale and translation (RST) invariant fingerprint descriptor based on minutiae points. The dimensions of the proposed RST-invariant descriptor are then reduced and passed to a spectral clustering algorithm which automatically…
A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery
Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang
2009-11-01
Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery manifests in both the spatial and spectral domains. However, most prevailing denoising techniques process the imagery in only one specific domain and have not exploited the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least-squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation, and amends the traditional soft-threshold function by adding shape-tuning parameters. Compared with soft- or hard-threshold functions, the improved one, which is first-order differentiable and has a smooth transition region between noise and signal, preserves more image-edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least-squares method is used to remove spectral noise and artificial noise that may have been introduced during the spatial denoising. By appropriately selecting the filter window width according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
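The spectral-domain step above relies on a Savitzky-Golay filter, which fits a low-order polynomial over a sliding window by least squares. A minimal sketch, assuming a fixed 5-point cubic window rather than the paper's adaptively chosen width:

```python
def savgol5(signal):
    """Cubic Savitzky-Golay smoothing with a 5-point window, using the
    classical least-squares center coefficients (-3, 12, 17, 12, -3)/35.
    Edge samples are left unchanged for simplicity."""
    c = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(c[k] * signal[i - 2 + k] for k in range(5)) / 35.0
    return out

# A cubic polynomial passes through the filter unchanged (by construction).
cubic = [0.1 * t**3 - t**2 + 2 * t + 5 for t in range(9)]
smoothed = savgol5(cubic)
```

By construction, a window-5 cubic Savitzky-Golay filter reproduces any polynomial of degree up to three exactly at the window center, which is why it smooths noise while preserving the shape of smooth absorption features.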
Nallala, Jayakrupakar; Gobinet, Cyril; Diebold, Marie-Danièle; Untereiner, Valérie; Bouché, Olivier; Manfait, Michel; Sockalingum, Ganesh Dhruvananda; Piot, Olivier
2012-11-01
Innovative diagnostic methods that could complement conventional histopathology for cancer diagnosis are the need of the hour. In this perspective, we propose a new concept based on spectral histopathology, using IR spectral micro-imaging applied directly to paraffinized colon tissue arrays stabilized in an agarose matrix without any chemical pre-treatment. In order to correct spectral interferences from paraffin and agarose, a mathematical procedure is implemented. The corrected spectral images are then processed by a multivariate clustering method to automatically recover, on the basis of their intrinsic molecular composition, the main histological classes of normal and tumoral colon tissue. The spectral signatures from different histological classes of the colonic tissues are analyzed using statistical methods (Kruskal-Wallis test and principal component analysis) to identify the most discriminant IR features. These features allow characterizing some of the biomolecular alterations associated with malignancy. Thus, via a single analysis, in a label-free and nondestructive manner, the main changes associated with nucleotide, carbohydrate, and collagen features can be identified simultaneously between the compared normal and cancerous tissues. The present study demonstrates the potential of IR spectral imaging as a complementary modern tool to conventional histopathology for an objective cancer diagnosis directly from paraffin-embedded tissue arrays.
Lang, Harold R.
1991-01-01
A new approach to stratigraphic analysis is described which uses photogeologic and spectral interpretation of multispectral remote sensing data combined with topographic information to determine the attitude, thickness, and lithology of strata exposed at the surface. The new stratigraphic procedure is illustrated by examples in the literature. The published results demonstrate the potential of spectral stratigraphy for mapping strata, determining dip and strike, measuring and correlating stratigraphic sequences, defining lithofacies, mapping biofacies, and interpreting geological structures.
Sánchez-Sesma, Francisco J.
2017-07-01
The microtremor H/V spectral ratio (MHVSR) has gained popularity for assessing the dominant frequency of soil sites. It requires measurement of ground motion due to seismic ambient noise at a site and relatively simple processing. Theory asserts that the ensemble average of the autocorrelation of motion components belonging to a diffuse field at a given receiver gives the directional energy densities (DEDs), which are proportional to the imaginary parts of the Green's function components when source and receiver are the same point and the directions of force and response coincide. Therefore, the MHVSR can be modeled as √(2 Im G₁₁/Im G₃₃), where Im G₁₁ and Im G₃₃ are the imaginary parts of the Green's functions at the load point for the horizontal (sub-index 1) and vertical (sub-index 3) components, respectively. This connection has physical implications that emerge from the DED-force duality and allows understanding the behavior of the MHVSR. For a given model, the imaginary parts of the Green's functions are integrals along a radial wavenumber. To deal with these integrals, we have used either the popular discrete wavenumber method or Cauchy's residue theorem at the poles that account for surface-wave normal modes, giving the contributions due to Rayleigh and Love waves. For the retrieval of the velocity structure, one can minimize the weighted differences between observations and calculated values using the strategy of an inversion scheme. In this research, we used simulated annealing, but other optimization techniques can be used as well. This last approach allows computing separately the contributions of different wave types. An example is presented for the mouth of the Andarax River at Almería, Spain.
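The diffuse-field relation quoted above can be sketched directly. The function below assumes the imaginary parts of the Green's function components are already available as per-frequency arrays; the names are illustrative.

```python
import math

def mhvsr(im_g11, im_g33):
    """Model the microtremor H/V spectral ratio from the imaginary parts
    of the Green's function at the load point, per the diffuse-field
    relation H/V = sqrt(2 * Im G11 / Im G33), frequency by frequency."""
    return [math.sqrt(2.0 * g11 / g33) for g11, g33 in zip(im_g11, im_g33)]

# If the horizontal and vertical terms are equal at every frequency,
# the modeled ratio is sqrt(2) everywhere.
ratio = mhvsr([1.0, 2.0], [1.0, 2.0])
```

In an inversion, these arrays would come from a candidate layered velocity model, and the modeled ratio would be compared against the measured MHVSR curve.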
Saad, Bilal Mohammed
2017-09-18
This work focuses on the simulation of CO2 storage in deep underground formations under uncertainty and seeks to understand the impact of uncertainties in reservoir properties on CO2 leakage. To simulate the process, a non-isothermal two-phase two-component flow system with equilibrium phase exchange is used. Since model evaluations are computationally intensive, instead of traditional Monte Carlo methods, we rely on polynomial chaos (PC) expansions for representation of the stochastic model response. A non-intrusive approach is used to determine the PC coefficients. We establish the accuracy of the PC representations within a reasonable error threshold through systematic convergence studies. In addition to characterizing the distributions of model observables, we compute probabilities of excess CO2 leakage. Moreover, we consider the injection rate as a design parameter and compute an optimum injection rate that ensures that the risk of excess pressure buildup at the leaky well remains below acceptable levels. We also provide a comprehensive analysis of sensitivities of CO2 leakage, where we compute the contributions of the random parameters, and their interactions, to the variance by computing first, second, and total order Sobol’ indices.
Saad, Bilal Mohammed; Alexanderian, Alen; Prudhomme, Serge; Knio, Omar
2017-01-01
International Nuclear Information System (INIS)
Malone, R.C.; Pitcher, E.J.; Blackmon, M.L.; Puri, K.; Bourke, W.
1984-01-01
We examine the characteristics of stationary and transient eddies in the geopotential-height field as simulated by a spectral general circulation model. The model possesses a realistic, but smoothed, topography. Two simulations with perpetual January and July forcing by climatological sea surface temperatures, sea ice, and insolation were extended to 1200 days, of which the final 600 days were used for the results in this study. We find that the stationary waves are well simulated in both seasons in the Northern Hemisphere, where strong forcing by orography and land-sea thermal contrast exists. However, in the Southern Hemisphere, where no continents are present in midlatitudes, the stationary waves have smaller amplitude than observed in both seasons. In both hemispheres, the transient eddies are well simulated in the winter season but are too weak in the summer season. The model fails to generate a sufficiently intense summertime midlatitude jet in either hemisphere, and this results in a low level of transient activity. The variance in the tropical troposphere is very well simulated. We examine the geographical distribution and vertical structure of the transient eddies. Fourier analysis in zonal wavenumber and temporal filtering are used to display the wavelength and frequency characteristics of the eddies.
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow as a function of spatial resolution, compared with direct numerical simulation (DNS). A Reynolds number Re = 6800 is considered, based on the bulk velocity and the half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for the subgrid stresses and heat flux. The results show very good agreement between LES and DNS for the time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS grid, predicted the friction velocity within a 2.0% accuracy interval.
Chadel, Meriem; Bouzaki, Mohammed Moustafa; Chadel, Asma; Petit, Pierre; Sawicki, Jean-Paul; Aillerie, Michel; Benyoucef, Boumediene
2017-02-01
We present and analyze experimental results obtained with a laboratory setup based on hardware and smart instrumentation for the complete study of the performance of PV panels, using an artificial radiation source (halogen lamps) for illumination. Together with an accurate analysis, this global experimental procedure allows the determination of effective performance under standard conditions thanks to a simulation process originally developed in the Matlab software environment. The uniformity of the irradiated surface was checked by simulation of the light field. We studied the response of standard commercial photovoltaic panels under illumination measured by a spectrometer, with different spectra for two sources: halogen lamps and sunlight. We then pay special attention to the influence of the spectral distribution of light on the characteristics of the photovoltaic panel, which we measured as a function of temperature and for different illuminations, with dedicated measurements and studies of the open-circuit voltage and short-circuit current.
A Simulation Approach for Performance Validation during Embedded Systems Design
Wang, Zhonglei; Haberl, Wolfgang; Herkersdorf, Andreas; Wechs, Martin
Due to the time-to-market pressure, it is highly desirable to design hardware and software of embedded systems in parallel. However, hardware and software are developed mostly using very different methods, so that performance evaluation and validation of the whole system is not an easy task. In this paper, we propose a simulation approach to bridge the gap between model-driven software development and simulation based hardware design, by merging hardware and software models into a SystemC based simulation environment. An automated procedure has been established to generate software simulation models from formal models, while the hardware design is originally modeled in SystemC. As the simulation models are annotated with timing information, performance issues are tackled in the same pass as system functionality, rather than in a dedicated approach.
St. Fleur, Sadrac; Bertrand, Etienne; Courboulex, Francoise; Mercier de Lépinay, Bernard; Deschamps, Anne; Hough, Susan E.; Cultrera, Giovanna; Boisson, Dominique; Prepetit, Claude
2016-01-01
To provide better insight into seismic ground motion in the Port‐au‐Prince metropolitan area, we investigate site effects at 12 seismological stations by analyzing 78 earthquakes with magnitude smaller than 5 that occurred between 2010 and 2013. Horizontal‐to‐vertical spectral ratio on earthquake recordings and a standard spectral ratio were applied to the seismic data. We also propose a simplified lithostratigraphic map and use available geotechnical and geophysical data to construct representative soil columns in the vicinity of each station that allow us to compute numerical transfer functions using 1D simulations. At most of the studied sites, spectral ratios are characterized by weak‐motion amplification at frequencies above 5 Hz, in good agreement with the numerical transfer functions. A mismatch between the observed amplifications and simulated response at lower frequencies shows that the considered soil columns could be missing a deeper velocity contrast. Furthermore, strong amplification between 2 and 10 Hz linked to local topographic features is found at one station located in the south of the city, and substantial amplification below 5 Hz is detected near the coastline, which we attribute to deep and soft sediments as well as the presence of surface waves. We conclude that for most investigated sites in Port‐au‐Prince, seismic amplifications due to site effects are highly variable but seem not to be important at high frequencies. At some specific locations, however, they could strongly enhance the low‐frequency content of the seismic ground shaking. Although our analysis does not consider nonlinear effects, we thus conclude that, apart from sites close to the coast, sediment‐induced amplification probably had only a minor impact on the level of strong ground motion, and was not the main reason for the high level of damage in Port‐au‐Prince.
Spectral dimension in causal set quantum gravity
International Nuclear Information System (INIS)
Eichhorn, Astrid; Mizera, Sebastian
2014-01-01
We evaluate the spectral dimension in causal set quantum gravity by simulating random walks on causal sets. In contrast to other approaches to quantum gravity, we find an increasing spectral dimension at small scales. This observation can be connected to the nonlocality of causal set theory that is deeply rooted in its fundamentally Lorentzian nature. Based on its large-scale behaviour, we conjecture that the spectral dimension can serve as a tool to distinguish causal sets that approximate manifolds from those that do not. As a new tool to probe quantum spacetime in different quantum gravity approaches, we introduce a novel dimensional estimator, the causal spectral dimension, based on the meeting probability of two random walkers, which respect the causal structure of the quantum spacetime. We discuss a causal-set example, where the spectral dimension and the causal spectral dimension differ, due to the existence of a preferred foliation. (paper)
Parsani, Matteo; Ghorbaniasl, Ghader; Lacor, C.
2011-01-01
The method is based on the Ffowcs Williams-Hawkings approach, which provides noise contributions for monopole, dipole and quadrupole acoustic sources. This paper focuses on the validation and assessment of this hybrid approach using different test cases.
Impacts of using spectral nudging on regional climate model RCA4 simulations of the Arctic
Directory of Open Access Journals (Sweden)
P. Berg
2013-06-01
The performance of the Rossby Centre regional climate model RCA4 is investigated for the Arctic CORDEX (COordinated Regional climate Downscaling EXperiment) region, with an emphasis on its suitability to be coupled to a regional ocean and sea ice model. Large biases in mean sea level pressure (MSLP) are identified, with pronounced too-high pressure of over 5 hPa centred over the North Pole in summer, and too-low pressure of a similar magnitude in winter. These lead to biases in the surface winds, which would potentially lead to strong sea ice biases in a future coupled system. The large-scale circulation is believed to be the major reason for the biases, and an implementation of spectral nudging is applied to remedy the problems by constraining the large-scale components of the driving fields within the interior domain. It is found that spectral nudging generally corrects the MSLP and wind biases, while not significantly affecting other variables, such as the surface radiative components, two-metre temperature and precipitation.
Monte Carlo simulation of spectral reflectance and BRDF of the bubble layer in the upper ocean.
Ma, Lanxin; Wang, Fuqiang; Wang, Chengan; Wang, Chengchao; Tan, Jianyu
2015-09-21
The presence of bubbles can significantly change the radiative properties of seawater and these changes will affect remote sensing and underwater target detection. In this work, the spectral reflectance and bidirectional reflectance characteristics of the bubble layer in the upper ocean are investigated using the Monte Carlo method. The Hall-Novarini (HN) bubble population model, which considers the effect of wind speed and depth on the bubble size distribution, is used. The scattering coefficients and the scattering phase functions of bubbles in seawater are calculated using Mie theory, and the inherent optical properties of seawater for wavelengths between 300 nm and 800 nm are related to chlorophyll concentration (Chl). The effects of bubble coating, Chl, and bubble number density on the spectral reflectance of the bubble layer are studied. The bidirectional reflectance distribution function (BRDF) of the bubble layer for both normal and oblique incidence is also investigated. The results show that bubble populations in clear waters under high wind speed conditions significantly influence the reflection characteristics of the bubble layer. Furthermore, the contribution of bubble populations to the reflection characteristics is mainly due to the strong backscattering of bubbles that are coated with an organic film.
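The Monte Carlo idea behind such reflectance estimates can be illustrated with a deliberately simplified sketch: a 1D slab with isotropic scattering and a single-scattering albedo, far simpler than the paper's Mie phase functions and Hall-Novarini bubble size distribution. All parameter values are illustrative.

```python
import math
import random

def slab_reflectance(omega=0.9, tau_max=2.0, n_photons=20000, seed=7):
    """Toy 1D Monte Carlo estimate of the reflectance of a scattering
    layer with single-scattering albedo `omega` and optical thickness
    `tau_max`, assuming isotropic scattering."""
    rng = random.Random(seed)
    reflected = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0   # optical depth and direction cosine (down = +1)
        while True:
            # sample the free path to the next interaction event
            tau += mu * -math.log(1.0 - rng.random())
            if tau < 0.0:
                reflected += 1    # escaped back through the top surface
                break
            if tau > tau_max:
                break             # transmitted out of the layer
            if rng.random() > omega:
                break             # absorbed
            mu = 2.0 * rng.random() - 1.0   # isotropic rescattering
    return reflected / n_photons

albedo = slab_reflectance()
```

Replacing the isotropic rescattering step with a Mie phase function sampled for coated bubbles, and tallying exit directions instead of a single count, would turn this into a BRDF estimator in the spirit of the paper.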
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, with direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a-posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum-likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to
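The spectral model and the misfit minimization can be sketched as follows, assuming the simplest possible setting: a noise-free synthetic spectrum, attenuation (Q) neglected, and a crude grid search standing in for the basin-hopping exploration described above. All names and values are illustrative.

```python
def brune_spectrum(f, omega0, fc, gamma=2.0):
    """Generalized Brune-type displacement amplitude spectrum: flat level
    omega0 below the corner frequency fc, high-frequency fall-off
    controlled by the decay parameter gamma."""
    return omega0 / (1.0 + (f / fc) ** gamma)

def fit_corner_frequency(freqs, amps, omega0):
    """Grid search over fc minimizing the L2 misfit between observed
    and modeled amplitudes (a stand-in for a global optimizer)."""
    best_fc, best_cost = None, float("inf")
    for i in range(1, 500):
        fc = 0.05 * i
        cost = sum((a - brune_spectrum(f, omega0, fc)) ** 2
                   for f, a in zip(freqs, amps))
        if cost < best_cost:
            best_fc, best_cost = fc, cost
    return best_fc

freqs = [0.1 * k for k in range(1, 200)]
amps = [brune_spectrum(f, 1.0, 5.0) for f in freqs]   # noise-free synthetic
fc_est = fit_corner_frequency(freqs, amps, 1.0)
```

On noise-free data the grid search recovers the corner frequency used to generate the synthetic spectrum; with noise and the Q trade-off included, the posterior exploration described in the abstract becomes necessary to quantify the uncertainty.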
Lu, Wei; Sun, Jianfeng; Hou, Peipei; Xu, Qian; Xi, Yueli; Zhou, Yu; Zhu, Funan; Liu, Liren
2017-08-01
Performance of satellite laser communications between GEO and LEO satellites can be influenced by background light noise appearing in the field of view due to sunlight or planets and some comets. Such influences should be studied on a ground testing platform before space application. In this paper, we introduce a simulator that reproduces the real case of background light noise in the space environment during data exchange via laser beam between two distant satellites. This simulator can reproduce not only the effect of a multi-wavelength spectrum, but also the effects of adjustable field-of-view angles, a large range of adjustable optical powers, and adjustable deflection speeds of the light noise in the space environment. We integrate these functions into a device of small and compact size for easy mobile use. Software control via a personal computer is also provided to adjust these functions arbitrarily.
D1+ Simulator: A cost and risk optimized approach to nuclear power plant simulator modernization
International Nuclear Information System (INIS)
Wischert, W.
2006-01-01
D1-Simulator has been operated by Kraftwerks-Simulator-Gesellschaft (KSG) and Gesellschaft für Simulatorschulung (GfS) at the Simulator Centre in Essen since 1977. The full-scope control room training simulator, used for Kernkraftwerk Biblis (KWB), is based on a PDP-11 hardware platform and is mainly programmed in assembler language. The simulator has maintained a continuously high availability throughout the years thanks to specialized hardware and software support from the KSG maintenance team. Nevertheless, D1-Simulator increasingly reveals limitations with respect to computer capacity and spares, and suffers progressively from the unavailability of replacement hardware. In order to ensure long-term maintainability within the framework of the consensus on nuclear energy, a two-year refurbishment program has been launched by KWB focusing on quality and budgetary aspects. The so-called D1+ Simulator project is based on the re-use of validated data from existing simulators. Allowing for flexible project management methods, the project outlines a cost- and risk-optimized approach to Nuclear Power Plant (NPP) simulator modernization. D1+ Simulator is being built by KSG/GfS in close collaboration with KWB and the simulator vendor THALES, re-using a modern hardware and software development environment from D56-Simulator, used by Kernkraftwerk Obrigheim (KWO) before its decommissioning in 2005. The project, launched in 2004, is expected to be completed by the end of 2006. (author)
Spectral Approach to Derive the Representation Formulae for Solutions of the Wave Equation
Directory of Open Access Journals (Sweden)
Gusein Sh. Guseinov
2012-01-01
Using spectral properties of the Laplace operator and a structural formula for rapidly decreasing functions of the Laplace operator, we offer a novel method to derive explicit formulae for solutions to the Cauchy problem for the classical wave equation in arbitrary dimensions. Among them are the well-known d'Alembert, Poisson, and Kirchhoff representation formulae in low space dimensions.
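For reference, the low-dimensional representation formulae the abstract refers to are the standard ones for the Cauchy problem $u_{tt} = c^2 \Delta u$, $u(x,0)=\varphi(x)$, $u_t(x,0)=\psi(x)$ (stated here for context, not taken from the paper):

```latex
% n = 1 (d'Alembert):
u(x,t) = \frac{\varphi(x+ct) + \varphi(x-ct)}{2}
       + \frac{1}{2c}\int_{x-ct}^{x+ct} \psi(s)\,ds

% n = 2 (Poisson):
u(x,t) = \frac{1}{2\pi c}\left[
  \frac{\partial}{\partial t}\int_{|y-x|\le ct}
    \frac{\varphi(y)}{\sqrt{c^{2}t^{2}-|y-x|^{2}}}\,dy
  + \int_{|y-x|\le ct}
    \frac{\psi(y)}{\sqrt{c^{2}t^{2}-|y-x|^{2}}}\,dy \right]

% n = 3 (Kirchhoff), with spherical means over the sphere S(x,ct):
u(x,t) = \frac{\partial}{\partial t}\bigl(t\,\overline{\varphi}_{S(x,ct)}\bigr)
       + t\,\overline{\psi}_{S(x,ct)},
\qquad
\overline{f}_{S(x,ct)} = \frac{1}{4\pi c^{2}t^{2}}\int_{|y-x|=ct} f(y)\,dS(y)
```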
Viner, K.; Reinecke, P. A.; Gabersek, S.; Flagg, D. D.; Doyle, J. D.; Martini, M.; Ryglicki, D.; Michalakes, J.; Giraldo, F.
2016-12-01
NEPTUNE: the Navy Environmental Prediction sysTem Using the NUMA*corE, is a 3D spectral element atmospheric model composed of a full suite of physics parameterizations and pre- and post-processing infrastructure with plans for data assimilation and coupling components to a variety of Earth-system models. This talk will focus on the initial struggles and solutions in adapting NUMA for stable and accurate integration on the sphere using both the deep atmosphere equations and a newly developed shallow-atmosphere approximation, as demonstrated through idealized test cases. In addition, details of the physics-dynamics coupling methodology will be discussed. NEPTUNE results for test cases from the 2016 Dynamical Core Model Intercomparison Project (DCMIP-2016) will be shown and discussed. *NUMA: Nonhydrostatic Unified Model of the Atmosphere; Kelly and Giraldo 2012, JCP
Recent developments in the super transition array model for spectral simulation of LTE plasmas
International Nuclear Information System (INIS)
Bar-Shalom, A.; Oreg, J.; Goldstein, W.H.
1992-01-01
Recently developed sub-picosecond pulse lasers have been used to create hot, near solid density plasmas. Since these plasmas are nearly in local thermodynamic equilibrium (LTE), their emission spectra involve a huge number of populated configurations. A typical spectrum is a combination of many unresolved clusters of emission, each containing an immense number of overlapping, unresolvable bound-bound and bound-free transitions. Under LTE, or near LTE conditions, traditional detailed configuration or detailed term spectroscopic models are not capable of handling the vast number of transitions involved. The average atom (AA) model, on the other hand, accounts for all relevant transitions, but in an oversimplified fashion that ignores all spectral structure. The Super Transition Array (STA) model, which has been developed in recent years, combines the simplicity and comprehensiveness of the AA model with the accuracy of detailed term accounting. The resolvable structure of spectral clusters is revealed by successively increasing the number of distinct STA's, until convergence is attained. The limit of this procedure is a detailed unresolved transition array (UTA) spectrum, with a term-broadened line for each accessible configuration-to-configuration transition, weighted by the relevant Boltzmann population. In practice, this UTA spectrum is actually obtained using only a few thousand to tens of thousands of STA's (as opposed, typically, to billions of UTAs). The central result of STA theory is a set of formulas for the moments (total intensity, average transition energy, variance) of an STA. In calculating the moments, detailed relativistic first order quantum transition energies and probabilities are used. The energy appearing in the Boltzmann factor associated with each level in a superconfiguration is the zero order result corrected by a superconfiguration averaged first order correction. Examples and application to recent measurements are presented
Fleet Sizing of Automated Material Handling Using Simulation Approach
Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny
2018-03-01
Automated material handling is increasingly preferred to manual labor for material handling activities on the production floors of manufacturing companies. One critical issue in implementing automated material handling is the design phase, which must ensure that material handling is efficient in terms of cost. Fleet sizing is one of the key topics in this phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production and to achieve an optimum situation, which here means minimum flow time and maximum capacity on the production floor. A simulation approach is used because the flow shop can be modelled as a queueing network in which inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore to solve the multi-objective fleet sizing problem in flow shop production using a simulation approach with the ARENA software
Adding Value in Construction Design Management by Using Simulation Approach
Doloi, Hemanta
2008-01-01
Simulation modelling has been introduced as a decision support tool for front-end planning and design analysis of projects. An integrated approach is discussed linking project scope, end product or project facility performance, and the strategic project objectives at the early stage of projects. The case study on a tram network demonstrates that the application of simulation helps assess the performance of project operations and make appropriate investment decisions over the life cycle of ...
Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations
Tritsis, A.; Yorke, H.; Tassis, K.
2018-05-01
We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated towards all directions, with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from the outputs of simulations performed with all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and present case studies using hydrochemical simulations. The code will be released for public use.
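The escape probability idea can be illustrated for a hypothetical two-level molecule. This is a minimal sketch of the general method, not PyRaTE's actual API; the LVG-style beta function, rate coefficients, and the neglect of the background radiation field are all simplifying assumptions:

```python
import math

def beta_escape(tau):
    """Uniform-sphere / LVG-style escape probability (1 - exp(-tau)) / tau:
    the fraction of line photons that escape the cloud at optical depth tau."""
    if abs(tau) < 1e-8:
        return 1.0 - 0.5 * tau  # series expansion near tau = 0
    return (1.0 - math.exp(-tau)) / tau

def two_level_ratio(n_coll, tau, a_ul, c_ul, g_u, g_l, delta_e_over_kt):
    """Statistical equilibrium n_u / n_l for a two-level system with no
    background field: collisional excitation balances collisional
    de-excitation plus the effective radiative decay beta * A_ul."""
    c_lu = c_ul * (g_u / g_l) * math.exp(-delta_e_over_kt)  # detailed balance
    beta = beta_escape(tau)
    return (n_coll * c_lu) / (n_coll * c_ul + beta * a_ul)
```

At high collider density the ratio approaches the LTE Boltzmann value; at low density the line-emitting level is subthermally populated, which is the regime a non-LTE code must capture.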
International Nuclear Information System (INIS)
Brunner, S.
1997-08-01
Ion temperature gradient (ITG)-related instabilities are studied in tokamak-like plasmas with the help of a new global eigenvalue code. Ions are modelled in the frame of gyrokinetic theory so that finite Larmor radius effects of these particles are retained to all orders. Non-adiabatic trapped electron dynamics is taken into account through the bounce-averaged drift kinetic equation. Assuming electrostatic perturbations, the system is closed with the quasineutrality relation. Practical methods are presented which make this global approach feasible. These include a non-standard wave decomposition compatible with the curved geometry as well as adapting an efficient root finding algorithm for computing the unstable spectrum. These techniques are applied to a low pressure configuration given by a large aspect ratio torus with circular, concentric magnetic surfaces. Simulations from a linear, time evolution, particle in cell code provide a useful benchmark. Comparisons with local ballooning calculations for different parameter scans enable further validation while illustrating the limits of that representation at low toroidal wave numbers or for non-interchange-like instabilities. The stabilizing effect of negative magnetic shear is also considered, in which case the global results show not only an attenuation of the growth rate but also a reduction of the radial extent induced by a transition from the toroidal- to the slab-ITG mode. Contributions of trapped electrons to the ITG instability as well as the possible coupling to the trapped electron mode are clearly brought to the fore. (author) figs., tabs., 69 refs
A Stigmergy Approach for Open Source Software Developer Community Simulation
Energy Technology Data Exchange (ETDEWEB)
Cui, Xiaohui [ORNL; Beaver, Justin M [ORNL; Potok, Thomas E [ORNL; Pullum, Laura L [ORNL; Treadwell, Jim N [ORNL
2009-01-01
The stigmergy collaboration approach provides a hypothesized explanation of how online groups work together. In this research, we present a stigmergy approach for building an agent-based open source software (OSS) developer community collaboration simulation. We used groups of actors who collaborate on OSS projects as our frame of reference and investigated how the choices actors make in contributing their work to the projects determine the global status of the whole OSS project. In our simulation, the forum posts and project code serve as the digital pheromone, and a modified Pierre-Paul Grassé pheromone model is used for computing the selection probabilities of developer agent behaviors.
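A generic pheromone-based selection loop of the kind described above can be sketched as follows. The update rule, the evaporation rate and the deposit size are illustrative assumptions in the spirit of ant-colony pheromone models, not the paper's calibrated model:

```python
import random

def choice_probabilities(pheromone, alpha=1.0):
    """Probability that a developer agent selects each project, proportional
    to the accumulated digital pheromone (posts, commits) raised to alpha."""
    weights = [p ** alpha for p in pheromone]
    total = sum(weights)
    return [w / total for w in weights]

def evaporate_and_deposit(pheromone, chosen, deposit=1.0, rho=0.1):
    """One stigmergy update: global evaporation at rate rho, then the chosen
    project receives a new deposit -- activity reinforces itself."""
    updated = [(1.0 - rho) * p for p in pheromone]
    updated[chosen] += deposit
    return updated

def pick(probabilities, rng):
    """Roulette-wheel selection of one index."""
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probabilities):
        acc += p
        if r < acc:
            return i
    return len(probabilities) - 1
```

With evaporation rate rho and deposit d, the total pheromone settles near d / rho regardless of how agents distribute their choices, while the per-project shares reflect the positive feedback of past contributions.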
Pseudo-spectral 3D simulations of streamers with adaptively refined grids
Luque, A.; Ebert, U.; Montijn, C.; Hundsdorfer, W.; Schmidt, J.; Simek, M.; Pekarek, S.; Prukner, V.
2007-01-01
A three-dimensional code for the simulation of streamers is introduced. The code is based on a fluid model for oxygen-nitrogen mixtures that includes drift, diffusion and attachment of electrons, and the creation of new charge carriers through impact ionization and photo-ionization. The electric field
A Monte Carlo simulation of scattering reduction in spectral x-ray computed tomography
DEFF Research Database (Denmark)
Busi, Matteo; Olsen, Ulrik Lund; Bergbäck Knudsen, Erik
2017-01-01
In X-ray computed tomography (CT), scattered radiation plays an important role in the accurate reconstruction of the inspected object, leading to a loss of contrast between the different materials in the reconstruction volume and cupping artifacts in the images. We present a Monte Carlo simulation...
Validation of the spectral mismatch correction factor using an LED-based solar simulator
DEFF Research Database (Denmark)
Riedel, Nicholas; Santamaria Lancia, Adrian Alejo; Thorsteinsson, Sune
LED-based solar simulators are gaining popularity in the PV characterization field. There are several reasons for this trend, but the primary interest is often the potential of tuning the light source spectrum to a closer match to the AM 1.5G reference spectrum than traditional Xenon or metal-hal...
Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.
Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut
2007-04-20
We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
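The one-parameter Henyey-Greenstein function mentioned above has a closed form that both radiative-transfer approaches can share; a minimal sketch (with a standard inverse-CDF sampler of the kind a Monte Carlo code would use, stated here as an illustration rather than the authors' implementation):

```python
import math

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function p(mu), mu = cos(theta), normalized so
    that the integral over all solid angles equals one."""
    return (1.0 - g * g) / (4.0 * math.pi *
                            (1.0 + g * g - 2.0 * g * mu) ** 1.5)

def sample_mu(g, u):
    """Draw cos(theta) from the HG distribution by inverting its CDF;
    u is a uniform deviate in (0, 1)."""
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0  # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - frac * frac) / (2.0 * g)
```

The asymmetry parameter g equals the mean scattering cosine, so large-particle (forward-peaked) scattering in tissue corresponds to g close to 1.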
International Nuclear Information System (INIS)
Boccaleri, Enrico; Arrais, Aldo; Frache, Alberto; Gianelli, Walter; Fino, Paolo; Camino, Giovanni
2006-01-01
A wide series of carbon nanostructures (ranging from fullerenes, through carbon nanotubes, up to carbon nanofibers) promises to change several fields of materials science, but real industrial implementation depends on their availability at reasonable prices with affordable and reproducible degrees of purity. In this study we propose simple instrumental approaches to efficiently characterize different commercial samples, particularly for the qualitative evaluation of impurities, the discrimination of their respective spectral features and, when possible, quantitative determination. We critically discuss the information that researchers in the field of nanocomposite technology can obtain to this end from spectral techniques such as Raman and FT-IR spectroscopy, thermogravimetric analysis, mass spectrometry-hyphenated thermogravimetry, X-ray diffraction and energy-dispersive spectroscopy. All of these can be helpful in applied materials science research for fast, reliable monitoring of the actual purity of carbon products in both commercial and laboratory-produced samples, as well as in composite materials
Dahlberg, Peter D; Boughter, Christopher T; Faruk, Nabil F; Hong, Lu; Koh, Young Hoon; Reyer, Matthew A; Shaiber, Alon; Sherani, Aiman; Zhang, Jiacheng; Jureller, Justin E; Hammond, Adam T
2016-11-01
A standard wide field inverted microscope was converted to a spatially selective spectrally resolved microscope through the addition of a polarizing beam splitter, a pair of polarizers, an amplitude-mode liquid crystal-spatial light modulator, and a USB spectrometer. The instrument is capable of simultaneously imaging and acquiring spectra over user defined regions of interest. The microscope can also be operated in a bright-field mode to acquire absorption spectra of micron scale objects. The utility of the instrument is demonstrated on three different samples. First, the instrument is used to resolve three differently labeled fluorescent beads in vitro. Second, the instrument is used to recover time dependent bleaching dynamics that have distinct spectral changes in the cyanobacteria, Synechococcus leopoliensis UTEX 625. Lastly, the technique is used to acquire the absorption spectra of CH3NH3PbBr3 perovskites and measure differences between nanocrystal films and micron scale crystals.
Ying, Yingzi; Bean, Christopher J.
2014-05-01
Ocean-generated microseisms are faint Earth tremors associated with the interaction between ocean water waves and the solid Earth. The microseism noise recorded as low-frequency ground vibrations by seismometers contains significant information about the Earth's interior and the sea state. In this work, we first investigate the forward propagation of microseisms in a deep-ocean environment. We employ a 3D North-East Atlantic geological model and simulate wave propagation in a coupled fluid-solid domain using a spectral-element method, in particular to investigate the effects of the continental shelf on microseism wave propagation. A second goal of this work is to perform noise simulations to calculate synthetic ensemble-averaged cross-correlations of microseism noise signals with a time-reversal method. The algorithm relieves computational cost by avoiding time stacking and obtains cross-correlations between the designated master station and all remaining slave stations at one time. The origins of microseisms are non-uniform, so we also test the effect of the simulated noise source distribution on the resulting cross-correlations.
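The ensemble-averaged cross-correlation idea can be sketched independently of the spectral-element and time-reversal machinery: correlate noise records from two stations window by window and stack, so that coherent travel-time peaks grow while incoherent noise averages out. A plain time-domain sketch under these simplifying assumptions:

```python
import random

def cross_correlate(a, b, max_lag):
    """Time-domain cross-correlation C(lag) = sum_t a[t] * b[t + lag];
    the lag of the peak estimates the travel time between the two stations."""
    n = len(a)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for t in range(n):
            if 0 <= t + lag < n:
                s += a[t] * b[t + lag]
        out[lag] = s
    return out

def ensemble_average(correlations):
    """Stack (average) per-window correlations over the ensemble."""
    lags = correlations[0].keys()
    return {lag: sum(c[lag] for c in correlations) / len(correlations)
            for lag in lags}
```

If station B records the same noise wavefield as station A delayed by k samples, the stacked correlation peaks at lag k, which is the quantity interpreted in terms of inter-station travel time.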
Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noël, Stefan
2011-01-01
The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The kernel linearity is tested. Errors in the kernel due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to different climate parameter changes in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both model and data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean compared to observations are about 0.001, and the sampling error is likely a major component.
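The kernel construction and its linearity test can be sketched generically: a kernel is a finite-difference partial derivative of reflectance with respect to one climate parameter, and the linear response is kernel times parameter change. The reflectance model below is a deliberately toy functional form (hypothetical, for illustration only), not CERES-based radiative transfer:

```python
def reflectance_model(cloud_fraction, optical_depth):
    """Toy spectral reflectance: a stand-in for a radiative transfer model
    (hypothetical functional form, for illustration only)."""
    clear, overcast_max = 0.08, 0.7
    cloudy = overcast_max * optical_depth / (optical_depth + 6.0)
    return (1.0 - cloud_fraction) * clear + cloud_fraction * cloudy

def kernel(model, base, param_index, delta):
    """Radiative kernel: centered finite difference of the model with
    respect to one climate parameter, all others held fixed."""
    up = list(base); up[param_index] += delta
    dn = list(base); dn[param_index] -= delta
    return (model(*up) - model(*dn)) / (2.0 * delta)

def kernel_response(model, base, changes, deltas):
    """First-order (linear) radiative response: sum of kernel * change."""
    return sum(kernel(model, base, i, deltas[i]) * changes[i]
               for i in range(len(changes)))
```

Comparing the linear estimate against the exact model change for a given perturbation is precisely the linearity test the abstract describes; the residual grows with the size of the perturbation and the curvature of the model.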
Power spectral density analysis of wind-shear turbulence for related flight simulations. M.S. Thesis
Laituri, Tony R.
1988-01-01
Meteorological phenomena known as microbursts can produce abrupt changes in wind direction and/or speed over a very short distance in the atmosphere. These changes in flow characteristics have been labelled wind shear. Because of its adverse effects on aerodynamic lift, wind shear poses its most immediate threat to flight operations at low altitudes. The number of recent commercial aircraft accidents attributed to wind shear has necessitated a better understanding of how energy is transferred to an aircraft from wind-shear turbulence. Isotropic turbulence here serves as the basis of comparison for the anisotropic turbulence which exists in the low-altitude wind shear. The related question of how isotropic turbulence scales in a wind shear is addressed from the perspective of power spectral density (psd). The role of the psd in related Monte Carlo simulations is also considered.
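The psd-based scaling discussed above can be illustrated with the one-dimensional Dryden form commonly used in flight simulation; the parameter values are illustrative, and this longitudinal form is one standard model rather than the thesis's specific spectra:

```python
import math

def dryden_psd_longitudinal(omega, sigma, length_scale):
    """One-sided Dryden power spectral density for the longitudinal gust
    component; omega is spatial frequency (rad per unit length), sigma the
    turbulence RMS intensity, length_scale the turbulence scale length."""
    l_om = length_scale * omega
    return sigma * sigma * (2.0 * length_scale / math.pi) / (1.0 + l_om * l_om)

def integrated_variance(psd, omegas):
    """Trapezoidal integral of the psd over frequency: the area under the
    power spectrum recovers the signal variance sigma^2."""
    total = 0.0
    for i in range(1, len(omegas)):
        d = omegas[i] - omegas[i - 1]
        total += 0.5 * (psd(omegas[i]) + psd(omegas[i - 1])) * d
    return total
```

The variance check below is the property that makes the psd a natural vehicle for comparing isotropic and wind-shear turbulence: however the spectrum is reshaped, its integral must account for the total turbulent energy fed into a Monte Carlo simulation.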
International Nuclear Information System (INIS)
Chai, Jiale; Cheng, Qiang; Si, Mengting; Su, Yang; Zhou, Yifan; Song, Jinlin
2017-01-01
Spectrally selective coatings are becoming more and more popular for protection against solar irradiation, not only keeping the coated objects cool but also retaining their appearance by reducing the glare of reflected sunlight. In this work, a numerical study is conducted to design a double-layer coating with different submicron particles to achieve better performance in both thermal and aesthetic aspects. By comparison, the performance of a double-layer coating with TiO_2 and ZnO particles is better than that with single particles. Moreover, the particle diameter, particle volume fraction, and substrate condition are also investigated. The results show that an optimized double-layer coating should combine an appropriate particle diameter and volume fraction with a black substrate. - Highlights: • The double-layer coating has a great influence on both thermal and aesthetic aspects. • The double-layer coating performs better than the uniform one with single particles. • The volume fraction, particle diameter and substrate conditions are optimized.
A simplified approach for simulation of wake meandering
Energy Technology Data Exchange (ETDEWEB)
Thomsen, Kenneth; Aagaard Madsen, H.; Larsen, Gunner; Juul Larsen, T.
2006-03-15
This fact-sheet describes a simplified approach to part of the recently developed dynamic wake model for aeroelastic simulations of wind turbines operating in wake. The part described here concerns the meandering process only; the other part of the simplified approach, the wake deficit profile, is outside the scope of the present fact-sheet. Work on simplified models for the wake deficit profile is ongoing. (au)
Czech Academy of Sciences Publication Activity Database
Shklyar, D. R.; Storey, L. R. O.; Chum, Jaroslav; Jiříček, František; Němec, F.; Parrot, M.; Santolík, Ondřej; Titova, E. E.
2012-01-01
Roč. 117, A12 (2012), A12206/1-A12206/16 ISSN 0148-0227 R&D Projects: GA ČR GA205/09/1253; GA ČR GAP205/10/2279; GA MŠk ME09107 Grant - others:GA ČR(CZ) GPP209/12/P658 Program:GP Institutional support: RVO:68378289 Keywords : Plasma waves analysis * ion cyclotron waves * satellite observation and numerical simulation * geometrical optics * multi-component measurements * simulation * spectrogram * wave propagation Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 3.174, year: 2012 http://onlinelibrary.wiley.com/doi/10.1029/2012JA018016/abstract
Joiner, J.; Gaunter, L.; Lindstrot, R.; Voigt, M.; Vasilkov, A. P.; Middleton, E. M.; Huemmrich, K. F.; Yoshida, Y.; Frankenberg, C.
2013-01-01
Globally mapped terrestrial chlorophyll fluorescence retrievals are of high interest because they can provide information on the functional status of vegetation including light-use efficiency and global primary productivity that can be used for global carbon cycle modeling and agricultural applications. Previous satellite retrievals of fluorescence have relied solely upon the filling-in of solar Fraunhofer lines that are not significantly affected by atmospheric absorption. Although these measurements provide near-global coverage on a monthly basis, they suffer from relatively low precision and sparse spatial sampling. Here, we describe a new methodology to retrieve global far-red fluorescence information; we use hyperspectral data with a simplified radiative transfer model to disentangle the spectral signatures of three basic components: atmospheric absorption, surface reflectance, and fluorescence radiance. An empirically based principal component analysis approach is employed, primarily using cloudy data over ocean, to model and solve for the atmospheric absorption. Through detailed simulations, we demonstrate the feasibility of the approach and show that moderate-spectral-resolution measurements with a relatively high signal-to-noise ratio can be used to retrieve far-red fluorescence information with good precision and accuracy. The method is then applied to data from the Global Ozone Monitoring Instrument 2 (GOME-2). The GOME-2 fluorescence retrievals display similar spatial structure as compared with those from a simpler technique applied to the Greenhouse gases Observing SATellite (GOSAT). GOME-2 enables global mapping of far-red fluorescence with higher precision over smaller spatial and temporal scales than is possible with GOSAT. Near-global coverage is provided within a few days. We are able to show clearly for the first time physically plausible variations in fluorescence over the course of a single month at a spatial resolution of 0.5 deg × 0.5 deg
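The core of the approach, using a principal component basis (trained on fluorescence-free spectra) to absorb atmospheric structure before fitting a fluorescence amplitude, can be sketched in miniature. The spectra, shapes, and the single-component fit below are toy assumptions, not the GOME-2 retrieval itself:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def first_pc(spectra, n_iter=200):
    """Mean and leading principal component of a set of spectra, via power
    iteration on the matrix-free sample covariance."""
    n, m = len(spectra), len(spectra[0])
    mean = [sum(s[i] for s in spectra) / n for i in range(m)]
    centered = [[s[i] - mean[i] for i in range(m)] for s in spectra]
    v = [1.0] * m
    for _ in range(n_iter):
        cv = [0.0] * m
        for x in centered:
            c = dot(x, v)
            for i in range(m):
                cv[i] += c * x[i]
        norm = math.sqrt(dot(cv, cv))
        v = [z / norm for z in cv]
    return mean, v

def fit_fluorescence(obs, mean, pc, fl_shape):
    """Least-squares fit of (observation - mean) as a * PC1 + F * fl_shape,
    solved from the 2x2 normal equations; returns the amplitude F."""
    y = [o - mu for o, mu in zip(obs, mean)]
    g11, g12, g22 = dot(pc, pc), dot(pc, fl_shape), dot(fl_shape, fl_shape)
    r1, r2 = dot(pc, y), dot(fl_shape, y)
    det = g11 * g22 - g12 * g12
    return (g11 * r2 - g12 * r1) / det
```

Training on spectra that contain only the atmospheric/surface variability (the role played by cloudy ocean scenes in the paper) lets the principal components soak up that variability, so the residual projection isolates the fluorescence signal.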
Numerical Simulation of Unsteady Compressible Flow in Convergent Channel: Pressure Spectral Analysis
Czech Academy of Sciences Publication Activity Database
Pořízková, P.; Kozel, Karel; Horáček, Jaromír
2012-01-01
Roč. 2012, č. 545120 (2012), s. 1-9 ISSN 1110-757X R&D Projects: GA ČR(CZ) GAP101/11/0207 Institutional research plan: CEZ:AV0Z20760514 Keywords : finite volume method * simulation of flow in vibrating glottis * biomechanics of voice Subject RIV: BI - Acoustics Impact factor: 0.834, year: 2012 http://www.hindawi.com/journals/jam/2012/545120/
A hierarchical approach for simulating northern forest dynamics
Don C. Bragg; David W. Roberts; Thomas R. Crow
2004-01-01
Complexity in ecological systems has challenged forest simulation modelers for years, resulting in a number of approaches with varying degrees of success. Arguments in favor of hierarchical modeling are made, especially for considering a complex environmental issue like widespread eastern hemlock regeneration failure. We present the philosophy and basic framework for...
SEAS: A simulated evolution approach for analog circuit synthesis
Ning, Zhen-Qiu; Mouthaan, A.J.; Wallinga, Hans
1991-01-01
The authors present a simulated evolution approach for analog circuit synthesis based on an analogy with the natural selection process in biological environments and on the iterative improvements in solving engineering problems. A prototype framework based on this idea, called SEAS, has been
Pedagogical Approaches to Teaching with Computer Simulations in Science Education
Rutten, N.P.G.; van der Veen, Johan (CTIT); van Joolingen, Wouter; McBride, Ron; Searson, Michael
2013-01-01
For this study we interviewed 24 physics teachers about their opinions on teaching with computer simulations. The purpose of this study is to investigate whether it is possible to distinguish different types of teaching approaches. Our results indicate the existence of two types. The first type is
Hierarchical Approach to 'Atomistic' 3-D MOSFET Simulation
Asenov, Asen; Brown, Andrew R.; Davies, John H.; Saini, Subhash
1999-01-01
We present a hierarchical approach to the 'atomistic' simulation of aggressively scaled sub-0.1 micron MOSFET's. These devices are so small that their characteristics depend on the precise location of dopant atoms within them, not just on their average density. A full-scale three-dimensional drift-diffusion atomistic simulation approach is first described and used to verify more economical, but restricted, options. To reduce processor time and memory requirements at high drain voltage, we have developed a self-consistent option based on a solution of the current continuity equation restricted to a thin slab of the channel. This is coupled to the solution of the Poisson equation in the whole simulation domain in the Gummel iteration cycles. The accuracy of this approach is investigated in comparison to the full self-consistent solution. At low drain voltage, a single solution of the nonlinear Poisson equation is sufficient to extract the current with satisfactory accuracy. In this case, the current is calculated by solving the current continuity equation in a drift approximation only, also in a thin slab containing the MOSFET channel. The regions of applicability for the different components of this hierarchical approach are illustrated in example simulations covering the random dopant-induced threshold voltage fluctuations, threshold voltage lowering, threshold voltage asymmetry, and drain current fluctuations.
Simulation study of the aerosol information content in OMI spectral reflectance measurements
Directory of Open Access Journals (Sweden)
B. Veihelmann
2007-06-01
The Ozone Monitoring Instrument (OMI) is an imaging UV-VIS solar backscatter spectrometer, designed and used primarily to retrieve trace gases like O₃ and NO₂ from the measured Earth reflectance spectrum in the UV-visible (270–500 nm). However, aerosols are also an important science target of OMI. The multi-wavelength algorithm is used to retrieve aerosol parameters from OMI spectral reflectance measurements in up to 20 wavelength bands. A Principal Component Analysis (PCA) is performed to quantify the information content of OMI reflectance measurements on aerosols and to assess the capability of the multi-wavelength algorithm to discern various aerosol types. This analysis is applied to synthetic reflectance measurements for desert dust, biomass burning aerosols, and weakly absorbing anthropogenic aerosol with a variety of aerosol optical thicknesses, aerosol layer altitudes, refractive indices, and size distributions. The range of aerosol parameters considered covers the natural variability of tropospheric aerosols. This theoretical analysis is performed for a large number of scenarios with various geometries and surface albedo spectra for ocean, soil, and vegetation. When the surface albedo spectrum is accurately known and clouds are absent, OMI reflectance measurements have 2 to 4 degrees of freedom that can be attributed to aerosol parameters. This information content depends on the observation geometry and the surface albedo spectrum. An additional wavelength band, comprising the O₂–O₂ absorption band at a wavelength of 477 nm, is evaluated. It is found that this wavelength band adds significantly more information than any other individual band.
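The degrees-of-freedom count described above can be illustrated with a small numerical sketch: PCA is applied to an ensemble of simulated reflectance spectra, and the degrees of freedom are the principal components that rise above the measurement-noise floor. All numbers below (band count, mode count, noise level) are assumed toy values, not the OMI configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: reflectance in 20 wavelength bands, simulated for an
# ensemble of aerosol scenarios (optical thickness, layer altitude, size, ...).
n_bands, n_scenarios = 20, 500
base = np.linspace(0.05, 0.15, n_bands)                 # smooth background spectrum
modes = rng.normal(size=(3, n_bands))                   # 3 independent "aerosol" modes
weights = rng.normal(size=(n_scenarios, 3))
noise = 1e-3 * rng.normal(size=(n_scenarios, n_bands))  # measurement noise
spectra = base + weights @ modes * 0.01 + noise

# PCA via SVD of the mean-centered ensemble.
centered = spectra - spectra.mean(axis=0)
sing_vals = np.linalg.svd(centered, compute_uv=False)
variance = sing_vals**2 / (n_scenarios - 1)

# Degrees of freedom: number of components above the noise floor.
noise_floor = (1e-3) ** 2 * 10          # crude threshold, well above noise variance
dof = int(np.sum(variance > noise_floor))
```

With three independent spectral modes injected, the component variances separate cleanly into three signal eigenvalues and a flat noise tail, so the recovered count matches the construction.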
The simulation approach to lipid-protein interactions.
Paramo, Teresa; Garzón, Diana; Holdbrook, Daniel A; Khalid, Syma; Bond, Peter J
2013-01-01
The interactions between lipids and proteins are crucial for a range of biological processes, from the folding and stability of membrane proteins to signaling and metabolism facilitated by lipid-binding proteins. However, high-resolution structural details concerning functional lipid/protein interactions are scarce due to barriers in both experimental isolation of native lipid-bound complexes and subsequent biophysical characterization. The molecular dynamics (MD) simulation approach provides a means to complement available structural data, yielding dynamic, structural, and thermodynamic data for a protein embedded within a physiologically realistic, modelled lipid environment. In this chapter, we provide a guide to current methods for setting up and running simulations of membrane proteins and soluble, lipid-binding proteins, using standard atomistically detailed representations, as well as simplified, coarse-grained models. In addition, we outline recent studies that illustrate the power of the simulation approach in the context of biologically relevant lipid/protein interactions.
Li, Lianfu; Du, Zengfeng; Zhang, Xin; Xi, Shichuan; Wang, Bing; Luan, Zhendong; Lian, Chao; Yan, Jun
2018-01-01
Deep-sea carbon dioxide (CO₂) plays a significant role in the global carbon cycle and directly affects the living environment of marine organisms. In situ Raman detection technology is an effective approach to study the behavior of deep-sea CO₂. However, the Raman spectral characteristics of CO₂ can be affected by the environment, thus restricting the phase identification and quantitative analysis of CO₂. In order to study the Raman spectral characteristics of CO₂ in extreme environments (up to 300 °C and 30 MPa), which cover most regions of hydrothermal vents and cold seeps around the world, a deep-sea extreme environment simulator was developed. The Raman spectra of CO₂ in different phases were obtained with the Raman insertion probe (RiP) system, which was also used for in situ Raman detection in the deep sea carried by the remotely operated vehicle (ROV) Faxian. The Raman frequency shifts and bandwidths of gaseous, liquid, solid, and supercritical CO₂ and the CO₂-H₂O system were determined with the simulator. In our experiments (0-300 °C and 0-30 MPa), the peak positions of the symmetric stretching modes of gaseous CO₂, liquid CO₂, and supercritical CO₂ shift approximately 0.6 cm⁻¹ (1387.8-1388.4 cm⁻¹), 0.7 cm⁻¹ (1385.5-1386.2 cm⁻¹), and 2.5 cm⁻¹ (1385.7-1388.2 cm⁻¹), and those of the bending modes shift about 1.0 cm⁻¹ (1284.7-1285.7 cm⁻¹), 1.9 cm⁻¹ (1280.1-1282.0 cm⁻¹), and 4.4 cm⁻¹ (1281.0-1285.4 cm⁻¹), respectively. The Raman spectral characteristics of the CO₂-H₂O system were also studied under the same conditions. The peak positions of dissolved CO₂ varied approximately 4.5 cm⁻¹ (1282.5-1287.0 cm⁻¹) and 2.4 cm⁻¹ (1274.4-1276.8 cm⁻¹) for each peak. In comparison with our experimental results, the phases of CO₂ under extreme conditions (0-3000 m and 0-300 °C) can be identified from the Raman spectra collected in situ. This qualitative research on CO₂ can also support the
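The phase-identification step can be sketched as a lookup of a measured symmetric-stretching peak position against the windows reported in the abstract. The function name is illustrative; the windows overlap, which is exactly why the abstract's bandwidth and bending-mode data are needed in practice.

```python
# Peak-position windows (cm^-1) for the nu1 symmetric stretching mode of CO2,
# taken from the ranges quoted in the abstract above.
NU1_WINDOWS = {
    "gaseous":       (1387.8, 1388.4),
    "liquid":        (1385.5, 1386.2),
    "supercritical": (1385.7, 1388.2),
}

def candidate_phases(nu1_peak_cm1):
    """Return the CO2 phases whose reported window contains the measured peak."""
    return sorted(p for p, (lo, hi) in NU1_WINDOWS.items() if lo <= nu1_peak_cm1 <= hi)

# The windows overlap, so a single band is often ambiguous; the bending-mode
# position and the bandwidths would be used to disambiguate.
```

For example, a peak at 1388.3 cm⁻¹ is consistent only with the gaseous window, while 1386.0 cm⁻¹ is consistent with both the liquid and supercritical windows.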
A shell approach for fibrous reinforcement forming simulations
Liang, B.; Colmars, J.; Boisse, P.
2018-05-01
Because of the slippage between fibers, the basic assumptions of classical plate and shell theories are not verified by fiber reinforcements during forming. However, simulations of reinforcement forming use shell finite elements when wrinkle development is important. A shell formulation is proposed for the forming simulations of continuous fiber reinforcements. The large tensile stiffness leads to quasi-inextensibility in the fiber directions. The fiber bending stiffness determines the curvature of the reinforcement. The calculations of the tensile and bending virtual works are based on the precise geometry of the single fiber. Simulations and experiments are compared for different reinforcements. It is shown that the proposed fibrous shell approach correctly simulates not only the deflections but also the rotations of the through-thickness material normals.
Spectral line shape simulation for electron Stark broadening of ion emitters in plasmas
International Nuclear Information System (INIS)
Dufour, Emmanuelle; Calisti, Annette; Talin, Bernard; Gigosos, Marco A.; Gonzalez, Manuel A.; Dufty, Jim W.
2002-01-01
Electron broadening for ions in plasmas is investigated in the framework of a simplified semi-classical model involving an ionic emitter embedded in an electron gas. A regularized Coulomb potential that removes the divergence at short distances is postulated for the ion-electron interaction. Line shape simulations based on Molecular Dynamics for the ion impurity and the electrons, accounting for all the correlations, are reported. Comparisons with line shapes obtained with a quasi-particle model show expected correlation effects. Through an analysis of the results with the line shape code PPP, it is inferred that the correlation effect results mainly from the dynamic properties of the microfield.
Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing
Cox, Cary M.
This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research: there are five times as many radar-focused peer-reviewed journal articles as hyperspectral-focused ones. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to the image processing techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of the geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. A Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm, and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses, and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise, and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications. This work
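The Di Zenzo gradient named above can be sketched in a few lines: per-band spatial gradients are accumulated into a 2x2 structure tensor at each pixel, and the edge strength is the square root of its largest eigenvalue. The toy cube below is an assumption for illustration, not one of the dissertation's datasets.

```python
import numpy as np

def di_zenzo_edge_strength(cube):
    """Edge strength of a hyperspectral cube (rows, cols, bands) via the
    Di Zenzo structure tensor: largest eigenvalue of sum_b g_b g_b^T."""
    gy, gx = np.gradient(cube, axis=(0, 1))        # per-band spatial gradients
    gxx = (gx * gx).sum(axis=2)                    # structure-tensor entries,
    gyy = (gy * gy).sum(axis=2)                    # accumulated over bands
    gxy = (gx * gy).sum(axis=2)
    # Largest eigenvalue of the 2x2 tensor [[gxx, gxy], [gxy, gyy]].
    tr, det_ = gxx + gyy, gxx * gyy - gxy**2
    lam_max = 0.5 * (tr + np.sqrt(np.maximum(tr**2 - 4 * det_, 0.0)))
    return np.sqrt(lam_max)

# Toy cube: a vertical step edge present in every one of 5 bands.
cube = np.zeros((8, 8, 5))
cube[:, 4:, :] = 1.0
strength = di_zenzo_edge_strength(cube)
```

Because the step appears in every band, the tensor sums the per-band gradient energy, so the multiband edge response is stronger than any single band's.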
Software as a service approach to sensor simulation software deployment
Webster, Steven; Miller, Gordon; Mayott, Gregory
2012-05-01
Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute it for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision and Electronic Sensors Directorate (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and allow the domain community to benefit from immediate deployment of lessons learned.
The spectral element approach for the solution of neutron transport problems
International Nuclear Information System (INIS)
Barbarino, A.; Dulla, S.; Ravetto, P.; Mund, E.H.
2011-01-01
In this paper a possible application of the Spectral Element Method to neutron transport problems is presented. The basic features of the numerical scheme are illustrated on the one-dimensional diffusion equation. Then, the AN model for neutron transport is introduced, and the basic steps for the construction of a two-dimensional solver are described. The AN equations are chosen for their structure, involving a system of coupled elliptic-type equations. Some calculations are carried out on typical benchmark problems and the results are compared with the Finite Element Method, in order to evaluate their performance. (author)
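The idea of solving a diffusion-type equation mode by mode can be illustrated with a simplified stand-in: a global Fourier spectral solve of -u'' = f with periodic boundaries (a plain Fourier method, not the spectral element discretization used in the paper; the setup below is an assumption for illustration).

```python
import numpy as np

# Spectral solve of -u'' = f on [0, 2*pi) with periodic boundary conditions:
# in Fourier space the operator is diagonal, k^2 * u_hat = f_hat.
N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.sin(3 * x)                        # source term with exact solution sin(3x)/9
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers 0, 1, ..., -1
f_hat = np.fft.fft(f)
u_hat = np.zeros_like(f_hat)
nonzero = k != 0                         # the k = 0 mode is fixed to zero mean
u_hat[nonzero] = f_hat[nonzero] / (k[nonzero] ** 2)
u = np.real(np.fft.ifft(u_hat))
```

For the smooth source used here the spectral solution matches the exact solution sin(3x)/9 to machine precision, which is the "spectral accuracy" that motivates these methods.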
A spectral approach to compute the mean performance measures of the queue with low-order BMAP input
Directory of Open Access Journals (Sweden)
Ho Woo Lee
2003-01-01
This paper targets engineers and practitioners who want a simple procedure to compute the mean performance measures of the Batch Markovian Arrival Process (BMAP)/G/1 queueing system when the order of the parameter matrices is very low. We develop a set of system equations and derive the vector generating function of the queue length. Starting from the generating function, we propose a spectral approach that is understandable to those who have basic knowledge of M/G/1 queues and eigenvalue algebra.
Directory of Open Access Journals (Sweden)
Boris Jesús Goenaga
2017-01-01
Pavement roughness is the main variable that produces vertical excitation in vehicles. Pavement profiles are the main determinant of (i) discomfort perception for users and (ii) dynamic loads generated at the tire-pavement interface; hence their evaluation constitutes an essential step in a Pavement Management System. The present document evaluates two specific techniques used to simulate pavement profiles: the shaping filter and the sinusoidal approach, both based on the Power Spectral Density. Pavement roughness was evaluated using the International Roughness Index (IRI), which is the most widely used index to characterize longitudinal road profiles. Appropriate parameters were defined in the simulation process to obtain pavement profiles with specific ranges of IRI values using both simulation techniques. The results suggest that using the sinusoidal approach one can generate random profiles with IRI values that are representative of different road types; therefore, one could generate a profile for a paved or an unpaved road, representing all the proposed categories defined by the ISO 8608 standard. On the other hand, to obtain similar results using the shaping filter approximation, a modification of the simulation parameters is necessary. The new proposed values allow one to generate pavement profiles with high levels of roughness, covering a wider range of surface types. Finally, the results of the current investigation could be used to further improve our understanding of the effect of pavement roughness on tire-pavement interaction. The evaluated methodologies could be used to generate random profiles with specific levels of roughness to assess their effect on the dynamic loads generated at the tire-pavement interface and users' perception of road condition.
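The sinusoidal approach can be sketched as a superposition of sinusoids with random phases whose amplitudes follow an ISO 8608-style displacement PSD. The function name, the PSD exponent w = 2, and the class-A reference value used below are assumptions for illustration, not the paper's calibrated parameters.

```python
import numpy as np

def sinusoidal_profile(length_m, dx, Gd_n0, n0=0.1, w=2.0, seed=0):
    """Random road profile by the sinusoidal (spectral) approach: superpose
    sinusoids with amplitudes set by an ISO 8608-style displacement PSD
    Gd(n) = Gd_n0 * (n / n0)**-w (n in cycles/m) and uniform random phases."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length_m, dx)
    n_max = 0.5 / dx                         # Nyquist spatial frequency
    n = np.arange(0.01, n_max, 0.01)         # spatial-frequency bins (cycles/m)
    dn = n[1] - n[0]
    amp = np.sqrt(2.0 * Gd_n0 * (n / n0) ** (-w) * dn)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n.size)
    z = (amp[None, :] * np.sin(2 * np.pi * x[:, None] * n[None, :]
                               + phase[None, :])).sum(axis=1)
    return x, z

# Assumed reference value in the spirit of an ISO 8608 class-A (smooth paved)
# road: Gd(n0) = 16e-6 m^3 at n0 = 0.1 cycles/m.
x, z = sinusoidal_profile(length_m=200.0, dx=0.25, Gd_n0=16e-6)
```

Raising Gd_n0 (or changing the exponent w) shifts the generated profile toward rougher surface classes, which is how specific IRI ranges can be targeted.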
Chao, Guo-Shan; Sung, Kung-Bin
2010-02-01
Backscattered light spectra have been used to extract the size distribution of cell nuclei in epithelial tissues for noninvasive detection of precancerous lesions. In existing experimental studies, size estimation is achieved by assuming nuclei to be homogeneous spheres or spheroids and fitting the measured data with models based on Mie theory. However, the validity of simplifying nuclei as homogeneous spheres has not been thoroughly examined. In this study, we investigate the spectral characteristics of backscattering from models of spheroidal nuclei under plane wave illumination using three-dimensional finite-difference time-domain (FDTD) simulation. A modulated Gaussian pulse is used to obtain wavelength-dependent scattering intensity with a single FDTD run. The simulated model of nuclei consists of a nucleolus and randomly distributed chromatin condensation in homogeneous cytoplasm and nucleoplasm. The results show that backscattering spectra from spheroidal nuclei have oscillating patterns similar to those from homogeneous spheres with diameter equal to the projected length of the spheroidal nucleus along the propagation direction. The strength of backscattering is enhanced in heterogeneous spheroids as compared to homogeneous spheroids. The degree to which backscattering spectra of heterogeneous nuclei deviate from Mie theory is highly dependent on the distribution of chromatin/nucleolus, but not sensitive to nucleolar size, refractive index fluctuation, or chromatin density.
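The single-run broadband idea can be sketched independently of the FDTD machinery: a modulated Gaussian pulse carries a wide band of frequencies, and one FFT of the time signal recovers the per-frequency content. All numbers below (time step, carrier, pulse width) are assumed toy values, not the paper's simulation parameters.

```python
import numpy as np

# A Gaussian envelope times a carrier covers a broad band in one time-domain
# run; its Fourier transform gives the per-frequency amplitude that would be
# used to normalize scattered fields into a wavelength-dependent response.
dt = 1e-17                       # time step (s), assumed for this sketch
t = np.arange(0.0, 20e-15, dt)
f0 = 545e12                      # carrier frequency, roughly 550 nm light
tau = 2e-15                      # envelope width -> spectral bandwidth ~ 1/tau
pulse = np.exp(-((t - 8e-15) / tau) ** 2) * np.cos(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(t.size, dt)
f_peak = freqs[np.argmax(spectrum)]
```

The spectrum peaks at the carrier frequency with a Gaussian spread of order 1/tau around it, which is what lets a single pulse probe many wavelengths at once.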
A New Statistical Approach to the Optical Spectral Variability in Blazars
Directory of Open Access Journals (Sweden)
Jose A. Acosta-Pulido
2016-12-01
We present a spectral variability study of a sample of about 25 bright blazars, based on optical spectroscopy. Observations cover the period from the end of 2008 to mid-2015, with an approximately monthly cadence. Emission lines have been identified and measured in the spectra, which permits us to classify the sources into BL Lac-type objects or FSRQs, according to the commonly used EW limit. We have obtained synthetic photometry and produced colour-magnitude diagrams which show different trends associated with the object classes: generally, BL Lacs tend to become bluer when brighter and FSRQs become redder when brighter, although several objects exhibit both trends, depending on brightness. We have also applied a pattern recognition algorithm to obtain the minimum number of physical components which can explain the variability of the optical spectrum. We have used NMF (Non-negative Matrix Factorization) instead of PCA (Principal Component Analysis) to avoid unrealistic negative components. For most targets we found that 2 or 3 meta-components are enough to explain the observed spectral variability.
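The NMF decomposition can be sketched with the classic Lee-Seung multiplicative updates, applied here to synthetic non-negative "spectra" rather than the blazar data; the function and toy setup are illustrative.

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Non-negative matrix factorization V ~ W @ H by Lee-Seung multiplicative
    updates. Unlike PCA, both factors stay non-negative, so the recovered
    meta-components can be read as physically meaningful spectra."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, k))
    H = rng.uniform(0.1, 1.0, (k, m))
    eps = 1e-12                                  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "observed spectra": 30 non-negative mixtures of two fixed components.
rng = np.random.default_rng(1)
comps = np.abs(rng.normal(size=(2, 50)))
mix = rng.uniform(0.0, 1.0, (30, 2))
V = mix @ comps
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the toy data are exactly rank-2 and non-negative, two meta-components reconstruct the ensemble almost perfectly, mirroring the "2 or 3 meta-components" finding above.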
Marble, Elizabeth
1996-01-01
Hypersonic spacecraft reentering the earth's atmosphere encounter extreme heat due to atmospheric friction. Thermal Protection System (TPS) materials shield the craft from this searing heat, which can reach temperatures of 2900 °F. Various thermophysical and optical properties of TPS materials are tested at the Johnson Space Center Atmospheric Reentry Materials and Structures Evaluation Facility, which has the capability to simulate critical environmental conditions associated with entry into the earth's atmosphere. Emissivity is an optical property that determines how well a material will reradiate incident heat back into the atmosphere upon reentry, thus protecting the spacecraft from the intense frictional heat. This report describes a method of measuring TPS emissivities using the SR5000 Scanning Spectroradiometer, and includes system characteristics, sample data, and operational procedures developed for arc-jet applications.
Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa
2011-08-01
In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580, and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation of diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on human skin of the hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
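The regression step can be sketched as a least-squares fit of the absorbance spectrum onto the chromophore extinction coefficients. The coefficients below are hypothetical random stand-ins; real values would come from published chromophore tables, and the paper additionally uses Monte Carlo-derived conversion vectors that are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical extinction coefficients at the six wavelengths (500-600 nm) for
# melanin, oxygenated hemoglobin (HbO2), and deoxygenated hemoglobin (Hb).
E = rng.uniform(0.1, 2.0, size=(6, 3))           # columns: melanin, HbO2, Hb

true_conc = np.array([0.8, 0.5, 0.3])            # assumed toy concentrations
absorbance = E @ true_conc + 0.001 * rng.normal(size=6)   # noisy "measurement"

# Multiple regression: the regression coefficients are least-squares estimates
# of the chromophore loads.
coef, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
so2 = coef[1] / (coef[1] + coef[2])              # oxygen saturation from HbO2, Hb
```

With six wavelengths and three unknowns the system is overdetermined, so the fit averages out measurement noise, and the oxygen saturation follows directly from the HbO2 and Hb coefficients as stated above.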
SPINET: A Parallel Computing Approach to Spine Simulations
Directory of Open Access Journals (Sweden)
Peter G. Kropf
1996-01-01
Research in scientific programming enables us to realize more and more complex applications, while, on the other hand, application-driven demands on computing methods and power are continuously growing. Therefore, interdisciplinary approaches are becoming more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is presented. It automatically generates C code from a problem specification expressed in the Lagrange formalism using Maple.
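The iterative-method option mentioned above, the conjugate gradient solver, can be sketched in a few lines. This is a toy dense version on an assumed 1-D Laplacian stiffness matrix; a real FEM code like SPINET's would use a sparse (and parallel) matrix-vector product.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Conjugate gradient solver for a symmetric positive-definite system,
    the kind of sparse system a finite element discretization produces."""
    x = np.zeros_like(b)
    r = b - A @ x                        # initial residual
    p = r.copy()                         # initial search direction
    rs = r @ r
    for _ in range(max_iter or 10 * len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)            # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p        # A-conjugate direction update
        rs = rs_new
    return x

# Toy SPD system: stiffness matrix of a 1-D Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
residual = np.linalg.norm(A @ x - b)
```

The only operation touching A is the matrix-vector product, which is why CG parallelizes well on machines like MUSIC: each iteration is a distributed product plus a few vector reductions.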
Beardsell, Guillaume; Dufresne, Louis; Dumas, Guy
2016-09-01
This paper aims to shed further light on the viscous reconnection phenomenon. To this end, we propose a robust and efficient method to quantify the degree of reconnection of two vortex tubes. This method is used to compare the evolutions of two simple initial vortex configurations: orthogonal and antiparallel. For the antiparallel configuration, the proposed method is compared with alternative estimators and is found to improve accuracy, since it can properly account for the formation of looping structures inside the domain. This observation being new, the physical mechanism for the formation of those looping structures is discussed. For the orthogonal configuration, we report results from simulations performed at a much higher vortex Reynolds number (Re_Γ ≡ circulation/viscosity = 10^4) and finer resolution (N^3 = 1024^3) than previously presented in the literature. The incompressible Navier-Stokes equations are solved directly (Direct Numerical Simulation, or DNS) using a Fourier pseudospectral algorithm with triply periodic boundary conditions. The associated zero-circulation constraint is circumvented by solving the governing equations in a proper rotating frame of reference. Using ideas similar to those behind our method to compute the degree of reconnection, we split the vorticity field into its reconnected and non-reconnected parts, which allows us to create insightful visualizations of the evolving vortex topology. It also allows us to detect regions in the vorticity field that are neither reconnected nor non-reconnected and thus must be associated with internal looping structures. Finally, the Reynolds number dependence of the reconnection time scale T_rec is investigated in the range 500 ≤ Re_Γ ≤ 10,000. For both initial configurations, the scaling is generally found to vary continuously as Re_Γ is increased, from T_rec ~ Re_Γ^(-1) to T_rec ~ Re_Γ^(-1/2), thus providing quantitative support for previous claims that the reconnection
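An exponent in a scaling law such as T_rec ~ Re_Γ^α is commonly estimated by a log-log least-squares fit. The data below are synthetic with an assumed intermediate exponent, purely to illustrate the fitting step, and are not the paper's measurements.

```python
import numpy as np

# Synthetic (Re, T_rec) pairs following an assumed power law T_rec = C * Re^alpha
# with alpha = -0.75, intermediate between the -1 and -1/2 regimes quoted above.
Re = np.array([500.0, 1000.0, 2000.0, 5000.0, 10000.0])
T_rec = 40.0 * Re ** -0.75

# In log space the power law is linear: log T_rec = alpha * log Re + log C.
slope, intercept = np.polyfit(np.log(Re), np.log(T_rec), 1)
```

A continuously varying scaling, as reported above, would show up as a slope that drifts when the fit is restricted to different Re_Γ sub-ranges.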
Sun, Jicheng; Gao, Xinliang; Lu, Quanming; Chen, Lunjin; Liu, Xu; Wang, Xueyi; Tao, Xin; Wang, Shui
2017-05-01
In this paper, we employ a 1-D particle-in-cell (PIC) simulation model consisting of three species (cold electrons, cold ions, and an energetic ion ring) to investigate the spectral structures of magnetosonic waves excited by ring-distribution protons in the Earth's magnetosphere, and the dynamics of charged particles during the excitation of magnetosonic waves. As the wave normal angle decreases, the spectral range of excited magnetosonic waves becomes broader, with the upper frequency limit extending beyond the lower hybrid resonant frequency, and the discrete spectra tend to merge into a continuous one. This dependence on wave normal angle is consistent with linear theory. The effects of magnetosonic waves on the background cold plasma populations also vary with wave normal angle. For exactly perpendicular magnetosonic waves (parallel wave number k_∥ = 0), there is no energization in the parallel direction for either background cold protons or electrons, due to the negligible fluctuating electric field component in the parallel direction. In contrast, the perpendicular energization of background plasmas is rather significant: cold protons follow unmagnetized motion while cold electrons follow drift motion due to wave electric fields. For magnetosonic waves with a finite k_∥, there exists a non-negligible parallel fluctuating electric field, leading to significant and rapid energization in the parallel direction for cold electrons. These cold electrons can also be efficiently energized in the perpendicular direction through interaction with the magnetosonic wave fields in the perpendicular direction. However, cold protons can only be heated in the perpendicular direction, which is likely caused by higher-order resonances with magnetosonic waves. The potential impacts of magnetosonic waves on the energization of the background cold plasmas in the Earth's inner magnetosphere are also discussed in this paper.
Bracken, Colm P.; Lightfoot, John; O'Sullivan, Creidhe; Murphy, J. Anthony; Donohoe, Anthony; Savini, Giorgio; Juanola-Parramon, Roser; The Fisica Consortium, On Behalf Of
2018-01-01
In the absence of 50-m class space-based observatories, subarcsecond astronomy spanning the full far-infrared wavelength range will require space-based long-baseline interferometry. The long baselines of up to tens of meters are necessary to achieve the subarcsecond resolution demanded by science goals. Also, practical observing times command a field of view toward an arcminute (1′) or so, not achievable with a single on-axis coherent detector. This paper is concerned with an application of the end-to-end instrument simulator PyFIInS, developed as part of the FISICA project under funding from the European Commission's Seventh Framework Programme for Research and Technological Development (FP7). Predicted results of wide field of view spatio-spectral interferometry through simulations of a long-baseline, double-Fourier, far-infrared interferometer concept are presented and analyzed. It is shown how such an interferometer, illuminated by a multimode detector, can recover a large field of view at subarcsecond angular resolution, resulting in image quality similar to that achieved by illuminating the system with an array of coherent detectors. Through careful analysis, the importance of accounting for the correct number of higher-order optical modes is demonstrated, as well as accounting for both orthogonal polarizations. Given that it is very difficult to manufacture waveguide and feed structures at sub-mm wavelengths, the larger multimode design is recommended over the array of smaller single-mode detectors. A brief note is provided in the conclusion of this paper addressing a more elegant solution to modeling far-infrared interferometers, which holds promise for improving the computational efficiency of the simulations presented here.
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Goddard Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
Use case driven approach to develop simulation model for PCS of APR1400 simulator
International Nuclear Information System (INIS)
Dong Wook, Kim; Hong Soo, Kim; Hyeon Tae, Kang; Byung Hwan, Bae
2006-01-01
The full-scope simulator is being developed to evaluate specific design features and to support the iterative design and validation in the Man-Machine Interface System (MMIS) design of the Advanced Power Reactor (APR) 1400. The simulator consists of the process model, the control logic model, and the MMI for the APR1400, as well as the Power Control System (PCS). In this paper, a use case driven approach is proposed to develop a simulation model for the PCS. In this approach, a system is considered from the point of view of its users. The user's view of the system is based on interactions with the system and the resultant responses. In the use case driven approach, we initially consider the system as a black box and look at its interactions with the users. From these interactions, use cases of the system are identified. Then the system is modeled using these use cases as functions. Lower levels expand the functionalities of each of these use cases. Hence, starting from the topmost level view of the system, we proceed down to the lowest level (the internal view of the system). The model of the system thus developed is use case driven. This paper introduces the functionality of the PCS simulation model, including a requirement analysis based on use cases and the validation results of the PCS model development. The use case based PCS simulation model will first be used during full-scope simulator development for a nuclear power plant and will be supplied to the Shin-Kori 3 and 4 plants. Use case based simulation model development can be useful for the design and implementation of simulation models. (authors)
Foodsheds in Virtual Water Flow Networks: A Spectral Graph Theory Approach
Directory of Open Access Journals (Sweden)
Nina Kshetry
2017-06-01
A foodshed is a geographic area from which a population derives its food supply, but a method to determine the boundaries of foodsheds has not been formalized. Drawing on the food–water–energy nexus, we propose a formal network science definition of foodsheds by using data from virtual water flows, i.e., water that is virtually embedded in food. In particular, we use spectral graph partitioning for directed graphs. If foodsheds turn out to be geographically compact, it suggests the food system is local and therefore reduces the energy and externality costs of food transport. Using our proposed method we compute foodshed boundaries at the global scale, and at the national scale in the case of two of the largest agricultural countries: India and the United States. Based on our determination of foodshed boundaries, we are able to better understand commodity flows and whether foodsheds are contiguous and compact, and other factors that impact environmental sustainability. The formal method we propose may be used more broadly to study commodity flows and their impact on environmental sustainability.
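The partitioning idea can be sketched on a toy flow matrix using the Fiedler vector of the graph Laplacian. Note the simplification: the sketch symmetrizes the directed flows before partitioning, whereas the paper uses spectral partitioning designed for directed graphs; the toy network below is an assumption for illustration.

```python
import numpy as np

def spectral_bipartition(W):
    """Bipartition a flow network via the Fiedler vector (eigenvector of the
    second-smallest Laplacian eigenvalue). W[i, j] = flow from node i to j,
    e.g. virtual water embedded in food trade; flows are symmetrized here."""
    S = W + W.T                              # symmetrized flows
    d = S.sum(axis=1)
    L = np.diag(d) - S                       # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0                      # boolean group labels

# Two dense "foodsheds" of three regions each, joined by one weak trade link.
W = np.zeros((6, 6))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
W[0, 3] = 0.01                               # weak inter-region flow
labels = spectral_bipartition(W)
```

The Fiedler vector changes sign across the weakest cut, so the two densely connected trade blocks come out as the two groups; a geographically compact group would correspond to a compact foodshed in the sense above.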
Energy Technology Data Exchange (ETDEWEB)
Hooper, Sean D.; Anderson, Iain J; Pati, Amrita; Dalevi, Daniel; Mavromatis, Konstantinos; Kyrpides, Nikos C
2009-01-01
In order to simplify and meaningfully categorize large sets of protein sequence data, it is commonplace to cluster proteins based on the similarity of those sequences. However, it quickly becomes clear that the sequence flexibility allowed a given protein varies significantly among different protein families. The degree to which sequences are conserved not only differs for each protein family, but also is affected by the phylogenetic divergence of the source organisms. Clustering techniques that use similarity thresholds for protein families do not always allow for these variations and thus cannot be confidently used for applications such as automated annotation and phylogenetic profiling. In this work, we applied a spectral bipartitioning technique to all proteins from 53 archaeal genomes. Comparisons between different taxonomic levels allowed us to study the effects of phylogenetic distances on cluster structure. Likewise, by associating functional annotations and phenotypic metadata with each protein, we could compare our protein similarity clusters with both protein function and associated phenotype. Our clusters can be analyzed graphically and interactively online.
New Approach for Snow Cover Detection through Spectral Pattern Recognition with MODIS Data
Directory of Open Access Journals (Sweden)
Kyeong-Sang Lee
2017-01-01
Snow cover plays an important role in climate and hydrology, at both global and regional scales. Most previous studies have used static threshold techniques to detect snow cover, which can lead to errors such as misclassification of snow and clouds, because the reflectance of snow cover exhibits variability and is affected by several factors. Therefore, we present a simple new algorithm for mapping snow cover from Moderate Resolution Imaging Spectroradiometer (MODIS) data using dynamic wavelength warping (DWW), which is based on dynamic time warping (DTW). DTW is a pattern recognition technique that is widely used in various fields such as human action recognition, anomaly detection, and clustering. Before performing DWW, we constructed 49 snow reflectance spectral libraries as reference data for various solar zenith angle and digital elevation model conditions using approximately 1.6 million sampled data. To verify the algorithm, we compared our results with the MODIS swath snow cover product (MOD10_L2). Producer’s accuracy, user’s accuracy, and overall accuracy values were 92.92%, 78.41%, and 92.24%, respectively, indicating good overall classification accuracy. The proposed algorithm is more useful for discriminating between snow cover and clouds than threshold techniques in some areas, such as those with a high viewing zenith angle.
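The DTW step that DWW builds on is a small dynamic program. The sketch below is a minimal sketch of classic DTW on 1-D sequences; the toy "spectra" are invented for illustration and are not taken from the paper's 49 spectral libraries.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A warped copy of a reference pattern matches it better than a flat one
snow_ref  = [0.9, 0.85, 0.8, 0.3, 0.1]        # toy reference spectrum
observed  = [0.9, 0.9, 0.85, 0.8, 0.3, 0.1]   # warped copy of the reference
cloud_ref = [0.8, 0.8, 0.8, 0.8, 0.8]         # toy flat spectrum
```

Because DTW can stretch one sequence against the other, the warped copy aligns with zero cost, while the flat pattern does not.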
A Spectral Approach for Quenched Limit Theorems for Random Expanding Dynamical Systems
Dragičević, D.; Froyland, G.; González-Tokman, C.; Vaienti, S.
2018-01-01
We prove quenched versions of (i) a large deviations principle (LDP), (ii) a central limit theorem (CLT), and (iii) a local central limit theorem for non-autonomous dynamical systems. A key advance is the extension of the spectral method, commonly used in limit laws for deterministic maps, to the general random setting. We achieve this via multiplicative ergodic theory and the development of a general framework to control the regularity of Lyapunov exponents of twisted transfer operator cocycles with respect to a twist parameter. While some versions of the LDP and CLT have previously been proved with other techniques, the local central limit theorem is, to our knowledge, a completely new result, and one that demonstrates the strength of our method. Applications include non-autonomous (piecewise) expanding maps, defined by random compositions of the form T_{σ^{n−1}ω} ∘ ⋯ ∘ T_{σω} ∘ T_ω. An important aspect of our results is that we only assume ergodicity and invertibility of the random driving σ: Ω → Ω; in particular, no expansivity or mixing properties are required.
International Nuclear Information System (INIS)
Garg, R; Fahmi, N.; Singh, R.V.
2007-01-01
The Schiff bases, 3-(indolin-2-one)hydrazinecarbothioamide, 3-(indolin-2-one)hydrazinecarboxamide, 5,6-dimethyl-3-(indolin-2-one)hydrazinecarbothioamide, and 5,6-dimethyl-3-(indolin-2-one)hydrazinecarboxamide, have been synthesized by the condensation of 1H-indol-2,3-dione and 5,6-dimethyl-1H-indol-2,3-dione with the corresponding hydrazinecarbothioamide and hydrazinecarboxamide, respectively. The oxovanadium complexes of these ligands have been characterized by elemental analyses, melting points, conductance measurements, molecular weight determinations, and IR, ¹H NMR, and UV spectral studies. These studies showed that the ligands coordinate to the oxovanadium in a monobasic bidentate fashion through an oxygen or sulfur and the nitrogen donor system. Thus, penta- and hexacoordinated environments around the vanadium atom have been proposed. All the complexes and their parent organic moieties have been screened for their biological activity on several pathogenic fungi and bacteria and were found to possess appreciable fungicidal and bactericidal properties.
The delta-Sobolev approach for modeling solar spectral irradiance and radiance
International Nuclear Information System (INIS)
Xiang, Xuwu.
1990-01-01
The development and evaluation of a solar radiation model are reported; the model gives irradiance and radiance results at the bottom and top of an atmosphere of specified optical depth for each of 145 spectral intervals from 0.29 to 4.05 microns. Absorption by water vapor, aerosols, ozone, and uniformly mixed gases; scattering by molecules and aerosols; and non-Lambertian surface reflectance are included in the model. For solving the radiative transfer equation, an innovative delta-Sobolev method is developed. It applies a delta-function modification to the conventional Sobolev solutions in a way analogous to the delta-Eddington method. The irradiance solution by the delta-Sobolev method turns out to be mathematically identical to the delta-Eddington approximation. The radiance solution by the delta-Sobolev method provides a convenient way to obtain the directional distribution pattern of the radiation field, a feature that most commonly used approximation methods cannot provide. Such radiance solutions are also especially useful in models for satellite remote sensing. The model is tested against the rigorous Dave model, which solves the radiative transfer problem by the spherical harmonic method, an accurate but very time-consuming process. Good agreement between the current model results and those of Dave's model is observed. The advantages of the delta-Sobolev model are simplicity, reasonable accuracy, and capability for implementation on a minicomputer or microcomputer.
A novel approach for characterizing broad-band radio spectral energy distributions
Harvey, V. M.; Franzen, T.; Morgan, J.; Seymour, N.
2018-05-01
We present a new broad-band radio frequency catalogue across 0.12 GHz ≤ ν ≤ 20 GHz created by combining data from the Murchison Widefield Array Commissioning Survey, the Australia Telescope 20 GHz survey, and the literature. Our catalogue consists of 1285 sources limited by S_20 GHz > 40 mJy at 5σ, and contains flux density measurements (or estimates) and uncertainties at 0.074, 0.080, 0.119, 0.150, 0.180, 0.408, 0.843, 1.4, 4.8, 8.6, and 20 GHz. We fit a second-order polynomial in log-log space to the spectral energy distributions of all these sources in order to characterize their broad-band emission. For the 994 sources that are well described by a linear or quadratic model we present a new diagnostic plot arranging sources by the linear and curvature terms. We demonstrate the advantages of such a plot over the traditional radio colour-colour diagram. We also present astrophysical descriptions of the sources found in each segment of this new parameter space and discuss the utility of these plots in the upcoming era of large area, deep, broad-band radio surveys.
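The second-order fit in log-log space can be sketched with `numpy.polyfit`; the frequencies below echo the catalogue bands, but the flux densities are synthetic (generated from a known quadratic so the recovered linear and curvature terms can be checked), not values from the catalogue.

```python
import numpy as np

# Catalogue-like frequency grid (GHz); flux densities are synthetic
freqs_ghz = np.array([0.15, 0.408, 0.843, 1.4, 4.8, 8.6, 20.0])

# Synthetic curved spectrum: log10 S = 1.0 - 0.7*log10(nu) - 0.2*log10(nu)**2
x = np.log10(freqs_ghz)
flux_mjy = 10 ** (1.0 - 0.7 * x - 0.2 * x ** 2)

# Second-order polynomial in log-log space; coefficients come back as
# [curvature q, spectral index alpha, normalisation a]
q, alpha, a = np.polyfit(x, np.log10(flux_mjy), 2)
```

With noise-free data the fit recovers the generating coefficients exactly; the paper's diagnostic plot then arranges sources by the pair (alpha, q).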
Wang, Wang; Li, Xue; Wang, Qiuying; Zhu, Xixi; Zhang, Qingyan; Du, Linfang
2018-01-01
CP43 is closely associated with photosystem II and resides in the plant thylakoid membranes. The acidic pH-induced structural changes were investigated by fluorescence spectroscopy, ANS spectroscopy, RLS spectroscopy, energy transfer experiments, acrylamide fluorescence quenching assays, and MD simulation. The fluorescence spectra indicated that the acidic pH-induced transition follows a four-state model comprising the native state (N), a partially unfolded state (PU), a refolded state (R), and a molten-globule state (M). Analysis of the ANS spectra showed that the inner hydrophobic core became partially exposed at the surface below pH 2.0, further supporting the existence of a molten-globule state. The RLS spectra showed aggregation of apo-CP43 around the pI (pH 4.5-4.0). The alterations of apo-CP43 secondary structure under the different acidic treatments were confirmed by FTIR spectroscopy. The energy transfer and quenching experiments demonstrated that the structure at pH 4.0 was the loosest. The RMSF suggested that the two termini play an important role in the acidic denaturation process. The distance between the two termini differed only slightly across the acidic pH range; during unfolding, both the N-terminus and the C-terminus played a dominant role, whereas the N-terminus dominated the refolding process. All the SASA values were consistent with the spectral results. The tertiary and secondary structures from the MD simulations indicated that part of the transmembrane α-helix was destroyed at low pH.
PSYCHE: An Object-Oriented Approach to Simulating Medical Education
Mullen, Jamie A.
1990-01-01
Traditional approaches to computer-assisted instruction (CAI) do not provide realistic simulations of medical education, in part because they do not utilize heterogeneous knowledge bases for their source of domain knowledge. PSYCHE, a CAI program designed to teach hypothetico-deductive psychiatric decision-making to medical students, uses an object-oriented implementation of an intelligent tutoring system (ITS) to model the student, domain expert, and tutor. It models the transactions between the participants in complex transaction chains, and uses heterogeneous knowledge bases to represent both domain and procedural knowledge in clinical medicine. This object-oriented approach is a flexible and dynamic approach to modeling, and represents a potentially valuable tool for the investigation of medical education and decision-making.
Energy Technology Data Exchange (ETDEWEB)
Matsuki, Yoh; Akutsu, Hideo; Fujiwara, Toshimichi [Osaka University, Institute for Protein Research (Japan)], E-mail: tfjwr@protein.osaka-u.ac.jp
2007-08-15
We describe an approach for the signal assignment and structural analysis with a suite of two-dimensional ¹³C-¹³C magic-angle-spinning solid-state NMR spectra of uniformly ¹³C-labeled peptides and proteins. We directly fit the calculated spectra to experimental ones by simulated annealing in the restrained molecular dynamics program CNS as a function of atomic coordinates. The spectra are calculated from the conformation-dependent chemical shifts obtained with SHIFTX and the cross-peak intensities computed for recoupled dipolar interactions. This method was applied to a membrane-bound 14-residue peptide, mastoparan-X. The obtained C′, Cα and Cβ chemical shifts agreed with those reported previously at precisions of 0.2, 0.7 and 0.4 ppm, respectively. This spectral fitting program also provides backbone dihedral angles with a precision of about 50° from the spectra, even with resonance overlaps. The restraints on the angles were improved by applying the protein database program TALOS to the obtained chemical shifts. The peptide structure provided by these restraints was consistent with the reported structure at a backbone RMSD of about 1 Å.
Numerical simulation in steam injection process by a mechanistic approach
Energy Technology Data Exchange (ETDEWEB)
De Souza, J.C.Jr.; Campos, W.; Lopes, D.; Moura, L.S.S. [Petrobras, Rio de Janeiro (Brazil)
2008-10-15
Steam injection is a common thermal recovery method used in very viscous oil reservoirs. The method involves the injection of heat to reduce viscosity and mobilize oil. A steam generation and injection system consists primarily of a steam source, distribution lines, injection wells and a discarding tank. In order to optimize injection and improve the oil recovery factor, one must determine the parameters of steam flow such as pressure, temperature and steam quality. This study focused on developing a unified mathematical model by means of a mechanistic approach for two-phase steam flow in pipelines and wells. The hydrodynamic and heat transfer mechanistic model was implemented in a computer simulator to model the parameters of steam injection while trying to avoid the use of empirical correlations. A marching algorithm was used to determine the distribution of pressure and temperature along the pipelines and wellbores. The mathematical model for steam flow in injection systems, developed by a mechanistic approach (VapMec) performed well when the simulated values of pressures and temperatures were compared with the values measured during field tests. The newly developed VapMec model was incorporated in the LinVap-3 simulator that constitutes an engineering supporting tool for steam injection wells operated by Petrobras. 23 refs., 7 tabs., 6 figs.
Directory of Open Access Journals (Sweden)
L. Caponi
2017-06-01
This paper presents new laboratory measurements of the mass absorption efficiency (MAE) between 375 and 850 nm for 12 individual samples of mineral dust from different source areas worldwide and in two size classes: PM10.6 (mass fraction of particles of aerodynamic diameter lower than 10.6 µm) and PM2.5 (mass fraction of particles of aerodynamic diameter lower than 2.5 µm). The experiments were performed in the CESAM simulation chamber using mineral dust generated from natural parent soils and included optical and gravimetric analyses. The results show that the MAE values are lower for the PM10.6 mass fraction (range 37–135 × 10⁻³ m² g⁻¹ at 375 nm) than for the PM2.5 (range 95–711 × 10⁻³ m² g⁻¹ at 375 nm) and decrease with increasing wavelength as λ^(−AAE), where the Ångström absorption exponent (AAE) averages between 3.3 and 3.5, regardless of size. The size independence of AAE suggests that, for a given size distribution, the dust composition did not vary with size for this set of samples. Because of its high atmospheric concentration, light absorption by mineral dust can be competitive with black and brown carbon even during atmospheric transport over heavily polluted regions, when dust concentrations are significantly lower than at emission. The AAE values of mineral dust are higher than for black carbon (∼ 1) but in the same range as light-absorbing organic (brown) carbon. As a result, depending on the environment, there can be some ambiguity in apportioning the aerosol absorption optical depth (AAOD) based on spectral dependence, which is relevant to the development of remote sensing of light-absorbing aerosols and their assimilation in climate models. We suggest that the sample-to-sample variability in our dataset of MAE values is related to regional differences in the mineralogical composition of the parent soils. Particularly in the PM2.5 fraction, we found a strong
International Nuclear Information System (INIS)
Laurent, Philippe; Titarchuk, Lev
2011-01-01
We present herein a theoretical study of correlations between spectral indexes of X-ray emergent spectra and mass accretion rate (ṁ) in black hole (BH) sources, which provide a definitive signature for BHs. It has been firmly established, using the Rossi X-ray Timing Explorer (RXTE) in numerous BH observations during hard-soft state spectral evolution, that the photon index of X-ray spectra increases when ṁ increases and, moreover, the index saturates at high values of ṁ. In this paper, we present theoretical arguments that the observationally established index saturation effect versus mass accretion rate is a signature of the bulk (converging) flow onto the BH. Also, we demonstrate that the index saturation value depends on the plasma temperature of the converging flow. We self-consistently calculate the Compton cloud (CC) plasma temperature as a function of mass accretion rate using the energy balance between energy dissipation and Compton cooling. We explain the observable phenomenon, the index–ṁ correlations, using a Monte Carlo simulation of radiative processes in the innermost part (CC) of a BH source, and we account for the Comptonization processes in the presence of thermal and bulk motions, as basic types of plasma motion. We show that, when ṁ increases, BH sources evolve to high and very soft states (HSS and VSS, respectively), in which the strong blackbody (BB)-like and steep power-law components are formed in the resulting X-ray spectrum. The simultaneous detections of these two components strongly depend on the sensitivity of high-energy instruments, given that the relative contribution of the hard power-law tail in the resulting VSS spectrum can be very low, which is why, to date, RXTE observations of the VSS X-ray spectrum have been characterized by the presence of the strong BB-like component only. We also predict specific patterns for high-energy e-fold (cutoff) energy (E_fold) evolution with ṁ for thermal and dynamical (bulk
Caponi, Lorenzo; Formenti, Paola; Massabó, Dario; Di Biagio, Claudia; Cazaunau, Mathieu; Pangui, Edouard; Chevaillier, Servanne; Landrot, Gautier; Andreae, Meinrat O.; Kandler, Konrad; Piketh, Stuart; Saeed, Thuraya; Seibert, Dave; Williams, Earle; Balkanski, Yves; Prati, Paolo; Doussin, Jean-François
2017-06-01
This paper presents new laboratory measurements of the mass absorption efficiency (MAE) between 375 and 850 nm for 12 individual samples of mineral dust from different source areas worldwide and in two size classes: PM10.6 (mass fraction of particles of aerodynamic diameter lower than 10.6 µm) and PM2.5 (mass fraction of particles of aerodynamic diameter lower than 2.5 µm). The experiments were performed in the CESAM simulation chamber using mineral dust generated from natural parent soils and included optical and gravimetric analyses. The results show that the MAE values are lower for the PM10.6 mass fraction (range 37–135 × 10⁻³ m² g⁻¹ at 375 nm) than for the PM2.5 (range 95–711 × 10⁻³ m² g⁻¹ at 375 nm) and decrease with increasing wavelength as λ^(−AAE), where the Ångström absorption exponent (AAE) averages between 3.3 and 3.5, regardless of size. The size independence of AAE suggests that, for a given size distribution, the dust composition did not vary with size for this set of samples. Because of its high atmospheric concentration, light absorption by mineral dust can be competitive with black and brown carbon even during atmospheric transport over heavily polluted regions, when dust concentrations are significantly lower than at emission. The AAE values of mineral dust are higher than for black carbon (∼ 1) but in the same range as light-absorbing organic (brown) carbon. As a result, depending on the environment, there can be some ambiguity in apportioning the aerosol absorption optical depth (AAOD) based on spectral dependence, which is relevant to the development of remote sensing of light-absorbing aerosols and their assimilation in climate models. We suggest that the sample-to-sample variability in our dataset of MAE values is related to regional differences in the mineralogical composition of the parent soils. Particularly in the PM2.5 fraction, we found a strong linear correlation between the dust light-absorption properties and elemental
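Under the λ^(−AAE) power law quoted above, the exponent can be recovered from MAE measured at two wavelengths. The sketch below uses hypothetical sample values chosen to lie in the reported range; it is an illustration of the relation, not the paper's retrieval procedure.

```python
import numpy as np

def angstrom_exponent(mae1, lam1, mae2, lam2):
    """AAE from MAE at two wavelengths, assuming MAE ∝ λ^(−AAE)."""
    return -np.log(mae2 / mae1) / np.log(lam2 / lam1)

# Hypothetical dust sample: MAE (m² g⁻¹) at 375 nm, AAE typical of this study
mae_375 = 100e-3
aae_true = 3.4
# MAE at 850 nm implied by the power law
mae_850 = mae_375 * (850 / 375) ** (-aae_true)

aae_est = angstrom_exponent(mae_375, 375, mae_850, 850)
```

In practice one would fit log MAE against log λ over all measured wavelengths rather than use just two points.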
Coupled multi-physics simulation frameworks for reactor simulation: A bottom-up approach
International Nuclear Information System (INIS)
Tautges, Timothy J.; Caceres, Alvaro; Jain, Rajeev; Kim, Hong-Jun; Kraftcheck, Jason A.; Smith, Brandon M.
2011-01-01
A 'bottom-up' approach to multi-physics frameworks is described, in which common interfaces to simulation data are developed first, and existing physics modules are then adapted to communicate through those interfaces. Physics modules read and write data through those common interfaces, which also provide access to common simulation services like parallel IO, mesh partitioning, etc. Multi-physics codes are assembled as a combination of physics modules, services, interface implementations, and driver code which coordinates calling these various pieces. Examples of various physics modules and services connected to this framework are given. (author)
Simulation approaches to probabilistic structural design at the component level
International Nuclear Information System (INIS)
Stancampiano, P.A.
1978-01-01
In this paper, structural failure of large nuclear components is viewed as a random process with a low probability of occurrence. Therefore, a statistical interpretation of probability does not apply, and statistical inferences cannot be made due to the sparsity of actual structural failure data. In such cases, analytical estimates of the failure probabilities may be obtained from stress-strength interference theory. Since the majority of real design applications are complex, numerical methods are required to obtain solutions. Monte Carlo simulation appears to be the best general numerical approach. However, meaningful applications of simulation methods suggest research activities in three categories: methods development, failure-mode models development, and statistical data models development. (Auth.)
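A Monte Carlo estimate of a stress-strength interference probability can be sketched in a few lines: sample stress and strength from their assumed distributions and count how often stress exceeds strength. The lognormal stress and normal strength below, and their parameters, are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical load and resistance distributions (units: MPa)
stress = rng.lognormal(mean=np.log(200), sigma=0.15, size=n)
strength = rng.normal(loc=300, scale=30, size=n)

# Stress-strength interference: failure whenever stress exceeds strength
p_fail = np.mean(stress > strength)
```

For the small probabilities typical of structural components, plain sampling like this becomes inefficient and variance-reduction techniques (e.g. importance sampling) are usually needed.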
Ergonomics and simulation-based approach in improving facility layout
Abad, Jocelyn D.
2018-02-01
The use of simulation-based techniques in facility layout has been popular in industry due to their convenience and efficient generation of results. Nevertheless, the solutions generated are not capable of addressing delays due to workers' health and safety, which significantly impact overall operational efficiency. It is, therefore, critical to incorporate ergonomics in facility design. In this study, workstation analysis was incorporated into a Promodel simulation to improve the facility layout of a garment manufacturing facility. To test the effectiveness of the method, the existing and improved facility designs were measured using comprehensive risk level, efficiency, and productivity. Results indicated that the improved facility layout generated a decrease in comprehensive risk level and rapid upper limb assessment score, a 78% increase in efficiency, and a 194% increase in productivity compared to the existing design, proving that the approach is effective in attaining overall facility design improvement.
International Nuclear Information System (INIS)
de Jong, F.; Malfliet, R.
1991-01-01
Starting from a relativistic Lagrangian we derive a "conserving" approximation for the description of nuclear matter. We show this to be a nontrivial extension of the relativistic Dirac-Brueckner scheme. The saturation point of the calculated equation of state agrees very well with the empirical saturation point. The conserving character of the approach is tested by means of the Hugenholtz-Van Hove theorem; we find the theorem fulfilled very well around saturation. A new value for the compression modulus is derived, K = 310 MeV. We also calculate the occupation probabilities at normal nuclear matter densities by means of the spectral function. The average depletion κ of the Fermi sea is found to be κ ∼ 0.11.
Saito, Masatoshi
2007-11-01
Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
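The idea of cancelling tissue contrast by combining two log-intensity measurements can be sketched with a toy Beer-Lambert model. The attenuation coefficients below are invented for illustration, and the weight is the simplest tissue-cancelling choice, not the SNR-optimized spectra from the Lemacks framework used in the paper.

```python
import numpy as np

# Toy Beer-Lambert model: two beams ("low"/"high"), two materials.
# Linear attenuation coefficients (1/cm) are illustrative, not measured.
mu_tissue = {"low": 0.80, "high": 0.50}
mu_iodine = {"low": 8.0, "high": 3.0}

def intensity(beam, t_tissue, t_iodine):
    """Transmitted fraction through t_tissue cm of tissue + t_iodine cm iodine."""
    return np.exp(-(mu_tissue[beam] * t_tissue + mu_iodine[beam] * t_iodine))

# Weight chosen so tissue-thickness variations cancel in the subtraction
w = mu_tissue["low"] / mu_tissue["high"]

def de_signal(t_tissue, t_iodine):
    """Dual-energy subtraction signal: depends on iodine only."""
    low = -np.log(intensity("low", t_tissue, t_iodine))
    high = -np.log(intensity("high", t_tissue, t_iodine))
    return low - w * high
```

Algebraically, the subtraction equals (μ_I_low − w·μ_I_high)·t_iodine, so varying tissue thickness leaves the signal unchanged while iodine shows up linearly, which is the "clutter-free" behaviour the abstract describes.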
International Nuclear Information System (INIS)
Saito, Masatoshi
2007-01-01
Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
Modelling and simulating retail management practices: a first approach
Siebers, Peer-Olaf; Aickelin, Uwe; Celia, Helen; Clegg, Chris
2010-01-01
Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Despite the fact we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizati...
Discrete event simulation versus conventional system reliability analysis approaches
DEFF Research Database (Denmark)
Kozine, Igor
2010-01-01
Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models … and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author's experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches …
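The kernel that DES environments build on can be sketched with a priority queue: pop the earliest event, let its handler schedule follow-up events, stop at a time horizon. The failure/repair cycle below is a toy reliability model invented for illustration, not an example from the paper.

```python
import heapq

def run(events, horizon):
    """Minimal discrete-event loop over (time, name, handler) tuples."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        time, name, handler = heapq.heappop(queue)
        if time > horizon:
            break
        log.append((time, name))
        for follow_up in handler(time):   # handlers schedule future events
            heapq.heappush(queue, follow_up)
    return log

# Hypothetical component: fails, is repaired after 1 h, fails again 5 h later
def fail(t):
    return [(t + 1.0, "repair", repair)]

def repair(t):
    return [(t + 5.0, "fail", fail)]

history = run([(5.0, "fail", fail)], horizon=12.0)
```

From such an event log one can estimate availability or compare against a fault-tree prediction; full environments add entities, resources, and statistics collection on top of this loop.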
Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David
2017-03-03
Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.
Optimizing nitrogen fertilizer use: Current approaches and simulation models
International Nuclear Information System (INIS)
Baethgen, W.E.
2000-01-01
Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically cause lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N₂O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing, and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)
Schwartz, N.; Huisman, J. A.; Furman, A.
2012-12-01
In recent years, there is a growing interest in using geophysical methods in general and spectral induced polarization (SIP) in particular as a tool to detect and monitor organic contaminants within the subsurface. The general idea of the SIP method is to inject alternating current through a soil volume and to measure the resultant potential in order to obtain the relevant soil electrical properties (e.g. complex impedance, complex conductivity/resistivity). Currently, a complete mechanistic understanding of the effect of organic contaminants on the SIP response of soil is still absent. In this work, we combine laboratory experiments with modeling to reveal the main processes affecting the SIP signature of soil contaminated with organic pollutant. In a first set of experiments, we investigate the effect of non-aqueous phase liquids (NAPL) on the complex conductivity of unsaturated porous media. Our results show that addition of NAPL to the porous media increases the real component of the soil electrical conductivity and decreases the polarization of the soil (imaginary component of the complex conductivity). Furthermore, addition of NAPL to the soil resulted in an increase of the electrical conductivity of the soil solution. Based on these results, we suggest that adsorption of NAPL to the soil surface, and exchange process between polar organic compounds in the NAPL and inorganic ions in the soil are the main processes affecting the SIP signature of the contaminated soil. To further support our hypothesis, the temporal change of the SIP signature of a soil as function of a single organic cation concentration was measured. In addition to the measurements of the soil electrical properties, we also measured the effect of the organic cation on the chemical composition of both the bulk and the surface of the soil. The results of those experiments again showed that the electrical conductivity of the soil increased with increasing contaminant concentration. In addition
Cross spectral, active and passive approach to face recognition for improved performance
Grudzien, A.; Kowalski, M.; Szustakowski, M.
2017-08-01
Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Since the characteristics used are unique, biometrics can create a direct link between a person and an identity, based on a variety of characteristics. The human face is one of the most important biometric modalities for automatic authentication. The most popular method of face recognition, which relies on processing visible-range information, remains imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach that combines both modalities.
A new approach to the spectral analysis of liquid membrane oscillators by Gábor transformation
DEFF Research Database (Denmark)
Płocharska-Jankowska, E.; Szpakowska, M.; Mátéfi-Tempfli, Stefan
2006-01-01
Liquid membrane oscillators very frequently have an irregular oscillatory behavior. Fourier transformation cannot be used for these nonstationary oscillations to establish their power spectra. This important point seems to be overlooked in the field of chemical oscillators. A new approach...... is presented here based on Gábor transformation allowing one to obtain power spectra of any kind of oscillations that can be met experimentally. The proposed Gábor analysis is applied to a liquid membrane oscillator containing a cationic surfactant. It was found that the power spectra are strongly influenced...
A unified approach for suppressing sidelobes arising in the spectral response of rugate filters
International Nuclear Information System (INIS)
Abo-Zahhad, M.; Bataineh, M.
2000-01-01
This paper suggests a universal approach to reducing the sidelobes which usually appear on both sides of the stop band of a rugate filter. Both quintic matching layers and apodization functions are used to improve the filter's response. The proposed technique can control the ripple level by properly choosing the refractive index profile after amending it to include matching layers and/or modulating its profile with a slowly varying apodization (or tapering) function. Two illustrative examples are given to demonstrate the robustness of the proposed technique. The given examples suggest that combining both effects on the index of refraction profile leads to the lowest possible ripple level. A multichannel filter response is obtained by wavelet construction of the refractive index profile, with potential applications in multimode lasers and wavelength division multiplexing networks. The obtained results demonstrate the applicability of the adopted approach to designing ripple-free rugate filters. The extension to stack filters and other waveguiding structures is also feasible. (authors). 14 refs., 8 figs
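The amended index profile described above, a sinusoidal modulation whose amplitude is tapered by a slowly varying envelope, can be sketched as follows; the sin² envelope and all numbers are illustrative choices, not the authors' design:

```python
import math

def rugate_index(z, n0=1.8, dn=0.2, lam0=550e-9, zmax=5e-6):
    """Refractive index at depth z (metres) of an apodized rugate profile:
    a sinusoidal modulation about n0 whose amplitude is tapered by a slowly
    varying envelope, which suppresses the stop-band sidelobes.
    The sin^2 envelope and all numbers are illustrative choices."""
    envelope = math.sin(math.pi * z / zmax) ** 2
    return n0 + 0.5 * dn * envelope * math.sin(4.0 * math.pi * n0 * z / lam0)
```

The envelope sends the modulation amplitude smoothly to zero at both film boundaries, which is what suppresses the abrupt index discontinuities responsible for the sidelobes.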
Modification of the TASMIP x-ray spectral model for the simulation of microfocus x-ray sources
International Nuclear Information System (INIS)
Sisniega, A.; Vaquero, J. J.; Desco, M.
2014-01-01
Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or inclusion of photon energy information into data processing. There is a variety of publicly available tools for the estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range, and/or anode material. For this reason, the authors propose in this work a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of its spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: The inherent filtration of the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. To match the experimentally measured exposure data, the combined dataset required applying a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W. The validation of the model against real acquired data showed errors in exposure and attenuation in
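TASMIP-style models represent the photon fluence in each energy bin as a polynomial in the tube peak voltage, and filtration corrections, including the negative filtration fitted above, act through Beer-Lambert attenuation. A toy sketch with invented polynomial coefficients and only approximate aluminum attenuation values, not the published TASMIP tables:

```python
import math

# TASMIP-style model: the photon fluence in each energy bin is a polynomial
# in the tube peak voltage (kVp). These coefficients are invented for
# illustration; the real TASMIP tables are published per 1 keV bin.
COEFFS = {20: (0.0, 1.2e4, -50.0), 30: (0.0, 9.0e3, -30.0),
          40: (0.0, 6.0e3, -20.0), 50: (0.0, 3.0e3, -10.0)}

# rough aluminum linear attenuation per mm at the bin energies (illustrative)
MU_AL_PER_MM = {20: 0.9, 30: 0.3, 40: 0.15, 50: 0.1}

def tasmip_fluence(kvp):
    """Unfiltered spectrum: bin energy (keV) -> photon fluence; bins above
    the peak voltage hold no photons."""
    return {E: max(0.0, sum(c * kvp ** i for i, c in enumerate(coef)))
            if E <= kvp else 0.0
            for E, coef in COEFFS.items()}

def filter_spectrum(spectrum, mu_per_mm, thickness_mm):
    """Beer-Lambert attenuation by added filtration; a negative thickness
    (like the fitted -0.21 mm Al above) boosts the fluence instead."""
    return {E: n * math.exp(-mu_per_mm[E] * thickness_mm)
            for E, n in spectrum.items()}
```

Because low-energy bins attenuate most strongly, removing (negative) filtration preferentially restores the soft end of the spectrum, which is the mechanism the authors exploit to undo the radiology model's inherent 1.68 mm Al.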
An approach to high speed ship ride quality simulation
Malone, W. L.; Vickery, J. M.
1975-01-01
The high speeds attained by certain advanced surface ships result in a spectrum of motion which is higher in frequency than that of conventional ships. This fact along with the inclusion of advanced ride control features in the design of these ships resulted in an increased awareness of the need for ride criteria. Such criteria can be developed using data from actual ship operations in varied sea states or from clinical laboratory experiments. A third approach is to simulate ship conditions using measured or calculated ship motion data. Recent simulations have used data derived from a math model of Surface Effect Ship (SES) motion. The model in turn is based on equations of motion which have been refined with data from scale models and SES of up to 101 600-kg (100-ton) displacement. Employment of broad band motion emphasizes the use of the simulators as a design tool to evaluate a given ship configuration in several operational situations and also serves to provide data as to the overall effect of a given motion on crew performance and physiological status.
A Fault Sample Simulation Approach for Virtual Testability Demonstration Test
Institute of Scientific and Technical Information of China (English)
ZHANG Yong; QIU Jing; LIU Guanjun; YANG Peng
2012-01-01
Virtual testability demonstration test has many advantages, such as low cost, high efficiency, low risk and few restrictions. It brings new requirements to fault sample generation. A fault sample simulation approach for virtual testability demonstration test based on stochastic process theory is proposed. First, the similarities and differences in fault sample generation between physical and virtual testability demonstration tests are discussed. Second, it is pointed out that the fault occurrence process under perfect repair is a renewal process. Third, the interarrival time distribution function of the next fault event is given, and the steps and flowcharts of fault sample generation are introduced. The number of faults and their occurrence times are obtained by statistical simulation. Finally, experiments are carried out on a stable tracking platform. Because a variety of life distributions and maintenance modes are considered and some assumptions are removed, the size and structure of the simulated fault samples are closer to the actual results and more reasonable. The proposed method can effectively guide fault injection in virtual testability demonstration tests.
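The renewal-process sampling step can be sketched directly: under perfect ("as good as new") repair, successive interarrival times are i.i.d. draws from the item's life distribution. A minimal sketch with an illustrative Weibull life distribution; the shape, scale and mission time are not values from the paper:

```python
import math
import random

def simulate_fault_times(mission_time, draw_interarrival, seed=0):
    """One simulated fault sample: fault occurrence times in (0, mission_time].

    Under perfect repair the fault process is a renewal process, so
    successive interarrival times are i.i.d. draws from the life distribution.
    """
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += draw_interarrival(rng)   # time to the next fault event
        if t > mission_time:
            return times
        times.append(t)

def weibull_draw(shape, scale):
    """Interarrival sampler for a Weibull life distribution (inverse CDF)."""
    return lambda rng: scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

# illustrative: shape 1.5, characteristic life 200 h, 1000 h mission
faults = simulate_fault_times(1000.0, weibull_draw(1.5, 200.0))
```

Repeating this over many missions yields the number of faults and their occurrence times as a statistical sample, which is what the virtual demonstration test injects.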
Conditional flood frequency and catchment state: a simulation approach
Brettschneider, Marco; Bourgin, François; Merz, Bruno; Andreassian, Vazken; Blaquiere, Simon
2017-04-01
Catchments have memory, and the conditional flood frequency distribution for a time period ahead can be seen as non-stationary: it varies with the catchment state and climatic factors. From a risk management perspective, understanding the link between conditional flood frequency and catchment state is key to anticipating potential periods of higher flood risk. Here, we adopt a simulation approach to explore the link between flood frequency obtained by continuous rainfall-runoff simulation and the initial state of the catchment. The simulation chain is based on i) a three-state rainfall generator applied at the catchment scale, whose parameters are estimated for each month, and ii) the GR4J lumped rainfall-runoff model, whose parameters are calibrated with all available data. For each month, a large number of stochastic realizations of the continuous rainfall generator for the next 12 months are used as inputs to the GR4J model in order to obtain a large number of stochastic streamflow realizations for the next 12 months. This process is then repeated for 50 different initial states of the soil moisture reservoir of the GR4J model and for all the catchments. Thus, 50 different conditional flood frequency curves are obtained for the 50 different initial catchment states. We will present an analysis of the link between the catchment states, the period of the year and the strength of the conditioning of the flood frequency compared to the unconditional flood frequency. A large sample of diverse catchments in France will be used.
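A toy version of this simulation chain can be written down compactly, with a two-state rainfall generator and a single linear reservoir standing in for the three-state generator and GR4J; all parameters are illustrative:

```python
import random

def rainfall_series(n_days, p_wet=0.3, mean_depth=8.0, rng=None):
    """Toy two-state daily rainfall generator (the paper uses a three-state,
    monthly-parameterised generator; this is a simplified stand-in)."""
    rng = rng or random.Random()
    return [rng.expovariate(1.0 / mean_depth) if rng.random() < p_wet else 0.0
            for _ in range(n_days)]

def linear_reservoir(rain, s0, k=0.05, smax=300.0):
    """Toy soil-moisture store standing in for GR4J: the store fills with
    rain (capped at smax) and releases a fraction k as streamflow."""
    s, flows = s0, []
    for r in rain:
        s = min(s + r, smax)
        q = k * s
        s -= q
        flows.append(q)
    return flows

def conditional_flood_quantile(s0, n_realizations=500, q=0.99, seed=1):
    """Flood quantile for the next 365 days, conditioned on initial store s0."""
    rng = random.Random(seed)
    maxima = sorted(max(linear_reservoir(rainfall_series(365, rng=rng), s0))
                    for _ in range(n_realizations))
    return maxima[min(int(q * n_realizations), n_realizations - 1)]
```

Evaluating the quantile for a grid of initial storages mimics the paper's 50 initial states: a wetter initial catchment yields a higher conditional flood quantile until the memory of the initial state fades.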
A New Approach to Monte Carlo Simulations in Statistical Physics
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
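The random walk in energy space described here is the Wang-Landau scheme of Ref. 2: walk with acceptance probability min(1, g(E)/g(E')), multiply the running estimate of g(E) by a modification factor f at every step, and refine f once the energy histogram is flat. A minimal sketch for a small 2D Ising model; the system size, flatness threshold and final modification factor are illustrative:

```python
import math
import random

def wang_landau_ising(L=8, f_final=1.0001, flatness=0.8, sweeps=10000):
    """Wang-Landau random walk in energy space for the 2D Ising model.

    Estimates ln g(E), the log density of states, from which all
    thermodynamic properties follow. Parameters are illustrative.
    """
    N = L * L
    spins = [[1] * L for _ in range(L)]

    def site_energy(i, j):
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        return -spins[i][j] * nb

    E = sum(site_energy(i, j) for i in range(L) for j in range(L)) // 2
    ln_g, hist = {}, {}   # running ln g(E) estimate and visit histogram
    ln_f = 1.0            # modification factor, reduced when hist is flat

    while ln_f > math.log(f_final):
        for _ in range(sweeps * N):
            i, j = random.randrange(L), random.randrange(L)
            E_new = E - 2 * site_energy(i, j)   # energy after flipping (i, j)
            # accept with probability min(1, g(E) / g(E_new))
            if random.random() < math.exp(
                    min(0.0, ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0))):
                spins[i][j] *= -1
                E = E_new
            ln_g[E] = ln_g.get(E, 0.0) + ln_f    # update density of states
            hist[E] = hist.get(E, 0) + 1
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            hist = {}          # histogram flat: reset and refine ln f
            ln_f /= 2.0
    return ln_g
```

Because the walker is penalized for revisiting well-sampled energies, it is driven through 1st-order barriers that trap canonical sampling, which is the advantage the abstract highlights.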
Energy Technology Data Exchange (ETDEWEB)
Wang, Songlin; Matsuda, Isamu; Long, Fei; Ishii, Yoshitaka, E-mail: yishii@uic.edu [University of Illinois at Chicago, Department of Chemistry (United States)
2016-02-15
This study demonstrates a novel spectral editing technique for protein solid-state NMR (SSNMR) to simplify the spectrum drastically and to reduce the ambiguity for protein main-chain signal assignments in fast magic-angle-spinning (MAS) conditions at a wide frequency range of 40–80 kHz. The approach termed HIGHLIGHT (Wang et al., in Chem Comm 51:15055–15058, 2015) combines the reverse ¹³C, ¹⁵N-isotope labeling strategy and selective signal quenching using the frequency-selective REDOR pulse sequence under fast MAS. The scheme allows one to selectively observe the signals of "highlighted" labeled amino-acid residues that precede or follow unlabeled residues through selectively quenching ¹³CO or ¹⁵N signals for a pair of consecutively labeled residues by recoupling ¹³CO–¹⁵N dipolar couplings. Our numerical simulation results showed that the scheme yielded only ∼15% loss of signals for the highlighted residues while quenching as much as ∼90% of signals for non-highlighted residues. For lysine-reverse-labeled micro-crystalline GB1 protein, the 2D ¹⁵N/¹³Cα correlation and 2D ¹³Cα/¹³CO correlation SSNMR spectra by the HIGHLIGHT approach yielded signals only for six residues following and preceding the unlabeled lysine residues, respectively. The experimental dephasing curves agreed reasonably well with the corresponding simulation results for highlighted and quenched residues at spinning speeds of 40 and 60 kHz. The compatibility of the HIGHLIGHT approach with fast MAS allows for sensitivity enhancement by paramagnetic assisted data collection (PACC) and ¹H detection. We also discuss how the HIGHLIGHT approach facilitates signal assignments using ¹³C-detected 3D SSNMR by demonstrating full sequential assignments of lysine-reverse-labeled micro-crystalline GB1 protein (∼300 nmol), for which data collection required only 11 h. The HIGHLIGHT approach offers valuable
New approach for simulating groundwater flow in discrete fracture network
Fang, H.; Zhu, J.
2017-12-01
In this study, we develop a new approach to calculate the groundwater flowrate and hydraulic head distribution in a two-dimensional discrete fracture network (DFN) where both laminar and turbulent flows co-exist in individual fractures. The cubic law is used to calculate the hydraulic head distribution and flow behavior in fractures where flow is laminar, while Forchheimer's law is used to quantify turbulent flow behavior. The Reynolds number is used to distinguish the flow characteristics in individual fractures. The combination of linear and non-linear equations is solved iteratively to determine the flowrates in all fractures and the hydraulic heads at all intersections. We examine the potential errors in both flowrate and hydraulic head introduced by the uniform-flow assumption. Applying the cubic law in all fractures regardless of actual flow conditions overestimates the flowrate when turbulent flow may exist, while applying Forchheimer's law indiscriminately underestimates the flowrate when laminar flow exists in the network. The contrast between the apertures of large and small fractures in the DFN has a significant impact on the potential errors of using only the cubic law or Forchheimer's law. Both the cubic law and Forchheimer's law simulate similar hydraulic head distributions, as the main difference between these two approaches lies in the predicted flowrates. Fracture irregularity does not significantly affect the potential errors from using only the cubic law or Forchheimer's law if the network configuration remains similar. The relative density of fractures does not significantly affect the relative performance of the cubic law and Forchheimer's law.
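For a single fracture, the law-selection logic can be sketched as follows; the inertial coefficient B and the critical Reynolds number are illustrative placeholders, not values from the study:

```python
import math

def fracture_flow(dh, b, w, L, B=1.0e9, re_crit=10.0,
                  rho=1000.0, g=9.81, mu=1.0e-3):
    """Flowrate through one fracture, choosing the law by Reynolds number.

    Cubic law:        dh = A*Q,            A = 12*mu*L / (rho*g*w*b**3)
    Forchheimer law:  dh = A*Q + B*Q**2    (B is an illustrative inertial
                                            coefficient, not from the paper)
    dh: head drop (m), b: aperture (m), w: width (m), L: length (m).
    """
    A = 12.0 * mu * L / (rho * g * w * b ** 3)
    Q = dh / A                            # laminar (cubic-law) estimate
    if rho * Q / (mu * w) < re_crit:      # Reynolds number check
        return Q, "cubic"
    # turbulent regime: solve A*Q + B*Q**2 = dh for the positive root
    Q = (-A + math.sqrt(A * A + 4.0 * B * dh)) / (2.0 * B)
    return Q, "forchheimer"
```

The quadratic Forchheimer term always returns less flow than the cubic-law estimate for the same head drop, which is the sign of the error the abstract describes when the cubic law is applied indiscriminately.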
Numerical Simulation of Incremental Sheet Forming by Simplified Approach
Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.
2011-01-01
Incremental Sheet Forming (ISF) is a process which can transform a flat metal sheet into a complex 3D part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tooling is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results but are time consuming. A Simplified Approach for managing the contact between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with implicit Abaqus.
[New approaches in pharmacology: numerical modelling and simulation].
Boissel, Jean-Pierre; Cucherat, Michel; Nony, Patrice; Dronne, Marie-Aimée; Kassaï, Behrouz; Chabaud, Sylvie
2005-01-01
The complexity of pathophysiological mechanisms is beyond the capabilities of traditional approaches. Many of the decision-making problems in public health, such as initiating mass screening, are complex. Progress in genomics and proteomics, and the resulting extraordinary increase in knowledge with regard to interactions between gene expression, the environment and behaviour, the customisation of risk factors and the need to combine therapies that individually have minimal though well documented efficacy, has led doctors to raise new questions: how to optimise choice and the application of therapeutic strategies at the individual rather than the group level, while taking into account all the available evidence? This is essentially a problem of complexity with dimensions similar to the previous ones: multiple parameters with nonlinear relationships between them, varying time scales that cannot be ignored etc. Numerical modelling and simulation (in silico investigations) have the potential to meet these challenges. Such approaches are considered in drug innovation and development. They require a multidisciplinary approach, and this will involve modification of the way research in pharmacology is conducted.
Amp: A modular approach to machine learning in atomistic simulations
Khorshidi, Alireza; Peterson, Andrew A.
2016-10-01
Electronic structure calculations, such as those employing Kohn-Sham density functional theory or ab initio wavefunction theories, have allowed for atomistic-level understandings of a wide variety of phenomena and properties of matter at small scales. However, the computational cost of electronic structure methods drastically increases with length and time scales, which makes these methods difficult for long time-scale molecular dynamics simulations or large-sized systems. Machine-learning techniques can provide accurate potentials that can match the quality of electronic structure calculations, provided sufficient training data. These potentials can then be used to rapidly simulate large and long time-scale phenomena at similar quality to the parent electronic structure approach. Machine-learning potentials usually take a bias-free mathematical form and can be readily developed for a wide variety of systems. Electronic structure calculations have favorable properties, namely that they are noiseless and that targeted training data can be produced on demand, which make them particularly well-suited for machine learning. This paper discusses our modular approach to atomistic machine learning through the development of the open-source Atomistic Machine-learning Package (Amp), which allows for representations of both the total and atom-centered potential energy surface, in both periodic and non-periodic systems. Potentials developed through the atom-centered approach are simultaneously applicable for systems with various sizes. Interpolation can be enhanced by introducing custom descriptors of the local environment. We demonstrate this in the current work for Gaussian-type, bispectrum, and Zernike-type descriptors. Amp has an intuitive and modular structure with an interface through the python scripting language yet has parallelizable fortran components for demanding tasks; it is designed to integrate closely with the widely used Atomic Simulation Environment (ASE), which
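A minimal sketch of the atom-centered Gaussian descriptors mentioned above, in the style of Behler-Parrinello radial symmetry functions; the eta values and cutoff radius are illustrative, and this is a standalone sketch rather than Amp's actual API:

```python
import math

def g2_fingerprint(center, neighbors, etas=(0.05, 0.5, 5.0), rc=6.5):
    """Atom-centered radial Gaussian descriptor (Behler-Parrinello G2 type).

    center and neighbors are 3-tuples of Cartesian coordinates in angstroms;
    the returned vector is invariant to neighbor ordering, as a fingerprint
    of the local environment must be.
    """
    def cutoff(r):
        # cosine cutoff: contributions vanish smoothly at r = rc
        return 0.5 * (math.cos(math.pi * r / rc) + 1.0) if r < rc else 0.0

    dists = [math.dist(center, n) for n in neighbors]
    return [sum(math.exp(-eta * r * r) * cutoff(r) for r in dists)
            for eta in etas]
```

A regression model (e.g. a neural network) mapping such fingerprints to per-atom energies gives a potential whose total energy is a sum of atomic contributions, which is why the atom-centered approach transfers across system sizes.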
Energy Technology Data Exchange (ETDEWEB)
Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.
2017-06-15
Highlights: • Developed a new scaling technique for the dipole–dipole interaction energy. • Developed a new scaling technique for the exchange interaction energy. • Used the scaling laws to extend atomistic simulations to the micrometer length scale. • Demonstrated the transition from a mono-domain to a vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energies as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
A high-fidelity approach towards simulation of pool boiling
Energy Technology Data Exchange (ETDEWEB)
Yazdani, Miad; Radcliff, Thomas; Soteriou, Marios; Alahyari, Abbas A. [United Technologies Research Center, East Hartford, Connecticut 06108 (United States)
2016-01-15
A novel numerical approach is developed to simulate the multiscale problem of pool-boiling phase change. The particular focus is to develop a simulation technique that is capable of predicting the heat transfer and hydrodynamic characteristics of nucleate boiling and the transition to critical heat flux on surfaces of arbitrary shape and roughness distribution addressing a critical need to design enhanced boiling heat transfer surfaces. The macro-scale of the phase change and bubble dynamics is addressed through employing off-the-shelf Computational Fluid Dynamics (CFD) methods for interface tracking and interphase mass and energy transfer. The micro-scale of the microlayer, which forms at early stage of bubble nucleation near the wall, is resolved through asymptotic approximation of the thin-film theory which provides a closed-form solution for the distribution of the micro-layer and its influence on the evaporation process. In addition, the sub-grid surface roughness is represented stochastically through probabilistic density functions and its role in bubble nucleation and growth is then represented based on the thermodynamics of nucleation process. This combination of deterministic CFD, local approximation, and stochastic representation allows the simulation of pool boiling on any surface with known roughness and enhancement characteristics. The numerical model is validated for dynamics and hydrothermal characteristics of a single nucleated bubble on a flat surface against available literature data. In addition, the prediction of pool-boiling heat transfer coefficient is verified against experimental measurements as well as reputable correlations for various roughness distributions and different surface orientations. Finally, the model is employed to demonstrate pool-boiling phenomenon on enhanced structures with reentrance cavities and to explore the effect of enhancement feature design on thermal and hydrodynamic characteristics of these surfaces.
A MULTIDIMENSIONAL AND MULTIPHYSICS APPROACH TO NUCLEAR FUEL BEHAVIOR SIMULATION
Energy Technology Data Exchange (ETDEWEB)
R. L. Williamson; J. D. Hales; S. R. Novascone; M. R. Tonks; D. R. Gaston; C. J. Permann; D. Andrs; R. C. Martineau
2012-04-01
Important aspects of fuel rod behavior, for example pellet-clad mechanical interaction (PCMI), fuel fracture, oxide formation, non-axisymmetric cooling, and response to fuel manufacturing defects, are inherently multidimensional in addition to being complicated multiphysics problems. Many current modeling tools are strictly 2D axisymmetric or even 1.5D. This paper outlines the capabilities of a new fuel modeling tool able to analyze either 2D axisymmetric or fully 3D models. These capabilities include temperature-dependent thermal conductivity of fuel; swelling and densification; fuel creep; pellet fracture; fission gas release; cladding creep; irradiation growth; and gap mechanics (contact and gap heat transfer). The need for multiphysics, multidimensional modeling is then demonstrated through a discussion of results for a set of example problems. The first, a 10-pellet rodlet, demonstrates the viability of the solution method employed. This example highlights the effect of our smeared cracking model and also shows the multidimensional nature of discrete fuel pellet modeling. The second example relies on the multidimensional, multiphysics approach to analyze a missing pellet surface problem. As a final example, we show a lower-length-scale simulation coupled to a continuum-scale simulation.
A new approach for turbulent simulations in complex geometries
Israel, Daniel M.
Historically turbulence modeling has been sharply divided into Reynolds averaged Navier-Stokes (RANS), in which all the turbulent scales of motion are modeled, and large-eddy simulation (LES), in which only a portion of the turbulent spectrum is modeled. In recent years there have been numerous attempts to couple these two approaches either by patching RANS and LES calculations together (zonal methods) or by blending the two sets of equations. In order to create a proper bridging model, that is, a single set of equations which captures both RANS- and LES-like behavior, it is necessary to place both RANS and LES in a more general framework. The goal of the current work is threefold: to provide such a framework, to demonstrate how the Flow Simulation Methodology (FSM) fits into this framework, and to evaluate the strengths and weaknesses of the current version of the FSM. To do this, first a set of filtered Navier-Stokes (FNS) equations are introduced in terms of an arbitrary generalized filter. Additional exact equations are given for the second order moments and the generalized subfilter dissipation rate tensor. This is followed by a discussion of the role of implicit and explicit filters in turbulence modeling. The FSM is then described with particular attention to its role as a bridging model. In order to evaluate the method, a specific implementation of the FSM approach is proposed. Simulations are presented using this model for the case of a separating flow over a "hump" with and without flow control. Careful attention is paid to error estimation and, in particular, to how using flow statistics and time series affects the error analysis. Both mean flow and Reynolds stress profiles are presented, as well as the phase averaged turbulent structures and wall pressure spectra. Using the phase averaged data it is possible to examine how the FSM partitions the energy between the coherent resolved scale motions, the random resolved scale fluctuations, and the subfilter
Simulation of IRIS 2010 missile experiments for validation of integral simulation approach
International Nuclear Information System (INIS)
Siefert, Alexander; Henkel, Fritz-Otto
2013-01-01
Conclusion: The material model and modeling approach used show acceptable agreement with the test data, but further improvements are possible. Tri-axial test: the material model must be improved to capture the higher strain values in tests with confining pressure. Possible solution: defining separate damage curves for different confining pressures. Flexural test: the modeling approach has to be improved regarding the swing-back phase. Possible first step: investigation of crack closing and tensional recovery. Punching test: the challenge for this simulation is element erosion. Solution: defining a reliable deletion criterion is possible by averaging several case studies. An alternative is the application of the SPH method. In general: the material properties showed differences from the code definitions. Therefore, test data are a required input for detailed analysis of local damage (especially for existing structures). Microscopic cracking cannot be investigated using a homogeneous material
Energy Technology Data Exchange (ETDEWEB)
Kaldvee, K.; Nefedova, A.V. [Institute of Physics, University of Tartu, W. Ostwaldi st. 1, Tartu 50411 (Estonia); Fedorenko, S.G. [Voevodsky Institute of Chemical Kinetics and Combustion SB RAS, Novosibirsk 630090 (Russian Federation); Vanetsev, A.S. [Institute of Physics, University of Tartu, W. Ostwaldi st. 1, Tartu 50411 (Estonia); Prokhorov General Physics Institute RAS, Vavilov st. 38, Moscow 119991 (Russian Federation); Orlovskaya, E.O. [Prokhorov General Physics Institute RAS, Vavilov st. 38, Moscow 119991 (Russian Federation); Puust, L.; Pärs, M.; Sildos, I. [Institute of Physics, University of Tartu, W. Ostwaldi st. 1, Tartu 50411 (Estonia); Ryabova, A.V. [Prokhorov General Physics Institute RAS, Vavilov st. 38, Moscow 119991 (Russian Federation); National Research Nuclear University Moscow Engineering Physics Institute, Kashirskoe Highway, 31, Moscow 115409 (Russian Federation); Orlovskii, Yu.V., E-mail: orlovski@Lst.gpi.ru [Institute of Physics, University of Tartu, W. Ostwaldi st. 1, Tartu 50411 (Estonia); Prokhorov General Physics Institute RAS, Vavilov st. 38, Moscow 119991 (Russian Federation)
2017-03-15
The fluorescence kinetics and fluorescence intensity ratio (FIR) methods for contactless optical temperature measurement in the NIR spectral range with Nd{sup 3+} doped YAG micro- and YPO{sub 4} nanocrystals are considered and their problems are revealed. The requirements for a good RE-doped crystalline nanoparticle temperature sensor are formulated.
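The FIR method mentioned above can be sketched in a few lines: the ratio of emission from two thermally coupled levels follows a Boltzmann law, which is inverted to read out temperature. The constants `C` and `DE` below are illustrative assumptions, not fitted Nd{sup 3+} parameters.

```python
import math

# Fluorescence intensity ratio (FIR) thermometry sketch: the ratio of
# emission from two thermally coupled levels follows a Boltzmann law,
#   R(T) = C * exp(-dE / (kB * T)).
# C and DE below are illustrative values, not measured Nd3+ parameters.
KB = 8.617333262e-5   # Boltzmann constant, eV/K
C = 4.0               # pre-exponential factor (assumed)
DE = 0.12             # energy gap between the coupled levels, eV (assumed)

def fir(temperature_k):
    """Predicted intensity ratio at a given temperature."""
    return C * math.exp(-DE / (KB * temperature_k))

def temperature_from_fir(ratio):
    """Invert the Boltzmann law to recover temperature from a measured ratio."""
    return -DE / (KB * math.log(ratio / C))

def relative_sensitivity(temperature_k):
    """Relative sensitivity S = |d ln R / dT| = dE / (kB * T^2),
    a standard figure of merit for FIR sensors."""
    return DE / (KB * temperature_k ** 2)
```

Note that the relative sensitivity falls off as 1/T², one reason the choice of energy gap matters for a given temperature range.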
A novel approach to multihazard modeling and simulation.
Smith, Silas W; Portelli, Ian; Narzisi, Giuseppe; Nelson, Lewis S; Menges, Fabian; Rekow, E Dianne; Mincer, Joshua S; Mishra, Bhubaneswar; Goldfrank, Lewis R
2009-06-01
To develop and apply a novel modeling approach to support medical and public health disaster planning and response using a sarin release scenario in a metropolitan environment. An agent-based disaster simulation model was developed incorporating the principles of dose response, surge response, and psychosocial characteristics superimposed on topographically accurate geographic information system architecture. The modeling scenarios involved passive and active releases of sarin in multiple transportation hubs in a metropolitan city. Parameters evaluated included emergency medical services, hospital surge capacity (including implementation of disaster plan), and behavioral and psychosocial characteristics of the victims. In passive sarin release scenarios of 5 to 15 L, mortality increased nonlinearly from 0.13% to 8.69%, reaching 55.4% with active dispersion, reflecting higher initial doses. Cumulative mortality rates from releases in 1 to 3 major transportation hubs similarly increased nonlinearly as a function of dose and systemic stress. The increase in mortality rate was most pronounced in the 80% to 100% emergency department occupancy range, analogous to the previously observed queuing phenomenon. Effective implementation of hospital disaster plans decreased mortality and injury severity. Decreasing ambulance response time and increasing available responding units reduced mortality among potentially salvageable patients. Adverse psychosocial characteristics (excess worry and low compliance) increased demands on health care resources. Transfer to alternative urban sites was possible. An agent-based modeling approach provides a mechanism to assess complex individual and systemwide effects in rare events.
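The "queuing phenomenon" invoked above, mortality climbing most steeply in the 80% to 100% occupancy range, can be illustrated with the classical Erlang-C delay formula; the 10-bed department and the mapping of occupancy to offered load are hypothetical illustrations, not part of the study's agent-based model.

```python
import math

def erlang_c(servers, offered_load):
    """Probability an arriving patient must wait (Erlang-C formula).
    offered_load = arrival_rate / service_rate, in erlangs."""
    rho = offered_load / servers
    if rho >= 1.0:
        return 1.0  # saturated system: everyone waits
    top = offered_load ** servers / math.factorial(servers)
    series = sum(offered_load ** k / math.factorial(k) for k in range(servers))
    return top / ((1.0 - rho) * series + top)

# Delay probability vs occupancy for a hypothetical 10-bed emergency dept.
for occupancy in (0.5, 0.8, 0.9, 0.95):
    p_wait = erlang_c(10, 10 * occupancy)
    print(f"occupancy {occupancy:.0%}: P(wait) = {p_wait:.3f}")
```

The delay probability rises slowly at moderate occupancy and then sharply as utilization approaches 1, the same nonlinearity the simulation observed in mortality.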
Using Intelligent System Approaches for Simulation of Electricity Markets
Hamagami, Tomoki
The significance of and approaches to applying intelligent systems to artificial electricity markets are discussed. In recent years, with the restructuring of the electric power system in Japan, deregulation of the electricity market has been progressing. The most major change in the market is the founding of JEPX (Japan Electric Power eXchange), which is expected to help lower power bills through effective use of surplus electricity. The electricity market designates the exchange of electric power between electric power suppliers (supplier agents) themselves. In the market, the goal of each supplier agent is to maximize its revenue over the entire trading period, and the market shows complex behavior, which can be modeled on a multiagent platform. Multiagent simulations, which have been studied as "artificial markets", help to predict spot prices, to plan investments, and to discuss market rules. Moreover, intelligent system approaches provide a basis for constructing more reasonable policies for each agent. This article first gives a brief summary of the electricity market in Japan and of studies of artificial markets. Then, a survey of typical studies of artificial electricity markets is presented. Through these topics, a future vision for such studies is outlined.
Optimal Subinterval Selection Approach for Power System Transient Stability Simulation
Directory of Open Access Journals (Sweden)
Soobae Kim
2015-10-01
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step can cover, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined, because analysis of the system dynamics may be required. The selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis based on modal analysis of a single-machine infinite-bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step obtained from the proposed approach can reduce the computational burden while maintaining accurate simulation responses. The performance of the proposed method is demonstrated on the GSO 37-bus system.
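The modal-analysis idea can be sketched on the linearized SMIB swing equation: find the frequency of the local electromechanical mode, then size the subinterval as a fraction of its period. All parameter values and the 20-steps-per-period heuristic below are assumptions for illustration, not the paper's procedure.

```python
import math

# Linearized swing equation of a single-machine infinite-bus (SMIB) system:
#   M * d2(delta)/dt2 + D * d(delta)/dt + K * delta = 0,
# where K = (E*V/X) * cos(delta0) is the synchronizing torque coefficient.
# Parameter values are illustrative, not from the paper's 37-bus case.
M = 0.053   # inertia coefficient 2H/omega_s (assumed)
D = 0.05    # damping coefficient (assumed)
E, V, X = 1.05, 1.0, 0.65   # internal EMF, bus voltage, reactance (assumed)
delta0 = math.radians(30.0)  # operating-point rotor angle

K = (E * V / X) * math.cos(delta0)
omega_n = math.sqrt(K / M)                    # undamped natural frequency
zeta = D / (2.0 * math.sqrt(K * M))           # damping ratio
omega_d = omega_n * math.sqrt(1.0 - zeta**2)  # damped mode frequency
period = 2.0 * math.pi / omega_d

# A common heuristic: resolve the fastest mode with ~20 steps per period,
# so the subinterval is period / 20.
subinterval = period / 20.0
print(f"mode frequency {omega_d / (2*math.pi):.2f} Hz, "
      f"subinterval {subinterval*1000:.1f} ms")
```

With these numbers the local mode lands near 0.8 Hz, a typical electromechanical frequency, and the resulting subinterval is a few tens of milliseconds.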
A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise
Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno
2017-09-01
While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decomposition in the projection domain yields a projection mass density (PMD) per material; from the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function, and a variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing the data, which is why a new data fidelity term is used in this paper to take the photonic noise into account. Two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose the materials of a numerical phantom of a mouse: soft tissue and bone are decomposed in the projection domain, and a tomographic reconstruction then creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
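The two data fidelity terms being compared can be written down directly; this is a minimal sketch of the terms only, not of the paper's full pipeline (forward model, regularizer, Gauss-Newton iterations, tomographic reconstruction). The data-based variance weighting in `wls` is a common approximation, assumed here.

```python
import math

def wls(measured, modeled):
    """Weighted least-squares fidelity, with the variance approximated by
    the measured counts (suited to Gaussian noise)."""
    return sum((y - m) ** 2 / max(y, 1.0) for y, m in zip(measured, modeled))

def kl(measured, modeled):
    """Kullback-Leibler divergence fidelity (suited to Poisson counts):
    sum of m - y + y*log(y/m), which is the Poisson negative
    log-likelihood up to constants and is non-negative."""
    return sum(m - y + (y * math.log(y / m) if y > 0 else 0.0)
               for y, m in zip(measured, modeled))
```

Both terms vanish when the forward model reproduces the data exactly; a regularized Gauss-Newton loop would minimize either term plus a smoothness penalty over the material projections.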
Residents’ perceptions of simulation as a clinical learning approach
Directory of Open Access Journals (Sweden)
Catharine M Walsh
2017-02-01
Results: Residents' perceptions of simulation included: 1) simulation serves pragmatic purposes; 2) simulation provides a safe space; 3) simulation presents perils and pitfalls; and 4) optimal design for simulation: integration and tension. Key findings included residents' markedly narrow perception of simulation's capacity to support non-technical skills development or its use beyond introductory learning. Conclusion: Trainees' learning expectations of simulation were restricted. Educators should critically attend to the way they present simulation to learners as, based on theories of problem-framing, trainees' a priori perceptions may delimit the focus of their learning experiences. If they view simulation as merely a replica of real cases for the purpose of practicing basic skills, they may fail to benefit from the full scope of learning opportunities afforded by simulation.
Biomass gasification systems for residential application: An integrated simulation approach
International Nuclear Information System (INIS)
Prando, Dario; Patuzzi, Francesco; Pernigotto, Giovanni; Gasparella, Andrea; Baratieri, Marco
2014-01-01
The energy policy of the European member States is promoting high-efficiency cogeneration systems by means of the European directive 2012/27/EU. Particular facilitations have been implemented for small-scale and micro-cogeneration units. Furthermore, directive 2010/31/EU promotes the improvement of the energy performance of buildings and the use of energy from renewable sources in the building sector. In this scenario, systems based on gasification are considered a promising technological solution when dealing with biomass and small-scale systems. In this paper, an integrated approach has been implemented to assess the energy performance of combined heat and power (CHP) systems based on biomass gasification and installed in residential blocks. The space-heating loads of the considered building configurations have been simulated by means of EnergyPlus. The heat load for domestic hot water demand has been calculated according to the average daily profiles suggested by the Italian and European technical standards. The efficiency of the whole CHP system has been evaluated by supplementing the simulation of the gasification stage with the energy balance of the cogeneration set (i.e., internal combustion engine) and implementing the developed routines in the Matlab-Simulink environment. The developed model has been used to evaluate the primary energy saving (PES) of the CHP system compared to a reference case of separate production of heat and power. Economic analyses are performed both with and without subsidies for the generated electricity. The results highlight the capability of the integrated approach to estimate both the energy and economic performance of CHP systems applied to the residential context. Furthermore, the importance of valorising the generated heat and of proper system sizing is discussed. - Highlights: • CHP system based on biomass gasification to meet household energy demand is studied. • Influence of CHP size and operation time on
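The PES figure of merit evaluated above can be sketched as follows. The formula follows the schematic PES definition used in the EU cogeneration framework, but the reference efficiencies and the CHP efficiencies below are illustrative placeholders, not the harmonised values or the paper's results.

```python
# Primary energy saving (PES) of cogeneration, schematically:
#   PES = 1 - 1 / (CHP_Heta / Ref_Heta + CHP_Eeta / Ref_Eeta)
# Reference efficiencies below are placeholders, not the harmonised
# values prescribed by the directive.

def pes(chp_heat_eff, chp_elec_eff, ref_heat_eff=0.85, ref_elec_eff=0.40):
    """Fractional primary energy saving of a CHP unit versus separate
    production of heat (boiler) and power (grid)."""
    return 1.0 - 1.0 / (chp_heat_eff / ref_heat_eff
                        + chp_elec_eff / ref_elec_eff)

# A small gasifier CHP: 55% thermal, 22% electrical efficiency (assumed).
saving = pes(0.55, 0.22)
print(f"PES = {saving:.1%}")
```

A positive PES means the CHP unit uses less primary energy than the separate-production reference; the directive grants "high-efficiency" status above given thresholds.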
Parallel discrete event simulation: A shared memory approach
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1987-01-01
With traditional event-list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
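For context, the centralized event-list baseline that Chandy-Misra-style parallel simulation sets out to replace can be sketched in a few lines: a single priority queue of timestamped events driving two queues in tandem. The topology and the deterministic service times are illustrative assumptions, not the paper's experimental setup.

```python
import heapq

# Minimal sequential event-list simulation of two FIFO queues in tandem.
# In the Chandy-Misra scheme each queue would instead be a logical
# process with its own clock, synchronized only through channel messages.
def simulate_tandem(arrivals, service=(2.0, 3.0)):
    """arrivals: sorted arrival times at queue 0; deterministic service."""
    events = [(t, 0, "arrive") for t in arrivals]  # (time, queue, kind)
    heapq.heapify(events)
    free_at = [0.0, 0.0]          # earliest time each server is free
    departures = []
    while events:
        t, q, _ = heapq.heappop(events)
        start = max(t, free_at[q])           # wait if the server is busy
        done = start + service[q]
        free_at[q] = done
        if q == 0:
            heapq.heappush(events, (done, 1, "arrive"))  # feed queue 1
        else:
            departures.append(done)
    return departures
```

Every event passes through the one shared heap, which is exactly the serial bottleneck the parallel algorithm removes.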
Energy requirements during sponge cake baking: Experimental and simulated approach
International Nuclear Information System (INIS)
Ureta, M. Micaela; Goñi, Sandro M.; Salvadori, Viviana O.; Olivera, Daniela F.
2017-01-01
Highlights: • Sponge cake energy consumption during baking was studied. • High oven temperature and forced convection mode favour oven energy savings. • Forced convection produced higher weight loss and thus a higher product energy demand. • Product energy demand was satisfactorily estimated by the baking model applied. • The greatest energy efficiency corresponded to the forced convection mode. - Abstract: Baking is a highly energy-demanding process, which requires special attention in order to understand and improve its efficiency. In this work, the energy consumption associated with sponge cake baking is investigated. A wide range of operating conditions (two ovens, three convection modes, three oven temperatures) was compared. Experimental oven energy consumption was estimated taking into account the power of the heating resistances and a usage factor. Product energy demand was estimated from both experimental and modeling approaches, considering sensible and latent heat. The oven energy consumption results showed that high oven temperature and the forced convection mode favour energy savings. Regarding product energy demand, forced convection produced faster and higher weight loss, inducing a higher energy demand. Moreover, this parameter was satisfactorily estimated by the baking model applied, with an average error between experimental and simulated values in the range of 8.0–10.1%. Finally, the energy efficiency results indicated that efficiency increased linearly with the effective oven temperature and that the greatest efficiency corresponded to the forced convection mode.
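The sensible-plus-latent estimate of product energy demand described above can be sketched directly; the property values below are typical literature figures, assumed for illustration, not the paper's measured data.

```python
# Product energy demand during baking, estimated as sensible heat plus the
# latent heat of the water evaporated (the weight loss).
# Property values are typical literature figures (assumed).
CP_BATTER = 2800.0      # specific heat of batter, J/(kg K) (assumed)
LATENT_HEAT = 2.26e6    # latent heat of vaporization of water, J/kg

def product_energy_demand(mass_kg, t_initial_c, t_final_c, weight_loss_kg):
    """Energy absorbed by the product: heating plus evaporation."""
    sensible = mass_kg * CP_BATTER * (t_final_c - t_initial_c)
    latent = weight_loss_kg * LATENT_HEAT
    return sensible + latent

# 0.5 kg of batter heated from 20 to 98 C, losing 60 g of water:
demand_kj = product_energy_demand(0.5, 20.0, 98.0, 0.060) / 1000.0
print(f"product energy demand: {demand_kj:.0f} kJ")
```

The latent term explains the abstract's observation that forced convection, by driving a higher weight loss, raises the product energy demand even as it shortens the bake.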
Striving for Better Medical Education: the Simulation Approach.
Sakakushev, Boris E; Marinov, Blagoi I; Stefanova, Penka P; Kostianev, Stefan St; Georgiou, Evangelos K
2017-06-01
Medical simulation is a rapidly expanding area within medical education due to advances in technology, significant reductions in training hours and increased procedural complexity. Simulation training aims to enhance patient safety through improved technical competency, eliminating human-factor risks in a risk-free environment. It is particularly applicable to practical, procedure-oriented specialties. Simulation can be useful for novice trainees, experienced clinicians (e.g. for revalidation) and team building. It has become a cornerstone in the delivery of medical education, representing a paradigm shift in how doctors are educated and trained. Simulation training must take a proactive position in the development of metric-based simulation curricula and the adoption of proficiency benchmarking definitions, and should not depend on the simulation platforms used. Conversely, ingraining of poor practice may occur in the absence of adequate supervision, and equipment malfunction during the simulation can break the immersion and disrupt any learning that has occurred. Despite the presence of high technology, there is a substantial learning curve for both learners and facilitators. The technology of simulation continues to advance, offering devices capable of improved fidelity in virtual reality simulation, more sophisticated procedural practice and advanced patient simulators. Simulation-based training has also brought about paradigm shifts in the medical and surgical education arenas and ensured that the scope and impact of simulation will continue to broaden.
International Nuclear Information System (INIS)
Rompotis, Dimitrios
2016-02-01
In this work, a single-shot temporal metrology scheme operating in the vacuum/extreme-ultraviolet spectral range has been designed and experimentally implemented. Utilizing an anti-collinear geometry, a second-order intensity autocorrelation measurement of a vacuum-ultraviolet pulse can be performed by encoding temporal delay information on the beam propagation coordinate. An ion-imaging time-of-flight spectrometer offering micrometer resolution has been set up for this purpose. This instrument enables the detection of a magnified image of the spatial distribution of ions exclusively generated by direct two-photon absorption in the combined counter-propagating pulse focus, and thus the second-order intensity autocorrelation can be measured on a single-shot basis. Additionally, an intense VUV light source based on high-harmonic generation has been experimentally realized. It delivers intense sub-20 fs Ti:Sa fifth-harmonic pulses utilizing a loose-focusing geometry in a long Ar gas cell. The VUV pulses centered at 161.8 nm reach pulse energies of 1.1 μJ per pulse, while the corresponding pulse duration is measured with a second-order, fringe-resolved autocorrelation scheme to be 18 ± 1 fs on average. Non-resonant two-photon ionization of Kr and Xe and three-photon ionization of Ne verify the fifth-harmonic pulse intensity and indicate the feasibility of multi-photon VUV-pump/VUV-probe studies of ultrafast atomic and molecular dynamics. Finally, the extended functionality of the counter-propagating pulse metrology approach is demonstrated by a single-shot VUV-pump/VUV-probe experiment aiming at the investigation of ultrafast dissociation dynamics of O{sub 2} excited in the Schumann-Runge continuum at 162 nm.
Cally, Paul S.; Xiong, Ming
2018-01-01
Fast sausage modes in solar magnetic coronal loops are only fully contained in unrealistically short dense loops. Otherwise they are leaky, losing energy to their surrounds as outgoing waves. This causes any oscillation to decay exponentially in time. Simultaneous observations of both period and decay rate therefore reveal the eigenfrequency of the observed mode, and potentially insight into the tubes’ nonuniform internal structure. In this article, a global spectral description of the oscillations is presented that results in an implicit matrix eigenvalue equation where the eigenvalues are associated predominantly with the diagonal terms of the matrix. The off-diagonal terms vanish identically if the tube is uniform. A linearized perturbation approach, applied with respect to a uniform reference model, is developed that makes the eigenvalues explicit. The implicit eigenvalue problem is easily solved numerically though, and it is shown that knowledge of the real and imaginary parts of the eigenfrequency is sufficient to determine the width and density contrast of a boundary layer over which the tubes’ enhanced internal densities drop to ambient values. Linearized density kernels are developed that show sensitivity only to the extreme outside of the loops for radial fundamental modes, especially for small density enhancements, with no sensitivity to the core. Higher radial harmonics do show some internal sensitivity, but these will be more difficult to observe. Only kink modes are sensitive to the tube centres. Variation in internal and external Alfvén speed along the loop is shown to have little effect on the fundamental dimensionless eigenfrequency, though the associated eigenfunction becomes more compact at the loop apex as stratification increases, or may even displace from the apex.
An intelligent dynamic simulation environment: An object-oriented approach
International Nuclear Information System (INIS)
Robinson, J.T.; Kisner, R.A.
1988-01-01
This paper presents a prototype simulation environment for nuclear power plants which illustrates the application of object-oriented programming to process simulation. Systems are modeled using this technique as a collection of objects which communicate via message passing. The environment allows users to build simulation models by selecting iconic representations of plant components from a menu and connecting them with the aid of a mouse. Models can be modified graphically at any time, even as the simulation is running, and the results observed immediately via real-time graphics. This prototype illustrates the use of object-oriented programming to create a highly interactive and automated simulation environment. 9 refs., 4 figs
Directory of Open Access Journals (Sweden)
Brian Johnson
2015-01-01
Segment-level image fusion involves segmenting a higher spatial resolution (HSR) image to derive boundaries of land cover objects, and then extracting additional descriptors of image segments (polygons) from a lower spatial resolution (LSR) image. In past research, an unweighted segment-level fusion (USF) approach, which extracts information from a resampled LSR image, resulted in more accurate land cover classification than the use of HSR imagery alone. However, simply fusing the LSR image with segment polygons may lead to significant errors due to the high level of noise in pixels along the segment boundaries (i.e., pixels containing multiple land cover types). To mitigate this, a spatially-weighted segment-level fusion (SWSF) method was proposed for extracting descriptors (mean spectral values) of segments from LSR images. SWSF reduces the weights of LSR pixels located on or near segment boundaries to reduce errors in the fusion process. Compared to the USF approach, SWSF extracted more accurate spectral properties of land cover objects when the ratio of the LSR image resolution to the HSR image resolution was greater than 2:1, and SWSF was also shown to increase classification accuracy. SWSF can be used to fuse any type of imagery at the segment level, since it is insensitive to spectral differences between the LSR and HSR images (e.g., different spectral ranges of the images or different image acquisition dates).
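The core of the spatial weighting can be illustrated on a toy 1-D segment: pixels near the segment boundary (mixed pixels) get lower weight when the segment's mean spectral value is computed. The linear distance-to-boundary weighting below is an illustrative choice, not necessarily the paper's exact weighting function, and the real method operates on 2-D polygons.

```python
# Spatially-weighted extraction of a segment's mean spectral value from a
# lower-resolution band: boundary pixels are down-weighted. Toy 1-D case.

def boundary_distance(mask):
    """For each pixel of a 1-D segment mask, distance (in pixels) to the
    nearest pixel outside the segment; image edges count as boundaries."""
    n = len(mask)
    out = []
    for i, inside in enumerate(mask):
        if not inside:
            out.append(0)
            continue
        d = min([abs(i - j) for j in range(n) if not mask[j]] + [i + 1, n - i])
        out.append(d)
    return out

def weighted_segment_mean(values, mask):
    """Segment mean with weight proportional to distance-to-boundary,
    so mixed boundary pixels contribute less."""
    dist = boundary_distance(mask)
    weights = [d if m else 0 for d, m in zip(dist, mask)]
    return (sum(w * v for w, v in zip(weights, values)) / sum(weights))
```

For a segment whose edge pixels are contaminated by a neighboring cover type, the weighted mean shifts toward the segment interior's spectral value, which is the intended effect.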
Directory of Open Access Journals (Sweden)
Kwadwo S. Agyepong
2013-01-01
Time-course expression profiles and methods for spectrum analysis have been applied for detecting transcriptional periodicities, which are valuable patterns for unraveling genes associated with cell cycle and circadian rhythm regulation. However, most of the proposed methods suffer from restrictions and large false positive rates to a certain extent. Additionally, in some experiments, arbitrarily irregular sampling times as well as the presence of high noise and small sample sizes make accurate detection a challenging task. A novel scheme for detecting periodicities in time-course expression data is proposed, in which a real-valued iterative adaptive approach (RIAA), originally proposed for signal processing, is applied for periodogram estimation. The inferred spectrum is then analyzed using Fisher's hypothesis test. With a proper p-value threshold, periodic genes can be detected. A periodic signal, two nonperiodic signals, and four sampling strategies were considered in the simulations, including both bursts and drops. In addition, two real yeast datasets were used for validation. The simulations and real data analysis reveal that RIAA can perform competitively with the existing algorithms. The advantage of RIAA is manifested when the expression data are highly irregularly sampled, and when the number of cycles covered by the sampling time points is very small.
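Fisher's hypothesis test on a spectrum can be sketched as follows. This sketch uses the classical periodogram of a regularly sampled series; RIAA's contribution is precisely to replace that estimator with an iterative adaptive one that tolerates irregular sampling, which is beyond this snippet.

```python
import cmath
import math

# Fisher's exact test for a single periodicity: compute the periodogram,
# take g = (max ordinate) / (sum of ordinates), and evaluate P(G > g)
# under the white-noise null hypothesis.

def periodogram(x):
    """Periodogram at the Fourier frequencies k = 1..n//2 (DC excluded)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(1, n // 2 + 1)]

def fisher_g_pvalue(x):
    p = periodogram(x)
    n = len(p)
    g = max(p) / sum(p)
    # Exact null tail: P(G > g) = sum_j (-1)^(j-1) C(n,j) (1 - j*g)^(n-1)
    pval, j = 0.0, 1
    while j * g < 1.0 and j <= n:
        pval += (-1) ** (j - 1) * math.comb(n, j) * (1 - j * g) ** (n - 1)
        j += 1
    return min(max(pval, 0.0), 1.0)
```

A clean sinusoid concentrates the spectrum in one ordinate, so g approaches 1 and the p-value collapses toward zero; with noise-only data g stays near 1/n and the test does not reject.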
Quantum Mechanical Balance Equation Approach to Semiconductor Device Simulation
National Research Council Canada - National Science Library
Cui, Long
1997-01-01
This research project was focused on the development of a quantum mechanical balance equation based device simulator that can model advanced, compound, submicron devices, under all transport conditions...
Energy Technology Data Exchange (ETDEWEB)
Chen, C. D.; Kemp, A. J.; Pérez, F.; Link, A.; Key, M. H.; McLean, H.; Ping, Y.; Patel, P. K. [Lawrence Livermore National Laboratory (United States); Beg, F. N.; Chawla, S.; Sorokovikova, A.; Westover, B. [University of California, San Diego (United States); Morace, A. [University of Milan (Italy); Stephens, R. B. [General Atomics (United States); Streeter, M. [Imperial College London (United Kingdom)
2013-05-15
A 2-D multi-stage simulation model incorporating realistic laser conditions and a fully resolved electron distribution handoff has been developed and compared to angularly and spectrally resolved Bremsstrahlung measurements from high-Z planar targets. For near-normal incidence and 0.5-1 × 10{sup 20} W/cm{sup 2} intensity, particle-in-cell (PIC) simulations predict the existence of a high energy electron component consistently directed away from the laser axis, in contrast with previous expectations for oblique irradiation. Measurements of the angular distribution are consistent with a high energy component when directed along the PIC predicted direction, as opposed to between the target normal and laser axis as previously measured.
Rectangular spectral collocation
Driscoll, Tobin A.; Hale, Nicholas
2015-01-01
Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon
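The row-replacement practice described above (before the truncation introduces the rectangular alternative) can be demonstrated concretely: build a Chebyshev differentiation matrix, square it, and overwrite the boundary rows with identity rows enforcing Dirichlet conditions. The construction follows the standard `cheb` recipe from Trefethen's *Spectral Methods in MATLAB*; the test problem is an illustrative choice.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and points on [-1, 1]
    (Trefethen's program `cheb`); requires n >= 2."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.r_[2.0, np.ones(n - 1), 2.0] * (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    d = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1))
    d -= np.diag(d.sum(axis=1))            # negative-sum trick for diagonal
    return d, x

# Solve u'' = exp(x) on [-1, 1] with u(-1) = u(1) = 0 by row replacement.
n = 24
d, x = cheb(n)
a = d @ d                                   # discretized d^2/dx^2
a[0, :] = 0.0;  a[0, 0] = 1.0               # replace row at x = +1
a[-1, :] = 0.0; a[-1, -1] = 1.0             # replace row at x = -1
rhs = np.exp(x)
rhs[0] = rhs[-1] = 0.0                      # boundary values
u = np.linalg.solve(a, rhs)

exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
print("max error:", np.max(np.abs(u - exact)))
```

The error decays spectrally with n, but the replaced rows discard two rows of the differential operator, which is the loss the rectangular formulation avoids by mapping between grids of different sizes.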
Rosnik, Andreana M; Curutchet, Carles
2015-12-08
Over the past decade, both experimentalists and theorists have worked to develop methods to describe pigment-protein coupling in photosynthetic light-harvesting complexes in order to understand the molecular basis of quantum coherence effects observed in photosynthesis. Here we present an improved strategy based on the combination of quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations and excited-state calculations to predict the spectral density of electronic-vibrational coupling. We study the water-soluble chlorophyll-binding protein (WSCP) reconstituted with Chl a or Chl b pigments as the system of interest and compare our work with data obtained by Pieper and co-workers from differential fluorescence line-narrowing spectra (Pieper et al. J. Phys. Chem. B 2011, 115 (14), 4042-4052). Our results demonstrate that the use of QM/MM MD simulations in which the nuclear positions are still propagated at the classical level leads to a striking improvement of the predicted spectral densities in the middle- and high-frequency regions, where they nearly reach quantitative accuracy. This demonstrates that the so-called "geometry mismatch" problem related to the use of low-quality structures in QM calculations, not the quantum features of the pigments' high-frequency motions, causes the failure of previous studies relying on similar protocols. Thus, this work paves the way toward quantitative predictions of pigment-protein coupling and the comprehension of quantum coherence effects in photosynthesis.
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
Simulation of Quantum Computation : A Deterministic Event-Based Approach
Michielsen, K.; Raedt, K. De; Raedt, H. De
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
Sensing the Sentence: An Embodied Simulation Approach to Rhetorical Grammar
Rule, Hannah J.
2017-01-01
This article applies the neuroscientific concept of embodied simulation--the process of understanding language through visual, motor, and spatial modalities of the body--to rhetorical grammar and sentence-style pedagogies. Embodied simulation invigorates rhetorical grammar instruction by attuning writers to the felt effects of written language,…
Estimation of spectral kurtosis
Sutawanir
2017-03-01
Rolling bearings are among the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of condition monitoring and fault diagnosis, as these signals carry rich information for early detection of bearing failures. Spectral kurtosis (SK) is a frequency-domain parameter indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike the faults, making SK potentially useful for determining the frequency bands dominated by bearing-fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, supplying information beyond the power spectral density (psd). This paper explores the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK parallels that of the psd, falling into the class of model-free, plug-in estimators. Numerical simulation studies are presented to support the methodology; the spectral kurtoses of some stationary signals are obtained analytically and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality, and spectral kurtosis extends it to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the Fourier transform of the autocovariance, which yields the power spectrum; the power spectral density itself is estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, bearing faults (inner ring and outer ring) are simulated, and the bearing response, power spectrum, and spectral kurtosis are plotted to
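A spectrogram-based SK estimate of this kind can be sketched as follows. The normalization `SK = E|X|^4 / (E|X|^2)^2 - 2`, which is ~0 for stationary Gaussian noise, is one common convention; the synthetic "fault" signal (periodic decaying bursts) is an illustrative stand-in for a real bearing record.

```python
import numpy as np

# Spectral kurtosis from the short-time Fourier transform (spectrogram):
#   SK(f) = E{|X(t,f)|^4} / E{|X(t,f)|^2}^2 - 2,
# near 0 for stationary Gaussian noise, large in bands carrying
# intermittent impulses such as bearing-fault signatures.

def spectral_kurtosis(signal, nperseg=256):
    window = np.hanning(nperseg)
    hop = nperseg // 2
    frames = [signal[i:i + nperseg] * window
              for i in range(0, len(signal) - nperseg + 1, hop)]
    stft = np.fft.rfft(np.asarray(frames), axis=1)
    p2 = np.mean(np.abs(stft) ** 2, axis=0)   # mean power per bin
    p4 = np.mean(np.abs(stft) ** 4, axis=0)   # 4th moment per bin
    return p4 / p2 ** 2 - 2.0

rng = np.random.default_rng(0)
noise = rng.standard_normal(65536)

# "Faulty" signal: the same noise plus a decaying burst every 1024 samples,
# oscillating at ~0.23 cycles/sample (a mock fault resonance band).
t = np.arange(64)
burst = 20.0 * np.exp(-t / 8.0) * np.sin(2 * np.pi * 0.23 * t)
faulty = noise.copy()
for start in range(0, 65536, 1024):
    faulty[start:start + 64] += burst

sk_noise = spectral_kurtosis(noise)
sk_fault = spectral_kurtosis(faulty)
```

Plotting `sk_fault` against frequency would show a peak in the burst's resonance band while `sk_noise` hovers near zero everywhere, which is how SK is used to pick the demodulation band for envelope analysis.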
Nguyen, Vu-Hieu; Naili, Salah
2012-08-01
This paper deals with the modeling of guided wave propagation in in vivo cortical long bone, which is known to be an anisotropic medium with functionally graded porosity. The bone is modeled as an anisotropic poroelastic material using Biot's theory formulated in the high-frequency domain. A hybrid spectral/finite element formulation has been developed to find the time-domain solution of ultrasonic waves propagating in a poroelastic plate immersed in two fluid half-spaces. The numerical technique is based on a combined Laplace-Fourier transform, which yields a problem of reduced dimension in the frequency-wavenumber domain. In the spectral domain, as radiation conditions representing the infinite fluid half-spaces may be introduced exactly, only the heterogeneous solid layer needs to be analyzed with the finite element method. Several numerical tests are presented, showing very good performance of the proposed procedure. A preliminary study of the first-arriving signal velocities computed using equivalent elastic and poroelastic models is also presented. Copyright © 2012 John Wiley & Sons, Ltd.
Simulated annealing approach for solving economic load dispatch ...
African Journals Online (AJOL)
thermodynamics to solve economic load dispatch (ELD) problems. ... evolutionary programming algorithm has been successfully applied for solving the ... concept behind the simulated annealing (SA) optimization is discussed in Section 3.
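The SA concept referenced above can be sketched on a small dispatch problem: minimize total fuel cost `sum_i (a_i + b_i*P_i + c_i*P_i^2)` subject to the power balance and unit limits. The cost coefficients, limits, demand, and cooling schedule below are textbook-style placeholders, not taken from the article.

```python
import math
import random

# Simulated annealing sketch for a 3-unit economic load dispatch (ELD).
COEFFS = [(500.0, 5.3, 0.004), (400.0, 5.5, 0.006), (200.0, 5.8, 0.009)]
LIMITS = [(200.0, 450.0), (150.0, 350.0), (100.0, 225.0)]
DEMAND = 800.0  # MW

def cost(p):
    """Total fuel cost of a dispatch p (list of unit outputs, MW)."""
    return sum(a + b * x + c * x * x for (a, b, c), x in zip(COEFFS, p))

def neighbour(p, rng, step=10.0):
    """Shift load between two random units, preserving the power balance;
    infeasible moves are rejected by returning the current point."""
    i, j = rng.sample(range(len(p)), 2)
    d = rng.uniform(-step, step)
    q = list(p)
    q[i] += d
    q[j] -= d
    for k in (i, j):
        lo, hi = LIMITS[k]
        if not lo <= q[k] <= hi:
            return p
    return q

def anneal(seed=1, temp=100.0, cooling=0.995, iters=20000):
    rng = random.Random(seed)
    p = [300.0, 300.0, 200.0]   # feasible start summing to the demand
    best = p
    for _ in range(iters):
        q = neighbour(p, rng)
        delta = cost(q) - cost(p)
        # Metropolis criterion: always accept improvements, sometimes
        # accept uphill moves while the temperature is high.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            p = q
            if cost(p) < cost(best):
                best = p
        temp *= cooling
    return best
```

Because the neighbour move transfers load between units, the demand constraint is satisfied by construction; for this convex instance the equal-incremental-cost optimum is near [400, 250, 150] MW, and the annealer should approach it.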
A Hands-on Approach to Evolutionary Simulation
DEFF Research Database (Denmark)
Valente, Marco; Andersen, Esben Sloth
2002-01-01
in an industry (or an economy). To abbreviate we call such models NelWin models. The new system for the programming and simulation of such models is called the Laboratory for simulation development - abbreviated as Lsd. The paper is meant to allow readers to use the Lsd version of a basic NelWin model: observe...... the model content, run the simulation, interpret the results, modify the parameterisation, etc. Since the paper deals with the implementation of a fairly complex set of models in a fairly complex programming and simulation system, it does not contain full documentation of NelWin and Lsd. Instead we hope...... to give the reader a first introduction to NelWin and Lsd and inspire a further exploration of them....
Zitzelsberger, Hilde; Coffey, Sue; Graham, Leslie; Papaconstantinou, Efrosini; Anyinam, Charles
2017-01-01
Simulation-based learning (SBL) is rapidly becoming one of the most significant teaching-learning-evaluation strategies available in undergraduate nursing education. While there is indication within the literature and anecdotally about the benefits of simulation, abundant and strong evidence that supports the effectiveness of simulation for…
Simulation and analysis of Au-MgF2 structure in plasmonic sensor in near infrared spectral region
Sharma, Anuj K.
2018-05-01
Plasmonic sensor based on metal-dielectric combination of gold and MgF2 layers is studied in near infrared (NIR) spectral region. An emphasis is given on the effect of variable thickness of MgF2 layer in combination with operating wavelength and gold layer thickness on the sensor's performance in NIR. It is established that the variation in MgF2 thickness in connection with plasmon penetration depth leads to significant variation in sensor's performance. The analysis leads to a conclusion that taking smaller values of MgF2 layer thickness and operating at longer NIR wavelength leads to enhanced sensing performance. Also, fluoride glass can provide better sensing performance than chalcogenide glass and silicon substrate.
Energy Technology Data Exchange (ETDEWEB)
Li, Mao; Qiu, Zihua; Liang, Chunlei; Sprague, Michael; Xu, Min
2017-01-13
In the present study, a new spectral difference (SD) method is developed for viscous flows on meshes with a mixture of triangular and quadrilateral elements. The standard SD method for triangular elements, which employs Lagrangian interpolating functions for fluxes, is not stable when the designed accuracy of the spatial discretization is third-order or higher. Unlike the standard SD method, the method examined here uses vector interpolating functions in the Raviart-Thomas (RT) spaces to construct continuous flux functions on reference elements. Studies have been performed for the 2D wave equation and the Euler equations. Our results demonstrate that the SDRT method is stable and high-order accurate for a number of test problems on triangular, quadrilateral, and mixed-element meshes.
Simulation of electron spin resonance spectroscopy in diverse environments: An integrated approach
Zerbetto, Mirco; Polimeno, Antonino; Barone, Vincenzo
2009-12-01
We discuss in this work a new software tool, named E-SpiReS (Electron Spin Resonance Simulations), aimed at the interpretation of dynamical properties of molecules in fluids from electron spin resonance (ESR) measurements. The code implements an integrated computational approach (ICA) for the calculation of relevant molecular properties that are needed in order to obtain spectral lines. The protocol encompasses information from the atomistic level (quantum mechanical) to the coarse-grained level (hydrodynamical), and evaluates ESR spectra for rigid or flexible, single- or multi-labeled paramagnetic molecules in isotropic and ordered phases, based on a numerical solution of a stochastic Liouville equation. E-SpiReS automatically interfaces all the computational methodologies scheduled in the ICA in a way completely transparent to the user, who controls the whole calculation flow via a graphical interface. Parallelized algorithms are employed in order to allow running on calculation clusters, and a Java web applet has been developed with which it is possible to work from any operating system, avoiding the problems of recompilation. E-SpiReS has been used in the study of a number of different systems, and two relevant cases are reported to underline the promising applicability of the ICA to complex systems and the importance of similar software tools in handling a laborious protocol. Program summary: Program title: E-SpiReS Catalogue identifier: AEEM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL v2.0 No. of lines in distributed program, including test data, etc.: 311 761 No. of bytes in distributed program, including test data, etc.: 10 039 531 Distribution format: tar.gz Programming language: C (core programs) and Java (graphical interface) Computer: PC and Macintosh Operating system: Unix and Windows Has the code been vectorized or
Levett-Jones, Tracy; Andersen, Patrea; Reid-Searl, Kerry; Guinea, Stephen; McAllister, Margaret; Lapkin, Samuel; Palmer, Lorinda; Niddrie, Marian
2015-09-01
Active participation in immersive simulation experiences can result in technical and non-technical skill enhancement. However, when simulations are conducted in large groups, maintaining the interest of observers so that they do not disengage from the learning experience can be challenging. We implemented Tag Team Simulation with the aim of ensuring that both participants and observers had active and integral roles in the simulation. In this paper we outline the features of this innovative approach and provide an example of its application to a pain simulation. Evaluation was conducted using the Satisfaction with Simulation Experience Scale. A total of 444 nursing students participated from a population of 536 (response rate 83%). Cronbach's alpha for the Scale was .94, indicating high internal consistency. The mean satisfaction score for participants was 4.63, compared to 4.56 for observers. An independent-sample t test revealed no significant difference between these scores (t(300) = -1.414, p = 0.16). Tag Team Simulation is an effective approach for ensuring observers' and participants' active involvement during group-based simulations and one that is highly regarded by students. It has the potential for broad applicability across a range of learning domains both within and beyond nursing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Object-oriented approach for gas turbine engine simulation
Curlett, Brian P.; Felder, James L.
1995-01-01
An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report are the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
Energy Technology Data Exchange (ETDEWEB)
Xu, X.P. [Candu Energy Inc, Mississauga, Ontario (Canada)
2012-07-01
This paper reviewed the need for a fuel handling (FH) simulator in training operators and maintenance personnel, in FH design enhancement based on operating experience (OPEX), and the potential application of Virtual Reality (VR) based simulation technology. Modeling and simulation of the fuelling machine (FM) magazine drive plant (one of the CANDU FH sub-systems) was described. The work established the feasibility of modeling and simulating a physical FH drive system using the physical network approach and computer software tools. The concept and approach can be applied similarly to create the other FH subsystem plant models, which are expected to be integrated with control modules to develop a master FH control model and further to create a virtual FH system. (author)
An Experimental Approach to Simulations of the CLIC Interaction Point
DEFF Research Database (Denmark)
Esberg, Jakob
2012-01-01
with respect to the luminosity weighted depolarization is discussed. In the chapter on muons, the implementation of the production of incoherent muons in GUINEA-PIG++ will be discussed. Comments on the correctness and completeness of the implementation of muon production will be presented. The chapter...... experiments conducted at MAMI will be presented. Furthermore the chapter discusses the performance of new CMOS based detectors to be used in future experiments by the NA63 collaboration. The chapter on collider simulations introduces the beam-beam simulation codes GUINEA-PIG and GUINEA-PIG++, their methods...... of operation and their features. The characteristics of the simulated particles are presented and a comparison between the outputs of these codes with those from CAIN. In the chapter on tridents, the implementation of the direct trident process in GUINEA-PIG++ is described. The results are compared...
DSNP: a new approach to simulate nuclear power plants
International Nuclear Information System (INIS)
Saphier, D.
1977-01-01
The DSNP (Dynamic Simulator for Nuclear Power-plants) is a special-purpose, block-oriented simulation language. It provides for simulations of a large variety of nuclear power plants, or of various parts of a power plant, in a simple, straightforward manner. The system is composed of five basic elements, namely the DSNP language, the precompiler (or DSNP language translator), the components library, the document generator, and the system data files. The DSNP library of modules includes self-contained models of components or physical processes found in a nuclear power plant, and various auxiliary modules such as material properties, control modules, integration schemes, and various basic transfer functions. In its final form DSNP will have four libraries
Li, Zhijun; Feng, Maria Q.; Luo, Longxi; Feng, Dongming; Xu, Xiuli
2018-01-01
Modal parameter estimates in structural health monitoring (SHM) practice in civil engineering carry significant uncertainty due to environmental influences and modeling errors, and sound methodologies are needed for processing that uncertainty. Bayesian inference can provide a promising and feasible identification solution for the purpose of SHM. However, there has been relatively little research on the application of Bayesian spectral methods to modal identification using SHM data sets. To extract modal parameters from the large data sets collected by an SHM system, the Bayesian spectral density algorithm was applied to address the uncertainty of mode extraction from the output-only response of a long-span suspension bridge. The posterior most probable values of the modal parameters and their uncertainties were estimated through Bayesian inference. A long-term variation and statistical analysis was performed using the sensor data sets collected from the SHM system of the suspension bridge over a one-year period. The t location-scale distribution was shown to be a better candidate function for the frequencies of the lower modes. On the other hand, the Burr distribution provided the best fit to the higher modes, which are sensitive to temperature. In addition, wind-induced variation of the modal parameters was also investigated. It was observed that both the damping ratios and the modal forces increased during periods of typhoon excitation. Meanwhile, the modal damping ratios exhibited significant correlation with the spectral intensities of the corresponding modal forces.
Teimoorinia, H.
2012-12-01
The aim of this work is to combine spectral energy distribution (SED) fitting with artificial neural network techniques to assign spectral characteristics to a sample of galaxies at 0.5 < z < 1. The sample is selected from the spectroscopic campaign of the ESO/GOODS-South field, with 142 sources having photometric data from the GOODS-MUSIC catalog covering bands between ∼0.4 and 24 μm in 10-13 filters. We use the CIGALE code to fit photometric data to Maraston's synthesis spectra to derive mass, specific star formation rate, and age, as well as the best SED of the galaxies. We use the spectral models presented by Kinney et al. as targets in the wavelength interval ∼1200-7500 Å. Then a series of neural networks are trained, with average performance ∼90%, to classify the best SED in a supervised manner. We consider the effects of the prominent features of the best SED on the performance of the trained networks and also test networks on the galaxy spectra of Coleman et al., which have a lower resolution than the target models. In this way, we conclude that the trained networks take into account all the features of the spectra simultaneously. Using the method, 105 out of 142 galaxies of the sample are classified with high significance. The locus of the classified galaxies in the three graphs of the physical parameters of mass, age, and specific star formation rate appears consistent with the morphological characteristics of the galaxies.
Fast 2D Simulation of Superconductors: a Multiscale Approach
DEFF Research Database (Denmark)
Rodriguez Zermeno, Victor Manuel; Sørensen, Mads Peter; Pedersen, Niels Falsig
2009-01-01
This work presents a method to calculate AC losses in thin conductors such as the commercially available second generation superconducting wires through a multiscale meshing technique. The main idea is to use large aspect ratio elements to accurately simulate thin material layers. For a single thin...
Approaching a reliable process simulation for the virtual product development
Kose, K.; Rietman, Bert; Tikhomirov, D.; Bessert, N.
2005-01-01
In this paper an outline for a strategy to include manufacturing effects in subsequent simulations for the virtual product development from an industrial point of view is given. Especially the conditions for a successful mapping of geometry and results between different applications are discussed.
Simulation Approach for Timing Analysis of Genetic Logic Circuits
DEFF Research Database (Denmark)
Baig, Hasan; Madsen, Jan
2017-01-01
in a manner similar to electronic logic circuits, but they are much more stochastic and hence much harder to characterize. In this article, we introduce an approach to analyze the threshold value and timing of genetic logic circuits. We show how this approach can be used to analyze the timing behavior...... of single and cascaded genetic logic circuits. We further analyze the timing sensitivity of circuits by varying the degradation rates and concentrations. Our approach can be used not only to characterize the timing behavior but also to analyze the timing constraints of cascaded genetic logic circuits...
Practice-oriented optical thin film growth simulation via multiple scale approach
Energy Technology Data Exchange (ETDEWEB)
Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)
2015-10-01
Simulation of the coating process is a very promising approach for the understanding of thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed, as an example, for an existing IBS coating plant in order to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data fit very well.
Nasouri, Babak; Murphy, Thomas E.; Berberoglu, Halil
2014-07-01
For understanding the mechanisms of low-level laser/light therapy (LLLT), accurate knowledge of light interaction with tissue is necessary. We present a three-dimensional, multilayer reduced-variance Monte Carlo simulation tool for studying light penetration and absorption in human skin. Local profiles of light penetration and volumetric absorption were calculated for uniform as well as Gaussian profile beams with different spreads over the spectral range from 1000 to 1900 nm. The results showed that lasers within this wavelength range could be used to effectively and safely deliver energy to specific skin layers as well as achieve large penetration depths for treating deep tissues, without causing skin damage. In addition, by changing the beam profile from uniform to Gaussian, the local volumetric dosage could increase as much as three times for otherwise similar lasers. We expect that this tool along with the results presented will aid researchers in selecting wavelength and laser power in LLLT.
Energy Technology Data Exchange (ETDEWEB)
Tran, H., E-mail: ha.tran@lisa.u-pec.fr [Laboratoire Interuniversitaire des Systèmes Atmosphériques, UMR CNRS 7583, Université Paris Est Créteil, Université Paris Diderot, Institut Pierre-Simon Laplace, 94010 Créteil Cedex (France); Domenech, J.-L. [Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas, (IEM-CSIC), Serrano 123, 28006 Madrid (Spain)
2014-08-14
Spectral shapes of isolated lines of HCl perturbed by Ar are investigated for the first time using classical molecular dynamics simulations (CMDS). Using reliable intermolecular potentials taken from the literature, these CMDS provide the time evolution of the auto-correlation function of the dipole moment, whose Fourier-Laplace transform leads to the absorption spectrum. In order to test these calculations, room temperature spectra of various lines in the fundamental band of HCl diluted in Ar are measured, in a large pressure range, with a difference-frequency laser spectrometer. Comparisons between measured and calculated spectra show that the CMDS are able to predict the large Dicke narrowing effect on the shape of HCl lines and to satisfactorily reproduce the shapes of HCl spectra at different pressures and for various rotational quantum numbers.
Directory of Open Access Journals (Sweden)
Abdul Latif Memon
2014-01-01
Many encoding schemes are used in OCDMA (Optical Code Division Multiple Access) networks, but SAC (Spectral Amplitude Codes) is widely used and is considered an effective arrangement for eliminating the dominant noise called MAI (Multiple Access Interference). Various codes are studied and evaluated with respect to their performance against three noise sources, namely shot noise, thermal noise, and PIIN (Phase Induced Intensity Noise). Mathematical models for the SNR (Signal to Noise Ratio) and BER (Bit Error Rate) are discussed, where the SNRs are calculated and the BERs are computed under a Gaussian distribution assumption. After analyzing the results mathematically, it is concluded that the ZCC (Zero Cross Correlation) code performs better than the other selected SAC codes and can serve a larger number of active users than the other codes do. Analysis at various receiver power levels points out that the RDC (Random Diagonal Code) also performs well; for the power interval between -10 and -20 dBm the performance of RDC is better than that of ZCC. Their low BER values suggest that these codes should be part of efficient and cost-effective OCDMA networks in the future.
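The Gaussian-approximation step mentioned above maps an SNR to a BER in closed form. A minimal sketch (using the BER = ½ erfc(√(SNR/8)) relation commonly adopted in SAC-OCDMA analyses; the numeric SNR values are hypothetical):

```python
import math

def ber_from_snr(snr):
    """Gaussian-approximation bit error rate from a linear (not dB) SNR:
    BER = (1/2) * erfc(sqrt(SNR / 8)).
    This is the form widely used in SAC-OCDMA performance analyses."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

# Higher SNR (e.g. fewer active users or more received power) -> lower BER.
ber_low, ber_high = ber_from_snr(10.0), ber_from_snr(100.0)
```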
Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K.
2012-01-01
We explore two different threading approaches on a graphics processing unit (GPU) exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation of the reaction system's size. © 2006 IEEE.
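For reference alongside the GPU discussion above, Gillespie's direct-method SSA itself is short; a minimal single-threaded sketch (the decay example and rate constant are illustrative assumptions, not from the paper):

```python
import random

def gillespie_ssa(x, rates, stoich, t_end, seed=1):
    """Direct-method SSA (Gillespie): x is the species-count vector,
    rates is a list of propensity functions a_j(x), and stoich[j] is the
    state change applied when reaction j fires."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        a = [r(x) for r in rates]
        a0 = sum(a)
        if a0 == 0:            # no reaction can fire
            break
        t += rng.expovariate(a0)          # exponential waiting time
        u, acc = rng.random() * a0, 0.0   # pick reaction j w.p. a_j / a0
        for j, aj in enumerate(a):
            acc += aj
            if u <= acc:
                break
        x = [xi + s for xi, s in zip(x, stoich[j])]
    return x

# Illustrative pure-decay system: A -> 0 with propensity 0.5 * [A].
final = gillespie_ssa([100], [lambda s: 0.5 * s[0]], [[-1]], t_end=50.0)
```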
A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios
Directory of Open Access Journals (Sweden)
Hiroshi Shiraishi
2012-01-01
This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of the return processes are unknown. We first generate simulated sample paths of the random returns by using an AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of this time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of the method. The results show the necessity of accounting for the time dependency of the value function.
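The AR bootstrap step described above can be sketched as follows (a minimal illustration, not the authors' code: the sample returns are made up, the AR(1) coefficient is estimated by OLS, and the CRRA optimization over the generated paths is omitted):

```python
import random

def ar1_bootstrap_paths(returns, n_paths, horizon, seed=0):
    """Fit an AR(1) to demeaned returns, then resample its residuals
    to generate bootstrap sample paths of future returns."""
    rng = random.Random(seed)
    mean = sum(returns) / len(returns)
    x = [r - mean for r in returns]
    # OLS estimate of the AR(1) coefficient phi
    phi = sum(a * b for a, b in zip(x[1:], x[:-1])) / sum(a * a for a in x[:-1])
    resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    paths = []
    for _ in range(n_paths):
        path, prev = [], x[-1]
        for _ in range(horizon):
            prev = phi * prev + rng.choice(resid)   # resampled innovation
            path.append(prev + mean)
        paths.append(path)
    return paths

# Hypothetical monthly returns, 10 bootstrap paths over a 12-period horizon.
paths = ar1_bootstrap_paths([0.01, -0.02, 0.03, 0.0, 0.015, -0.01] * 5,
                            n_paths=10, horizon=12)
```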
A new approach to flow simulation using hybrid models
Solgi, Abazar; Zarei, Heidar; Nourani, Vahid; Bahmani, Ramin
2017-11-01
The necessity of flow prediction in rivers for proper management of water resources, and the need to determine the inflow to dam reservoirs, design efficient flood warning systems, and so forth, have always led water researchers to seek models with fast response and low error. In recent years, the development of Artificial Neural Networks and wavelet theory, and the use of combinations of models, have helped researchers estimate river flow progressively better. In this study, daily and monthly scales were used for simulating the flow of the Gamasiyab River, Nahavand, Iran. The first simulation was done using two types of models, ANN and ANFIS. Then, using wavelet theory to decompose the input signals of the parameters used, sub-signals were obtained and fed into the ANN and ANFIS to obtain the hybrid models WANN and WANFIS. In this study, in addition to the parameters of precipitation and flow, the parameters of temperature and evaporation were used to analyze their effects on the simulation. The results showed that using the wavelet transform improved the performance of the models on both the monthly and the daily scale. However, it had a greater effect on the monthly scale, and WANFIS was the best model.
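The decomposition step that turns a flow series into the sub-signals fed to the ANN/ANFIS can be illustrated with a one-level Haar wavelet split (a minimal stand-in for a full wavelet toolbox, not the study's code; the input values are hypothetical):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet split of a signal into an approximation
    (low-frequency) and a detail (high-frequency) sub-signal, the kind
    of sub-signals fed to hybrid WANN/WANFIS models."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:          # Haar pairs samples, so drop a trailing odd one
        x = x[:-1]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# Hypothetical 4-sample flow series: a flat signal has zero detail.
a, d = haar_dwt([1.0, 1.0, 2.0, 2.0])
```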
Simulation approach to coincidence summing in γ-ray spectrometry
Energy Technology Data Exchange (ETDEWEB)
Dziri, S., E-mail: samir.dziri@iphc.cnrs.fr [Groupe RaMsEs, Institut Pluridisciplinaire Hubert Curien (IPHC), University of Strasbourg, CNRS, IN2P3, UMR 7178, 23 rue de Loess, BP 28, 67037 Strasbourg Cedex 2 (France); Nourreddine, A.; Sellam, A.; Pape, A.; Baussan, E. [Groupe RaMsEs, Institut Pluridisciplinaire Hubert Curien (IPHC), University of Strasbourg, CNRS, IN2P3, UMR 7178, 23 rue de Loess, BP 28, 67037 Strasbourg Cedex 2 (France)
2012-07-15
Some of the radionuclides used for the efficiency calibration of an HPGe spectrometer are subject to coincidence summing (CS), and account must be taken of this phenomenon to obtain quantitative results when counting samples to determine their activity. We have used MCNPX simulations, which do not take CS into account, to obtain γ-ray peak intensities that were compared to those observed experimentally. The loss or gain of a measured peak intensity relative to the simulated peak is attributed to CS. The CS correction factors are compared with those of ETNA and GESPECOR. Application to a test sample prepared with known radionuclides gave values close to the published activities. - Highlights: • Coincidence summing occurs when the solid angle is increased. • The loss of counts gives rise to approximate efficiency curves, and hence to incorrect quantitative data. • To overcome this problem mono-energetic sources would be needed; otherwise, comparing MCNPX simulations with the experimental data yields the coincidence-summing correction factors. • Multiplying the approximate efficiency by these factors gives the accurate efficiency.
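The correction-factor arithmetic described above is simple: per γ line, divide the simulated (summing-free) peak intensity by the measured one, then scale the apparent efficiency. A sketch with hypothetical numbers (the energies and counts are illustrative, not from the paper):

```python
def cs_correction_factors(measured, simulated):
    """Per-line coincidence-summing correction factors: the ratio of the
    simulated (summing-free) peak intensity to the measured one. Keys are
    gamma-line energies in keV; all values here are hypothetical."""
    return {e: simulated[e] / measured[e] for e in measured}

# Summing-out loss: the measured 1332 keV peak is weaker than simulated,
# so its factor is > 1 and corrects the apparent efficiency upward.
factors = cs_correction_factors(measured={1332: 9.1e3},
                                simulated={1332: 1.0e4})
```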
A Simulational approach to teaching statistical mechanics and kinetic theory
International Nuclear Information System (INIS)
Karabulut, H.
2005-01-01
A computer simulation demonstrating how the Maxwell-Boltzmann distribution is reached in gases from a nonequilibrium distribution is presented. The algorithm can be generalized to gas particles (atoms or molecules) with internal degrees of freedom such as electronic excitations and vibrational-rotational energy levels. Another generalization of the algorithm is the case of a mixture of two different gases. By choosing the collision cross sections properly one can create quasi-equilibrium distributions. For example, by making same-atom cross sections large and different-atom cross sections very small, one can create a mixture of two gases at different temperatures in which the two gases interact slowly and come to equilibrium only after a long time. Similarly, for one kind of atom with internal degrees of freedom, one can create situations in which the internal degrees of freedom come to equilibrium much later than the translational degrees of freedom. In all these cases the equilibrium distribution that the algorithm gives is the same as expected from statistical mechanics. The algorithm can also be extended to cover chemical equilibrium, where species A and B react to form AB molecules. The laws of chemical equilibrium can be observed in this simulation, and the chemical equilibrium simulation can also help to teach the elusive concept of chemical potential.
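A toy version of such a relaxation simulation (my illustration of the idea, not the paper's algorithm) starts all particles at the same energy and lets random pairs redistribute their combined energy; the energy histogram relaxes to the Boltzmann exponential exp(-E/kT) with kT equal to the mean energy:

```python
import random

def relax_to_boltzmann(n=2000, steps=50000, e0=1.0, seed=3):
    """Toy kinetic relaxation: all particles start with energy e0; random
    pairs 'collide' and split their combined energy uniformly. Total
    energy is conserved exactly, and the per-particle energy distribution
    relaxes to the Boltzmann exponential with kT = e0."""
    rng = random.Random(seed)
    e = [e0] * n
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        total = e[i] + e[j]
        e[i] = rng.random() * total   # uniform split conserves the pair's energy
        e[j] = total - e[i]
    return e

energies = relax_to_boltzmann()
# At equilibrium, the fraction of particles below the mean energy
# approaches 1 - exp(-1) ~ 0.632, as the exponential distribution predicts.
```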
Lee, Kyoung O; Holmes, Thomas W; Calderon, Adan F; Gardner, Robin P
2012-05-01
Using a Monte Carlo (MC) simulation, random walks were used for pebble tracking in a two-dimensional geometry in the presence of a biased gravity field. We investigated the effect of viscosity damping in the presence of random Gaussian fluctuations. The particle tracks were generated by Molecular Dynamics (MD) simulation for a Pebble Bed Reactor; the MD simulations were conducted using noncohesive Hertz-Mindlin contact theory, and the random walk MC simulation correlates with the MD simulation. This treatment can easily be extended to include the generation of transient gamma-ray spectra from a single pebble that contains a radioactive tracer. The inverse analysis thereof could then be made to determine the uncertainty of the realistic measurement of transient positions of that pebble by any radiation detection system designed for that purpose. Copyright © 2011 Elsevier Ltd. All rights reserved.
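A damped random walk of the kind described (viscous damping, a biased gravity field, Gaussian fluctuations) can be sketched as a Langevin equation integrated with the Euler-Maruyama scheme. All parameter values below are invented for illustration, not taken from the paper.

```python
import random, math

random.seed(1)
# Hypothetical parameter values, for illustration only.
gamma = 2.0        # viscous damping rate (1/s)
g_bias = -9.81     # biased gravity field (m/s^2)
sigma = 0.5        # strength of the Gaussian fluctuations
dt = 1e-3

v, y = 0.0, 0.0
v_samples = []
for step in range(200000):
    # Euler-Maruyama step of  dv = (-gamma*v + g) dt + sigma dW
    v += (-gamma * v + g_bias) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    y += v * dt
    if step >= 100000:             # discard the initial transient
        v_samples.append(v)

v_mean = sum(v_samples) / len(v_samples)
v_terminal = g_bias / gamma        # analytic drift velocity of the damped walk
```

The time-averaged velocity converges to the analytic terminal velocity g/gamma, which is the signature of damping balancing the gravity bias.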
Delange, Pascal; Backes, Steffen; van Roekeghem, Ambroise; Pourovskii, Leonid; Jiang, Hong; Biermann, Silke
2018-04-01
The most intriguing properties of emergent materials are typically consequences of highly correlated quantum states of their electronic degrees of freedom. Describing those materials from first principles remains a challenge for modern condensed matter theory. Here, we review, apply and discuss novel approaches to spectral properties of correlated electron materials, assessing current day predictive capabilities of electronic structure calculations. In particular, we focus on the recent Screened Exchange Dynamical Mean-Field Theory scheme and its relation to generalized Kohn-Sham Theory. These concepts are illustrated on the transition metal pnictide BaCo2As2 and elemental zinc and cadmium.
DEFF Research Database (Denmark)
Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J
2012-01-01
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.
International Nuclear Information System (INIS)
Muroga, Takeo
1990-01-01
The free defect survival ratio is calculated by "cascade-annealing" computer simulation using the MARLOWE and modified DAIQUIRI codes for various Primary Knock-on Atom (PKA) spectra. The number of subcascades is calculated by "cut-off" calculation using MARLOWE. The adequacy of these methods is checked by comparing the results with experiments (surface segregation measurements and Transmission Electron Microscope cascade defect observations). The correlation using the weighted average recoil energy as a parameter shows that the saturation of the free defect survival ratio at high PKA energies is closely related to cascade splitting into subcascades. (author)
Czech Academy of Sciences Publication Activity Database
Kaminský, Jakub; Buděšínský, Miloš; Taubert, S.; Bouř, Petr; Straka, Michal
2013-01-01
Roč. 15, č. 23 (2013), s. 9223-9230 ISSN 1463-9076 R&D Projects: GA ČR GA13-03978S; GA ČR GPP208/10/P356; GA ČR GAP208/11/0105; GA MŠk(CZ) LH11033; GA ČR GA203/09/2037 Grant - others:AV ČR(CZ) M200551205 Institutional support: RVO:61388963 Keywords : fullerene * NMR * simulations * DFT Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 4.198, year: 2013
Herrmann, Christoph; Engel, Klaus-Jürgen; Wiegert, Jens
2010-12-21
The most obvious problem in obtaining spectral information with energy-resolving photon counting detectors in clinical computed tomography (CT) is the huge x-ray flux present in conventional CT systems. At high tube voltages (e.g. 140 kVp), despite the beam shaper, this flux can be close to 10⁹ cps mm⁻² in the direct beam or in regions behind the object that are close to the direct beam. Without accepting the drawbacks of truncated reconstruction, i.e. estimating missing direct-beam projection data, a photon-counting energy-resolving detector has to be able to deal with such high count rates. Sub-structuring pixels into sub-pixels is not enough to reduce the count rate per pixel to values that today's direct-converting Cd[Zn]Te material can cope with (≤ 10 Mcps in an optimistic view), because below 300 µm pixel pitch, x-ray cross-talk (Compton scatter and K-escape) and charge diffusion between pixels become problematic. By organising the detector in several layers, the count rate can be further reduced. However, this alone does not limit the count rates to the required level, since the high stopping power of the material becomes a disadvantage in the layered approach: a simple absorption calculation for 300 µm pixel pitch shows that the layer thickness required to keep the top layers in the direct beam below 10 Mcps/pixel is significantly below 100 µm. In a horizontal multi-layer detector, such thin layers are very difficult to manufacture due to the brittleness of Cd[Zn]Te. In a vertical configuration (also called edge-on illumination (Lundqvist et al 2001 IEEE Trans. Nucl. Sci. 48 1530-6, Roessl et al 2008 IEEE NSS-MIC-RTSD 2008, Conf. Rec. Talk NM2-3)), bonding of the readout electronics (with pixel pitches below 100 µm) is not straightforward, although it has already been done successfully (Pellegrini et al 2004 IEEE NSS MIC 2004 pp 2104-9). Obviously, for the top detector layers, materials with lower stopping power would be advantageous.
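The absorption argument above can be reproduced as a back-of-envelope Beer-Lambert calculation. The flux, pixel pitch, and attenuation coefficient below are assumed round numbers for illustration, not values quoted in the paper.

```python
import math

# All numbers are assumptions for a back-of-envelope check: 1e9
# photons/s/mm^2 in the direct beam, 300 um pixel pitch, and a linear
# attenuation coefficient of roughly 21 /cm for CdTe near 70 keV.
flux = 1e9                        # photons / s / mm^2
pitch_mm = 0.3
rate_in = flux * pitch_mm ** 2    # photons/s entering one pixel
rate_budget = 10e6                # <= 10 Mcps per pixel for the top layer
frac_max = rate_budget / rate_in  # largest absorbed fraction allowed
mu_per_cm = 21.0                  # assumed CdTe attenuation coefficient

# Beer-Lambert: absorbed fraction 1 - exp(-mu*t) <= frac_max
t_max_um = -math.log(1.0 - frac_max) / mu_per_cm * 1e4   # cm -> um
```

Under these assumptions the admissible top-layer thickness comes out well below 100 µm, consistent with the argument in the abstract.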
Nagaso, Masaru; Komatitsch, Dimitri; Moysan, Joseph; Lhuillier, Christian
2018-01-01
ASTRID, a French fourth-generation sodium-cooled fast reactor, is currently under development by the French Alternative Energies and Atomic Energy Commission (CEA). In this project, the development of monitoring techniques for a reactor in operation is identified as a key issue for improving plant safety. Ultrasonic measurement techniques (e.g. thermometry, visualization of internal objects) are regarded as powerful inspection tools for sodium-cooled fast reactors (SFRs), including ASTRID, because liquid sodium is opaque. Inside a sodium cooling circuit, the medium becomes heterogeneous because of the complex flow state, especially during operation, and the effect of this heterogeneity on acoustic propagation is not negligible. Verification experiments are therefore necessary for the development of component technologies, but such experiments using liquid sodium tend to be relatively large-scale. This is why numerical simulation methods are essential to precede real experiments or to fill in for the limited number of experimental results. Although various numerical methods have been applied to wave propagation in liquid sodium, none has been verified for three-dimensional heterogeneity. Moreover, since a reactor core is a complex acousto-elastic coupled region, it has also been difficult to simulate such problems with conventional methods. The objective of this study is to address these two points by applying the three-dimensional spectral element method. In this paper, our initial results on the three-dimensional simulation of a heterogeneous medium (the first point) are shown. To represent the heterogeneity of liquid sodium, a four-dimensional temperature field (three spatial dimensions and one temporal dimension) calculated by computational fluid dynamics (CFD) with large-eddy simulation was applied instead of the conventional approach (a Gaussian random field). This three-dimensional numerical
ESD full chip simulation: HBM and CDM requirements and simulation approach
Directory of Open Access Journals (Sweden)
E. Franell
2008-05-01
Verification of ESD safety at full-chip level is a major challenge for IC design. In particular, phenomena that originate in the overall product setup pose a hurdle on the way to ESD-safe products. For stress according to the Charged Device Model (CDM), a stumbling block for simulation-based analysis is the complex current distribution among a huge number of internal nodes, which leads to hardly predictable voltage drops inside the circuits.
This paper describes a methodology for Human Body Model (HBM) simulations with improved ESD-failure coverage and a novel methodology that replaces capacitive nodes within a resistive network by current sources for CDM simulation. This enables a highly efficient DC simulation that clearly marks CDM-relevant design weaknesses, allowing the software to be applied both during product development and for product verification.
A New Approach to the ALFA Trigger Simulator
Dziedzic, Bartosz
2016-01-01
The idea and principle of operation of a device which is used to test the trigger system of the ALFA detectors of the ATLAS experiment at the LHC are discussed. A new approach to the application control is presented. The application runs under control of the ArchLinuxARM operating system. Also, the drawbacks of the new solution are discussed.
Modern approaches to accelerator simulation and on-line control
International Nuclear Information System (INIS)
Lee, M.; Clearwater, S.; Theil, E.; Paxson, V.
1987-02-01
COMFORT-PLUS consists of three parts: (1) COMFORT (Control Of Machine Function, ORbits, and Trajectories), which computes the machine lattice functions and transport matrices along a beamline; (2) PLUS (Prediction from Lattice Using Simulation) which finds or compensates for errors in the beam parameters or machine elements; and (3) a highly graphical interface to PLUS. The COMFORT-PLUS package has been developed on a SUN-3 workstation. The structure and use of COMFORT-PLUS are described, and an example of the use of the package is presented
An introduction to statistical computing a simulation-based approach
Voss, Jochen
2014-01-01
A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems. Sampling-based simulation techniques are now an invaluable tool for exploring statistical models. This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods. It also includes some advanced met
Using a Competitive Approach to Improve Military Simulation Artificial Intelligence Design
National Research Council Canada - National Science Library
Stoykov, Sevdalin
2008-01-01
...) design can lead to improvement of the AI solutions used in military simulations. To demonstrate the potential of the competitive approach, ORTS, a real-time strategy game engine, and its competition setup are used...
A novel approach to evaluate and compare computational snow avalanche simulation
Directory of Open Access Journals (Sweden)
J.-T. Fischer
2013-06-01
An innovative approach for the analysis and interpretation of snow avalanche simulations in three-dimensional terrain is presented. Snow avalanche simulation software is used as a supporting tool in hazard mapping. When performing a high number of simulation runs, the user is confronted with a considerable amount of simulation results. The objective of this work is to establish an objective, model-independent framework to evaluate and compare results of different simulation approaches with respect to indicators of practical relevance, answering the important questions: how far and how destructively does an avalanche move down slope? For this purpose the Automated Indicator-based Model Evaluation and Comparison (AIMEC) method is introduced. It operates on a coordinate system that follows a given avalanche path. A multitude of simulation runs is performed with the snow avalanche simulation software SamosAT (Snow Avalanche MOdelling and Simulation – Advanced Technology). The variability of pressure-based runout and avalanche destructiveness along the path is investigated for multiple simulation runs, varying release volume and model parameters. In this way, results of deterministic simulation software are processed and analysed by means of statistical methods, and uncertainties originating from varying input conditions, model parameters or the different model implementations are assessed. The results show that AIMEC contributes to the interpretation of avalanche simulations, with broad applicability in model evaluation, comparison and the examination of scenario variations.
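The kind of path-following indicators AIMEC extracts (pressure-based runout and destructiveness) can be illustrated with a toy sketch. The pressure profiles and threshold below are invented; the real method operates on full three-dimensional simulation output projected onto the path coordinate.

```python
# Invented pressure profiles along a path-following coordinate s (metres),
# one list of (s, pressure in kPa) pairs per simulation run.
runs = {
    "run_a": [(0, 80.0), (500, 45.0), (1000, 20.0), (1500, 6.0), (2000, 0.5)],
    "run_b": [(0, 90.0), (500, 60.0), (1000, 35.0), (1500, 12.0), (2000, 2.0)],
}

P_LIM = 1.0   # kPa threshold below which the flow no longer counts (assumed)

def indicators(profile, p_lim=P_LIM):
    """Pressure-based runout (farthest s with p >= p_lim) and peak pressure."""
    runout = max((s for s, p in profile if p >= p_lim), default=None)
    peak = max(p for _, p in profile)
    return {"runout_m": runout, "peak_kpa": peak}

# One indicator summary per run, ready for statistical comparison.
summary = {name: indicators(prof) for name, prof in runs.items()}
```

With many runs (varying release volume and model parameters), the distribution of such indicators is what gets analysed statistically.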
Worsnop, Rochelle P.; Bryan, George H.; Lundquist, Julie K.; Zhang, Jun A.
2017-10-01
Offshore wind-energy development is planned for regions where hurricanes commonly occur, such as the U.S. Atlantic Coast. Even the most robust wind-turbine design (IEC Class I) may be unable to withstand a Category-2 hurricane (hub-height wind speeds >50 m s⁻¹). Characteristics of the hurricane boundary layer that affect the structural integrity of turbines, especially in major hurricanes, are poorly understood, primarily due to a lack of adequate observations that span typical turbine heights. Here we simulate wind profiles of an idealized Category-5 hurricane at high spatial (10 m) and temporal (0.1 s) resolution. By comparison with unique flight-level observations from a field project, we find that a relatively simple configuration of the Cloud Model I accurately represents the properties of Hurricane Isabel (2003) in terms of mean wind speeds, wind-speed variances, and power spectra. Comparisons of power spectra and coherence curves derived from our hurricane simulations with those used in current turbine design standards suggest that adjustments to these standards may be needed to capture characteristics of turbulence seen within the simulated hurricane boundary layer. To enable improved design standards for wind turbines to withstand hurricanes, we suggest modifications to account for shifts in peak power to higher frequencies and greater spectral coherence at large separations.
Statistical Approaches to Aerosol Dynamics for Climate Simulation
Energy Technology Data Exchange (ETDEWEB)
Zhu, Wei
2014-09-02
In this work, we introduce two general non-parametric regression analysis methods for errors-in-variables (EIV) models: the compound regression and the constrained regression. It is shown that these approaches are equivalent to each other and to the general parametric structural modeling approach. The advantages of these methods lie in their intuitive geometric representations, their distribution-free nature, and their ability to offer a practical solution when the ratio of the error variances is unknown. Each includes the classic non-parametric regression methods of ordinary least squares, geometric mean regression, and orthogonal regression as special cases. Both methods can be readily generalized to multiple linear regression with two or more random regressors.
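The three classic special cases named above admit simple closed-form slopes in the bivariate case. The sketch below is our own (with the error-variance ratio taken as 1 for orthogonal regression, a common convention) and checks that all three estimators coincide on noise-free data.

```python
import math

def eiv_slopes(x, y):
    """Slopes of the three classic special cases: ordinary least squares,
    geometric-mean regression, and orthogonal (total least squares)
    regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b_ols = sxy / sxx
    b_gmr = math.copysign(math.sqrt(syy / sxx), sxy)
    # Orthogonal regression minimises perpendicular distances; its slope
    # has the closed form below when the error-variance ratio equals 1.
    b_orth = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return b_ols, b_gmr, b_orth

# On noise-free data y = 2x + 1 the three estimators must coincide.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * a + 1.0 for a in xs]
b_ols, b_gmr, b_orth = eiv_slopes(xs, ys)
```

On noisy EIV data the three slopes differ, with OLS biased toward zero and the orthogonal and geometric-mean slopes lying above it.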
A simulated annealing approach to supplier selection aware inventory planning
Turk, Seda; Miller, Simon; Özcan, Ender; John, Robert
2015-01-01
Selection of an appropriate supplier is a crucial and challenging task in the effective management of a supply chain. Also, appropriate inventory management is critical to the success of a supply chain operation. In recent years, there has been a growing interest in the area of selection of an appropriate vendor and creating good inventory planning using supplier selection information. In this paper, we consider both of these tasks in a two-stage approach employing Interval Type-2 Fuzzy Sets ...
An efficient numerical approach to electrostatic microelectromechanical system simulation
International Nuclear Information System (INIS)
Pu, Li
2009-01-01
Computational analysis of electrostatic microelectromechanical systems (MEMS) requires an electrostatic analysis to compute the electrostatic forces acting on micromechanical structures and a mechanical analysis to compute the deformation of micromechanical structures. Typically, the mechanical analysis is performed on an undeformed geometry. However, the electrostatic analysis is performed on the deformed position of microstructures. In this paper, a new efficient approach to self-consistent analysis of electrostatic MEMS in the small deformation case is presented. In this approach, when the microstructures undergo small deformations, the surface charge densities on the deformed geometry can be computed without updating the geometry of the microstructures. This algorithm is based on the linear mode shapes of a microstructure as basis functions. A boundary integral equation for the electrostatic problem is expanded into a Taylor series around the undeformed configuration, and a new coupled-field equation is presented. This approach is validated by comparing its results with the results available in the literature and ANSYS solutions, and shows attractive features comparable to ANSYS.
Energy Technology Data Exchange (ETDEWEB)
DuBois, D. F. (Donald F.); Yin, L. (Lin); Daughton, W. S. (William S.); Bezzerides, B. (Bandel); Dodd, E. S. (Evan S.); Vu, H. X. (Hoanh X.)
2004-01-01
Detailed diagnostics of quasi-2D RPIC simulations of backward stimulated Raman scattering (BSRS), from single speckles under putative NIF conditions, reveal a complex spatio-temporal behavior. The scattered light consists of localized packets, tens of microns in width, traveling toward the laser at an appreciable fraction of the speed of light. Sub-picosecond reflectivity pulses occur as these packets leave the system. The LW activity consists of a front traveling with the light packets with a wake of free LWs traveling in the laser direction. The parametric coupling occurs in the front where the scattered light and LW overlap and are strongest. As the light leaves the plasma the LW quickly decays, liberating its trapped electrons. The high frequency part of the |n_e(k,ω)|² spectrum, where n_e is the electron density fluctuation, consists of a narrow streak or straight line with a slope that is the velocity of the parametric front. The time dependence of |n_e(k,t)|² shows that during each pulse the most intense value of k also 'chirps' to higher values, consistent with the k excursions seen in the |n_e(k,ω)|² spectrum. But k does not always return, in the subsequent pulses, to the original parametrically matched value, indicating that, in spite of side loss, the electron distribution function does not return to its original Maxwellian form. Liberated pulses of hot electrons result in down-stream, bump-on-tail distributions that excite LWs and beam acoustic modes deeper in the plasma. The frequency-broadened spectra are consistent with Thomson scatter spectra observed in TRIDENT single-hot-spot experiments in the high kλ_D, trapping regime. Further details, including a comparison of results from full PIC simulations and movies of the spatio-temporal behavior, will be given in the poster by L. Yin et al.
Toward Simulating Realistic Pursuit-Evasion Using a Roadmap-Based Approach
Rodriguez, Samuel; Denny, Jory; Zourntos, Takis; Amato, Nancy M.
2010-01-01
In this work, we describe an approach for modeling and simulating group behaviors for pursuit-evasion that uses a graph-based representation of the environment and integrates multi-agent simulation with roadmap-based path planning. We demonstrate the utility of this approach for a variety of scenarios including pursuit-evasion on terrains, in multi-level buildings, and in crowds. © 2010 Springer-Verlag Berlin Heidelberg.
Radosevic, M.; Hensen, J.L.M.; Wijsman, A.J.T.M.; Hensen, J.L.M.; Lain, M.
2004-01-01
Advanced architectural developments require an integrated approach to design, whereas simulation tools available today deal only with a small subset of the overall problem. The aim of this study is to enable run-time exchange of necessary data at suitable frequency between different simulation
A Systemic-Constructivist Approach to the Facilitation and Debriefing of Simulations and Games
Kriz, Willy Christian
2010-01-01
This article introduces some basic concepts of a systemic-constructivist perspective. These show that gaming simulation corresponds closely to a systemic-constructivist approach to learning and instruction. Some quality aspects of facilitating and debriefing simulation games are described from a systemic-constructivist point of view. Finally, a…
Varatharajan, I.; D'Amore, M.; Maturilli, A.; Helbert, J.; Hiesinger, H.
2018-04-01
A machine learning approach to spectral unmixing of emissivity spectra of Mercury is carried out using an endmember spectral library measured under simulated daytime surface conditions of Mercury. The study supports the MERTIS payload onboard the ESA/JAXA BepiColombo mission.
Case-mix reimbursement for nursing home services: Simulation approach
Adams, E. Kathleen; Schlenker, Robert E.
1986-01-01
Nursing home reimbursement based on case mix is a matter of growing interest. Several States either use or are considering this reimbursement method. In this article, we present a method for evaluating key outcomes of such a change for Connecticut nursing homes. A simulation model is used to replicate payments under the case-mix systems used in Maryland, Ohio, and West Virginia. The findings indicate that, compared with the system presently used in Connecticut, these systems would better relate dollar payments to measured patient need, and for-profit homes would benefit relative to nonprofit homes. The Ohio methodology would impose the most additional costs, whereas the West Virginia system would actually be somewhat less expensive in terms of direct patient care payments. PMID:10311776
Comparative evaluation of photovoltaic MPP trackers: A simulated approach
Directory of Open Access Journals (Sweden)
Barnam Jyoti Saharia
2016-12-01
This paper makes a comparative assessment of three popular maximum power point tracking (MPPT) algorithms used in photovoltaic power generation. A 120 Wp PV module, connected to a suitable resistive load through a boost converter, is taken as the reference for the study. Two profiles, varying solar insolation at fixed temperature and varying temperature at fixed solar insolation, are used to test the tracking efficiency of three MPPT algorithms based on perturb and observe (P&O), fuzzy logic, and neural network techniques. MATLAB/SIMULINK simulation software is used for the assessment, and the results indicate that the fuzzy logic-based tracker tracks variations in both solar insolation and temperature profiles more effectively than the P&O and neural network-based techniques.
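The P&O algorithm compared above can be sketched in a few lines: keep perturbing the operating voltage in the same direction while power rises, and reverse the perturbation when power falls. The P-V curve below is a toy single-peak model, not the 120 Wp module used in the paper.

```python
def pv_power(v):
    # Toy single-peak P-V curve (illustrative only): current collapses
    # steeply as v approaches an assumed open-circuit voltage of 21 V.
    i = 5.0 * (1.0 - (v / 21.0) ** 8)
    return max(v * i, 0.0)

def perturb_and_observe(v0=5.0, dv=0.1, steps=500):
    """Classic P&O hill-climbing toward the maximum power point."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_tracked = perturb_and_observe()
v_mpp = 21.0 * 9.0 ** (-1.0 / 8.0)   # analytic maximum of the toy curve
```

The tracker ends up oscillating within a few perturbation steps of the true maximum power point, which is the well-known steady-state behaviour of P&O.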
Application of cellular automata approach for cloud simulation and rendering
Energy Technology Data Exchange (ETDEWEB)
Christopher Immanuel, W. [Department of Physics, Vel Tech High Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Tamil Nadu, Chennai 600 062 (India); Paul Mary Deborrah, S. [Research Department of Physics, The American College, Tamil Nadu, Madurai 625 002 (India); Samuel Selvaraj, R. [Research Department of Physics, Presidency College, Tamil Nadu, Chennai 600 005 (India)
2014-03-15
Current techniques for creating clouds in games and other real-time applications produce static, homogeneous clouds. These clouds, while viable for real-time applications, do not exhibit the organic feel of clouds in nature. The clouds produced by our approach, when viewed over a period of time, are able to deform their initial shape and move in a more organic and dynamic way, and with this cloud-shape technology it should be possible in the future to create even more cloud shapes in real time with more forces. Clouds are an essential part of any computer model of a landscape or an animation of an outdoor scene, and a realistic animation of clouds is also important for creating scenes for flight simulators, movies, games, and other applications. Our goal was to create a realistic animation of clouds.
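Cloud-growth cellular automata of this kind are often built on three boolean fields per cell (humidity, activation, cloud), after Nagel and Raschke's cloud CA. The sketch below is a simplified, illustrative variant of such rules, not the authors' implementation.

```python
import random

random.seed(3)
W, H = 16, 16
# Three boolean fields per cell: hum (vapour available), act (phase
# transition happening), cld (visible cloud). Initial fields are random.
hum = [[random.random() < 0.30 for _ in range(W)] for _ in range(H)]
act = [[random.random() < 0.05 for _ in range(W)] for _ in range(H)]
cld = [[False] * W for _ in range(H)]

def neighbor_act(a, i, j):
    # Any of the four neighbours active (periodic boundary)?
    return any(a[(i + di) % H][(j + dj) % W]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(8):
    # Simplified transition rules: activation spreads into humid cells,
    # consumes the humidity, and leaves persistent cloud behind.
    hum2 = [[hum[i][j] and not act[i][j] for j in range(W)] for i in range(H)]
    act2 = [[not act[i][j] and hum[i][j] and neighbor_act(act, i, j)
             for j in range(W)] for i in range(H)]
    cld2 = [[cld[i][j] or act[i][j] for j in range(W)] for i in range(H)]
    hum, act, cld = hum2, act2, cld2

n_cloud = sum(sum(row) for row in cld)
```

Because cloud cells persist once set while activation keeps spreading, the cloud field grows and deforms over time rather than staying static.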
Battery Performance Modelling ad Simulation: a Neural Network Based Approach
Ottavianelli, Giuseppe; Donati, Alessandro
2002-01-01
This project has developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the performance degradation of the Cluster II battery system. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise battery lifetime, determining the best future charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five silver-cadmium batteries on board Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, which is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections for new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg-Marquardt algorithm.
A simulated annealing approach for redesigning a warehouse network problem
Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia
2017-09-01
Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure and the opportunity to exploit economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under the current economic conditions. This paper develops a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is the consideration of capacity constraints for existing warehouses. A simulated annealing algorithm is proposed to tackle the model. The numerical results show that the proposed model and solution method are practical.
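A simulated annealing loop for a toy warehouse open/close decision can be sketched as follows. The instance data, cooling schedule and acceptance rule are illustrative assumptions, not the paper's algorithm or problem instance.

```python
import random, math

random.seed(7)
# Toy warehouse-redesign instance (all numbers invented): choose which
# warehouses to keep open so fixed costs + serving distances are minimal.
fixed = [30.0, 25.0, 40.0, 20.0]                 # fixed cost per open site
dist = [[random.uniform(1, 20) for _ in range(4)] for _ in range(10)]

def cost(open_set):
    if not any(open_set):
        return float("inf")                      # at least one site must stay open
    serve = sum(min(d[w] for w in range(4) if open_set[w]) for d in dist)
    return serve + sum(f for f, o in zip(fixed, open_set) if o)

state = [True, True, True, True]
best = cur = cost(state)
best_state = list(state)
T = 50.0
for _ in range(3000):
    w = random.randrange(4)
    state[w] = not state[w]                      # propose: flip one decision
    new = cost(state)
    if new < cur or random.random() < math.exp((cur - new) / T):
        cur = new                                # accept (always if better)
        if new < best:
            best, best_state = new, list(state)
    else:
        state[w] = not state[w]                  # reject: undo the flip
    T *= 0.998                                   # geometric cooling schedule

# Brute-force check over all non-empty configurations (feasible here
# only because the toy instance has just 4 sites).
opt = min(cost([bool(b >> i & 1) for i in range(4)]) for b in range(1, 16))
```

The annealing schedule lets early uphill moves escape local minima; on this tiny instance the loop reaches the brute-force optimum.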
Seiffert, Betsy R.; Ducrozet, Guillaume
2018-01-01
We examine the implementation of a wave-breaking mechanism into a nonlinear potential flow solver. The success of the mechanism will be studied by implementing it into the numerical model HOS-NWT, which is a computationally efficient, open source code that solves for the free surface in a numerical wave tank using the high-order spectral (HOS) method. Once the breaking mechanism is validated, it can be implemented into other nonlinear potential flow models. To solve for wave-breaking, first a wave-breaking onset parameter is identified, and then a method for computing wave-breaking associated energy loss is determined. Wave-breaking onset is calculated using a breaking criteria introduced by Barthelemy et al. (J Fluid Mech https://arxiv.org/pdf/1508.06002.pdf, submitted) and validated with the experiments of Saket et al. (J Fluid Mech 811:642-658, 2017). Wave-breaking energy dissipation is calculated by adding a viscous diffusion term computed using an eddy viscosity parameter introduced by Tian et al. (Phys Fluids 20(6):066604, 2008, Phys Fluids 24(3), 2012), which is estimated based on the pre-breaking wave geometry. A set of two-dimensional experiments is conducted to validate the implemented wave breaking mechanism at a large scale. Breaking waves are generated by using traditional methods of evolution of focused waves and modulational instability, as well as irregular breaking waves with a range of primary frequencies, providing a wide range of breaking conditions to validate the solver. Furthermore, adjustments are made to the method of application and coefficient of the viscous diffusion term with negligible difference, supporting the robustness of the eddy viscosity parameter. The model is able to accurately predict surface elevation and corresponding frequency/amplitude spectrum, as well as energy dissipation when compared with the experimental measurements. This suggests the model is capable of calculating wave-breaking onset and energy dissipation
Seiffert, Betsy R.; Ducrozet, Guillaume; Bonnefoy, Félicien
2017-11-01
This study investigates a wave-breaking onset criterion to be implemented in the non-linear potential flow solver HOS-NWT. The model is a computationally efficient, open source code, which solves for the free surface in a numerical wave tank using the High-Order Spectral (HOS) method. The goal of this study is to determine the best method to identify the onset of random single and multiple breaking waves over a large domain at the exact time they occur. To identify breaking waves, a breaking onset criterion based on the ratio of the local energy flux velocity to the local crest velocity, introduced by Barthelemy et al. (2017), is selected. The breaking parameter is applied in the numerical model in a unique way: the breaking onset ratio is evaluated not only at the location of the wave crest, but at every point in the domain and at every time step. This allows the model to detect the onset of a breaking wave the moment it happens, without knowing anything about the wave a priori. Applying the breaking criterion at every point in the domain and at every time step requires the phase velocity to be available instantaneously everywhere in the domain. This is achieved by calculating the instantaneous phase velocity using the Hilbert transform and the dispersion relation. A comparison with more traditional crest-tracking techniques shows that the phase velocity calculated via the Hilbert transform at the location of the breaking wave crest provides a good approximation of the crest velocity. The ability of the selected wave-breaking criterion to predict single and multiple breaking events in two dimensions is validated by a series of large-scale experiments. Breaking waves are generated by energy focusing and modulational instability methods, with a wide range of primary frequencies. Steep irregular waves which lead to breaking waves, and irregular waves with an energy focusing wave superimposed, are also generated. This set of
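The instantaneous-phase-velocity step described above can be sketched numerically: build the analytic signal of the surface elevation with an FFT-based Hilbert transform, differentiate its unwrapped phase to obtain a local wavenumber, and convert to a phase speed through the deep-water dispersion relation c = sqrt(g/k). The snippet below is a minimal illustration of that idea, not the HOS-NWT implementation; the grid, wave parameters and function names are invented.

```python
import numpy as np

def analytic_signal(eta):
    """Analytic signal via FFT: zero the negative frequencies, double the
    positive ones (the FFT-based Hilbert transform construction)."""
    n = len(eta)
    E = np.fft.fft(eta)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(E * h)

def local_phase_speed(eta, dx, g=9.81):
    """Instantaneous phase speed c(x) from the local wavenumber of the
    analytic signal, using the deep-water dispersion relation c = sqrt(g/k)."""
    phase = np.unwrap(np.angle(analytic_signal(eta)))
    k_local = np.abs(np.gradient(phase, dx))
    return np.sqrt(g / np.maximum(k_local, 1e-12))

# Monochromatic check: the recovered speed should match sqrt(g/k0).
k0 = 2 * np.pi * 16 / 200.0           # exactly 16 periods over the domain
x = np.arange(0, 200.0, 0.1)
eta = 0.3 * np.cos(k0 * x)
c = local_phase_speed(eta, dx=0.1)
interior = c[100:-100]                # keep clear of the domain edges
print(np.median(interior), np.sqrt(9.81 / k0))
```

For a real breaking-onset check, this c(x) would be compared pointwise against the local energy flux velocity to form the criterion ratio.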
Yin, Chuancun; Wang, Chunwei
2009-11-01
The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process, and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] by establishing a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of the barrier dividend strategies considered in these three recent papers.
International Nuclear Information System (INIS)
Yamamoto, Toshihiro
2014-01-01
Highlights: • The cross power spectral density in ADS has correlated and uncorrelated components. • A frequency domain Monte Carlo method to calculate the uncorrelated one is developed. • The method solves the Fourier transformed transport equation. • The method uses complex-valued weights to solve the equation. • The new method reproduces well the CPSDs calculated with the time domain MC method. - Abstract: In an accelerator driven system (ADS), pulsed spallation neutrons are injected at a constant frequency. The cross power spectral density (CPSD), which can be used for monitoring the subcriticality of the ADS, is composed of correlated and uncorrelated components. The uncorrelated component is described by a series of Dirac delta functions that occur at integer multiples of the pulse repetition frequency. In the present paper, a Monte Carlo method that solves the Fourier transformed neutron transport equation with a periodically pulsed neutron source term has been developed to obtain the CPSD in ADSs. Since the Fourier transformed flux is a complex-valued quantity, the Monte Carlo method introduces complex-valued weights to solve the Fourier transformed equation. The Monte Carlo algorithm used in this paper is similar to one previously developed by the author to calculate the neutron noise caused by cross section perturbations. The newly developed Monte Carlo algorithm is benchmarked against the conventional time domain Monte Carlo simulation technique. The CPSDs are obtained with both the newly developed frequency domain Monte Carlo method and the conventional time domain Monte Carlo method for a one-dimensional infinite slab. The CPSDs obtained with the frequency domain Monte Carlo method agree well with those from the time domain method. The higher order mode effects on the CPSD in an ADS with a periodically pulsed neutron source are discussed.
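The complex-weight idea can be illustrated outside transport theory: sampling event times from a known density while carrying a weight exp(-iωt) turns an ordinary Monte Carlo mean into an estimator of a Fourier transform. The toy below is not Yamamoto's transport algorithm; the kernel and all parameters are invented, chosen only so the estimate can be checked against the analytic transform of an exponential density.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0          # decay constant of the (invented) time kernel
omega = 3.0        # angular frequency at which the transform is wanted
n = 200_000        # number of Monte Carlo histories

# Sample event times from f(t) = lam * exp(-lam * t). Each history carries
# the complex weight exp(-i * omega * t), so the sample mean of the weights
# estimates the Fourier transform F(omega) = lam / (lam + i * omega).
t = rng.exponential(1.0 / lam, n)
weights = np.exp(-1j * omega * t)
estimate = weights.mean()
exact = lam / (lam + 1j * omega)
print(estimate, exact)
```

The same bookkeeping generalizes to transport: the particle's usual statistical weight is multiplied by a complex phase accumulated along its history, and detector tallies become complex-valued.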
Angular spectrum approach for fast simulation of pulsed non-linear ultrasound fields
DEFF Research Database (Denmark)
Du, Yigang; Jensen, Henrik; Jensen, Jørgen Arendt
2011-01-01
The paper presents an Angular Spectrum Approach (ASA) for simulating pulsed non-linear ultrasound fields. The source of the ASA is generated by Field II, which can simulate array transducers of any arbitrary geometry and focusing. The non-linear ultrasound simulation program - Abersim, is used...... as the reference. A linear array transducer with 64 active elements is simulated by both Field II and Abersim. The excitation is a 2-cycle sine wave with a frequency of 5 MHz. The second harmonic field in the time domain is simulated using ASA. Pulse inversion is used in the Abersim simulation to remove...... the fundamental and keep the second harmonic field, since Abersim simulates non-linear fields with all harmonic components. ASA and Abersim are compared for the pulsed fundamental and second harmonic fields in the time domain at depths of 30 mm, 40 mm (focal depth) and 60 mm. Full widths at -6 dB (FWHM) are f0...
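The core of an angular spectrum computation is compact: FFT the source plane, multiply each plane-wave component by a propagation phase exp(i·kz·z) with kz = sqrt(k² − kx²), and inverse-FFT. The 1-D monochromatic sketch below is not Field II or Abersim code; the geometry and parameters are invented, and evanescent components are simply dropped, a common simplification.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, k, z):
    """Propagate a 1-D monochromatic pressure field p0(x) a distance z with
    the angular spectrum method: each plane-wave component is multiplied by
    exp(i*kz*z), kz = sqrt(k^2 - kx^2); evanescent components are dropped."""
    kx = 2 * np.pi * np.fft.fftfreq(len(p0), dx)
    prop = kx**2 <= k**2                           # propagating part only
    kz = np.sqrt(np.maximum(k**2 - kx**2, 0.0))
    H = np.where(prop, np.exp(1j * kz * z), 0.0)   # transfer function
    return np.fft.ifft(np.fft.fft(p0) * H)

c0, f = 1540.0, 5e6                  # speed of sound (m/s), 5 MHz source
k = 2 * np.pi * f / c0
dx = 5e-5                            # 50 um spatial sampling
x = (np.arange(512) - 256) * dx
p0 = np.exp(-(x / 2e-3) ** 2)        # Gaussian aperture, ~2 mm wide

# Propagate to 30 mm depth, then back: the propagating part of the
# spectrum is recovered exactly, so p_back should reproduce p0.
p_30mm = angular_spectrum_propagate(p0, dx, k, 30e-3)
p_back = angular_spectrum_propagate(p_30mm, dx, k, -30e-3)
print(np.max(np.abs(p_back - p0)))
```

A pulsed, non-linear computation as in the paper repeats this single-frequency operation for every temporal frequency (and harmonic) of interest.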
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization, using various numbers of CPUs ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential enhancement from parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications, and integrates it with FACET so that existing FACET applications can use the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare their computational efficiencies and consider the potential applications of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
Comparison of co-simulation approaches for building and HVAC/R system simulation
Trcka, M.; Wetter, M.; Hensen, J.L.M.; Jiang, Yi
2007-01-01
Appraisal of modern performance-based energy codes, as well as heating, ventilation, air-conditioning and refrigeration (HVAC/R) system design, requires use of an integrated building and system performance simulation program. However, the required scope of the modeling library of such integrated tools
Improving operational anodising process performance using simulation approach
International Nuclear Information System (INIS)
Liong, Choong-Yeun; Ghazali, Syarah Syahidah
2015-01-01
The use of aluminium is very widespread, especially in the transportation, electrical and electronics, architectural, automotive and engineering sectors. The anodizing process is therefore important for making aluminium durable, attractive and weather resistant. This research focuses on the anodizing process operations in the manufacturing and supply of aluminium extrusion. The data required for the development of the model were collected from observations and interviews conducted in the study. To study the current system, the processes involved in anodizing are modeled using Arena 14.5 simulation software. These consist of five main processes, namely degreasing, etching, desmut, anodizing and sealing, together with 16 other processes. The results obtained were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that can be implemented on the original model. Based on the comparisons made between the improvement methods, productivity could be increased by reallocating the workers and reducing loading time.
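A serial line like the one described (degrease, etch, desmut, anodize, seal) can be prototyped without commercial software such as Arena: for single-server FIFO stages, job finish times follow the tandem-queue recursion start = max(job ready, station free). The sketch below uses invented processing times and arrival rates purely to show how station utilizations expose the bottleneck.

```python
import random

random.seed(1)

# Illustrative mean processing times (minutes); the stage names follow the
# abstract but the numbers are made up for this sketch.
stages = [("degrease", 4.0), ("etch", 6.0), ("desmut", 3.0),
          ("anodize", 9.0), ("seal", 5.0)]
n_jobs, interarrival = 500, 10.0

free_at = {name: 0.0 for name, _ in stages}   # time each station frees up
busy = {name: 0.0 for name, _ in stages}      # accumulated service time

for i in range(n_jobs):
    t = i * interarrival                      # job release time
    for name, mean in stages:
        start = max(t, free_at[name])         # wait for station AND for job
        service = random.expovariate(1.0 / mean)
        free_at[name] = start + service
        busy[name] += service
        t = free_at[name]                     # job moves to the next stage

makespan = t                                  # last job leaves the last stage
utilization = {name: busy[name] / makespan for name, _ in stages}
bottleneck = max(utilization, key=utilization.get)
print(bottleneck, utilization)
```

With these invented numbers the anodizing station carries the heaviest load, which is where a reallocation experiment (extra worker, shorter loading) would be tried first.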
Improving operational anodising process performance using simulation approach
Energy Technology Data Exchange (ETDEWEB)
Liong, Choong-Yeun, E-mail: lg@ukm.edu.my; Ghazali, Syarah Syahidah, E-mail: syarah@gapps.kptm.edu.my [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor DE (Malaysia)
2015-10-22
Mesh-based weight window approach for Monte Carlo simulation
International Nuclear Information System (INIS)
Liu, L.; Gardner, R.P.
1997-01-01
The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major factor limiting its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among the various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust, and it is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and the map is then used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is its lack of user-friendliness: it normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces, to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.
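The splitting/Russian-roulette game at the heart of the weight window method is short to state in code: a particle above the window is split into roughly w/w_s copies, a particle below it survives roulette with probability w/w_s at weight w_s. The sketch below (window bounds and weights are invented) checks the key property that the game is statistically unbiased, i.e. the expected total outgoing weight equals the incoming weight.

```python
import random

random.seed(2)

def apply_weight_window(w, w_low, w_high, w_survival):
    """Play the weight window game on one particle of weight w and return
    the list of surviving particle weights. In every branch the expected
    total outgoing weight equals w, so tallies stay unbiased."""
    if w > w_high:                            # too heavy: split
        n = max(int(w / w_survival + 0.5), 2)
        return [w / n] * n
    if w < w_low:                             # too light: Russian roulette
        if random.random() < w / w_survival:
            return [w_survival]
        return []                             # killed
    return [w]                                # inside the window: unchanged

# Empirical unbiasedness check for a light, a mid and a heavy particle.
means = {}
for w_in in (0.01, 0.2, 5.0):
    trials = 100_000
    total = sum(sum(apply_weight_window(w_in, 0.25, 2.0, 0.5))
                for _ in range(trials))
    means[w_in] = total / trials
print(means)
```

In a mesh-based scheme as proposed in the paper, `w_low`/`w_high` would vary per mesh cell according to the importance map rather than being global constants.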
Game-Enhanced Simulation as an Approach to Experiential Learning in Business English
Punyalert, Sansanee
2017-01-01
This dissertation aims to integrate various learning approaches, i.e., multiple literacies, experiential learning, game-enhanced learning, and global simulation, into an extracurricular module, in which it remodels traditional ways of teaching input, specifically, the lexical- and grammatical-only approaches of business English at a private…
McNab, Fiona; Hillebrand, Arjan; Swithenby, Stephen J; Rippon, Gina
2012-01-01
Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics: low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows: 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the 20-30 Hz beta frequency band but in different time windows: 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in the time and frequency domains.
A Simulation-Based Approach to Training Operational Cultural Competence
Johnson, W. Lewis
2010-01-01
Cultural knowledge and skills are critically important for military operations, emergency response, or any job that involves interaction with a culturally diverse population. However, it is not obvious what cultural knowledge and skills need to be trained, and how to integrate that training with the other training that trainees must undergo. Cultural training needs to be broad enough to encompass both regional (culture-specific) and cross-cultural (culture-general) competencies, yet be focused enough to result in targeted improvements in on-the-job performance. This paper describes a comprehensive instructional development methodology and training technology framework that focuses cultural training on operational needs. It supports knowledge acquisition, skill acquisition, and skill transfer. It supports both training and assessment, and integrates with other aspects of operational skills training. Two training systems will be used to illustrate this approach: the Virtual Cultural Awareness Trainer (VCAT) and the Tactical Dari language and culture training system. The paper also discusses new and emerging capabilities that are integrating cultural competence training more strongly with other aspects of training and mission rehearsal.
Mechatronics by bond graphs an object-oriented approach to modelling and simulation
Damić, Vjekoslav
2015-01-01
This book presents a computer-aided approach to the design of mechatronic systems. Its subject is integrated modeling and simulation in a visual computer environment. Since the first edition, the simulation software has changed enormously, becoming more user-friendly and easier to use; a second edition therefore became necessary to take these improvements into account. The modeling is based on a system top-down and bottom-up approach. The mathematical models are generated in the form of differential-algebraic equations and solved using numerical and symbolic algebra methods. The integrated approach developed is applied to mechanical, electrical and control systems, multibody dynamics, and continuous systems.
Cho, G. S.
2017-09-01
For performance optimization of Refrigerated Warehouses, design parameters are selected based on physical parameters, such as the number of equipment units and aisles and the forklift speeds, for ease of modification. This paper provides a comprehensive framework for the system design of Refrigerated Warehouses. We propose a modeling approach that aims at simulation optimization to meet the required design specifications, using the Design of Experiments (DOE), and analyzes the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of Refrigerated Warehouse operations.
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that uses a software development process simulation model to consider and convey the level of uncertainty that exists when developing an initial estimate. A NASA project is analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
Saeidifar, Maryam; Mirzaei, Hamidreza; Ahmadi Nasab, Navid; Mansouri-Torshizi, Hassan
2017-11-01
The binding ability between a new water-soluble palladium(II) complex [Pd(bpy)(bez-dtc)]Cl (where bpy is 2,2‧-bipyridine and bez-dtc is benzyl dithiocarbamate), as an antitumor agent, and calf thymus DNA was evaluated using various physicochemical methods, such as UV-Vis absorption, competitive fluorescence studies, viscosity measurement, zeta potential and circular dichroism (CD) spectroscopy. The Pd(II) complex was synthesized and characterized using elemental analysis, molar conductivity measurements, FT-IR, ¹H NMR, ¹³C NMR and electronic spectra studies. The anticancer activity against HeLa cell lines demonstrated lower cytotoxicity than cisplatin. The binding constants and thermodynamic parameters were determined at different temperatures (300 K, 310 K and 320 K), showing that the complex can bind to DNA via electrostatic forces. This result was further confirmed by the viscosity and zeta potential measurements. The CD spectral results demonstrated that the binding of the Pd(II) complex to DNA induced conformational changes in DNA. We hope that these results will provide a basis for further studies and for the practical clinical use of anticancer drugs.
International Nuclear Information System (INIS)
Okonda, J.J.
2015-01-01
Energy dispersive X-ray fluorescence (EDXRF) spectroscopy is an analytical method for identification and quantification of elements in materials by measurement of their spectral energy and intensity. The EDXRFS spectroscopic technique involves simultaneous non-invasive acquisition of both fluorescence and scatter spectra from samples for quantitative determination of trace elemental content in complex-matrix materials. The objective is to develop a chemometric-aided EDXRFS method for rapid diagnosis of cancer and its severity (staging), based on analysis of trace elements (Cu, Zn, Fe, Se and Mn), their speciation, and multivariate alterations of these elements in cancerous body tissue samples as cancer biomarkers. The quest for early diagnosis of cancer is motivated by the fact that early intervention translates to a higher survival rate and better quality of life. The chemometric-aided EDXRFS cancer diagnostic model has been evaluated as a direct and rapid alternative, superior to the traditional quantitative methods used in XRF such as the FP method. PCA results of cultured samples indicate that it is possible to characterize cancer at early and late stages of development based on trace elemental profiles.
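The chemometric step (PCA on trace-element concentrations) can be sketched with a plain SVD. The data below are entirely synthetic; the element means, spreads and group shifts are invented for illustration only. The point is that a common multivariate alteration across Cu, Zn, Fe, Se and Mn shows up as a separation along the first principal component.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic trace-element concentrations (ppm) for two tissue groups; the
# shift pattern between groups is invented purely to illustrate the method.
elements = ["Cu", "Zn", "Fe", "Se", "Mn"]
healthy = rng.normal([5, 60, 400, 0.3, 1.0], [1, 8, 50, 0.05, 0.2], (40, 5))
tumour = rng.normal([8, 45, 480, 0.2, 1.4], [1, 8, 50, 0.05, 0.2], (40, 5))
X = np.vstack([healthy, tumour])

# PCA: standardize each element, then take the SVD of the centred matrix.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt.T                    # sample coordinates in PC space

pc1_healthy, pc1_tumour = scores[:40, 0], scores[40:, 0]
print(pc1_healthy.mean(), pc1_tumour.mean())
```

In a real workflow, the PC scores (rather than any single element) would feed the diagnostic classification, which is the sense in which the method is "chemometric-aided".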
Sabeerali, C. T.; Ajayamohan, R. S.; Giannakis, Dimitrios; Majda, Andrew J.
2017-11-01
An improved index for real-time monitoring and forecast verification of monsoon intraseasonal oscillations (MISOs) is introduced using the recently developed nonlinear Laplacian spectral analysis (NLSA) technique. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions are extracted from unfiltered daily rainfall data from the Global Precipitation Climatology Project over the south Asian monsoon region. Two modes representing the full life cycle of the northeastward-propagating boreal summer MISO are identified from the hierarchy of LB eigenfunctions. These modes have a number of advantages over MISO modes extracted via extended empirical orthogonal function analysis including higher memory and predictability, stronger amplitude and higher fractional explained variance over the western Pacific, Western Ghats, and adjoining Arabian Sea regions, and more realistic representation of the regional heat sources over the Indian and Pacific Oceans. Real-time prediction of NLSA-derived MISO indices is demonstrated via extended-range hindcasts based on NCEP Coupled Forecast System version 2 operational output. It is shown that in these hindcasts the NLSA MISO indices remain predictable out to ˜3 weeks.
Chiadamrong, N.; Piyathanavong, V.
2017-12-01
Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields results closer to the optimum, with much shorter solving time, than those obtained from the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
Local Interaction Simulation Approach for Fault Detection in Medical Ultrasonic Transducers
Directory of Open Access Journals (Sweden)
Z. Hashemiyan
2015-01-01
A new approach is proposed for modelling medical ultrasonic transducers operating in air. The method is based on finite elements and the local interaction simulation approach. The latter leads to significant reductions of computational costs. Transmission and reception properties of the transducer are analysed using in-air reverberation patterns. The proposed approach can help to provide earlier detection of transducer faults and their identification, reducing the risk of misdiagnosis due to poor image quality.
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Constraints can be handled similarly, using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
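A minimal version of the parametric idea: parameterize the execution schedule by a single front-loading parameter, simulate the cost scenarios once with common random numbers, and pick the parameter that minimizes expected cost plus a CVaR penalty. Everything below (the cost model, the one-parameter schedule, the coefficients) is an invented toy, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
X, N, n_paths = 1.0, 10, 20_000     # shares, trading periods, scenarios
sigma, eta = 0.02, 0.05             # per-period volatility, impact coefficient
alpha, lam = 0.95, 1.0              # CVaR confidence level, risk weight
Z = rng.standard_normal((n_paths, N))   # common random numbers for all thetas

def objective(theta):
    """Expected cost and CVaR of the one-parameter schedule v_k ~ exp(-theta*k)
    (theta > 0 front-loads the sale); cost = price risk + quadratic impact."""
    w = np.exp(-theta * np.arange(N))
    v = X * w / w.sum()                       # shares sold each period
    remaining = X - np.cumsum(v)              # inventory exposed to each move
    cost = (sigma * Z) @ remaining + eta * np.sum(v**2)
    var_cut = np.quantile(cost, alpha)
    cvar = cost[cost >= var_cut].mean()       # mean of the worst 5% outcomes
    return cost.mean(), cvar, cost.mean() + lam * cvar

thetas = np.linspace(0.0, 1.0, 11)
results = {t: objective(t) for t in thetas}
best_theta = min(results, key=lambda t: results[t][2])
print(best_theta, results[best_theta])
```

With any risk aversion (lam > 0) the optimizer front-loads the schedule: selling earlier shrinks the inventory exposed to price noise at the cost of higher impact, exactly the trade-off the CVaR term prices.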
Simulation of a weather radar display for over-water airborne radar approaches
Clary, G. R.
1983-01-01
Airborne radar approach (ARA) concepts are being investigated as a part of NASA's Rotorcraft All-Weather Operations Research Program on advanced guidance and navigation methods. This research is being conducted using both piloted simulations and flight test evaluations. For the piloted simulations, a mathematical model of the airborne radar was developed for over-water ARAs to offshore platforms. This simulated flight scenario requires radar simulation of point targets, such as oil rigs and ships, distributed sea clutter, and transponder beacon replies. Radar theory, weather radar characteristics, and empirical data derived from in-flight radar photographs are combined to model a civil weather/mapping radar typical of those used in offshore rotorcraft operations. The resulting radar simulation is realistic and provides the needed simulation capability for ongoing ARA research.
Microscopic approach of the spectral property of 1+ and high-spin states in 124Te nucleus
International Nuclear Information System (INIS)
Shi Zhuyi; Ni Shaoyong; Tong Hong; Zhao Xingzhi
2004-01-01
Using a microscopic sdIBM-2+2q·p· approach, the spectra of the low-spin and partial high-spin states in the ¹²⁴Te nucleus are calculated with reasonable success. In particular, the 1₁⁺, 1₂⁺, 3₁⁺, 3₂⁺ and 5₁⁺ states are successfully reproduced, and the energy relationship resulting from this approach identifies the 6₁⁺, 8₁⁺ and 10₁⁺ states as aligned states of the two protons. This can explain the recent experimental results that collective structures may coexist with single-particle states. The approach is thus a powerful tool for describing the spectra of general nuclei without clear symmetry and of isotopes located in transitional regions. Finally, the aligned-state structure and the broken-pair energy of the two quasi-particles are discussed.
Directory of Open Access Journals (Sweden)
Jeremy D. Sperling
2013-04-01
Introduction: Simulation-based medical education (SBME) is increasingly being utilized for teaching clinical skills in undergraduate medical education. Studies have evaluated the impact of adding SBME to third- and fourth-year curricula; however, very little research has assessed its efficacy for teaching clinical skills in pre-clerkship coursework. To measure the impact of a simulation exercise during a pre-clinical curriculum, a simulation session was added to a pre-clerkship course at our medical school, where the clinical approach to altered mental status (AMS) is traditionally taught using a lecture and an interactive case-based session in a small group format. The objective was to measure simulation's impact on students' knowledge acquisition, comfort, and perceived competence with regard to the AMS patient. Methods: AMS simulation exercises were added to the lecture and small group case sessions in June 2010 and 2011. Simulation sessions consisted of two clinical cases using a high-fidelity full-body simulator, followed by a faculty debriefing after each case. Student participation in a simulation session was voluntary. Students who did and did not participate in a simulation session completed a post-test to assess knowledge and a survey to understand comfort and perceived competence in their approach to AMS. Results: A total of 154 students completed the post-test and survey, and 65 (42%) attended a simulation session. Post-test scores were higher in students who attended a simulation session compared to those who did not (p<0.001). Students who participated in a simulation session were more comfortable in their overall approach to treating AMS patients (p=0.05). They were also more likely to state that they could articulate a differential diagnosis (p=0.03), know what initial diagnostic tests are needed (p=0.01), and understand what interventions are useful in the first few minutes (p=0.003). Students who participated in a simulation session
Energy Technology Data Exchange (ETDEWEB)
El Ouassini, Ayoub [Ecole Polytechnique de Montreal, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: ayoub.el-ouassini@polymtl.ca; Saucier, Antoine [Ecole Polytechnique de Montreal, departement de mathematiques et de genie industriel, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: antoine.saucier@polymtl.ca; Marcotte, Denis [Ecole Polytechnique de Montreal, departement de genie civil, geologique et minier, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: denis.marcotte@polymtl.ca; Favis, Basil D. [Ecole Polytechnique de Montreal, departement de genie chimique, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: basil.favis@polymtl.ca
2008-04-15
We propose a new sequential stochastic simulation approach for black and white images in which we focus on the accurate reproduction of the small scale geometry. Our approach aims at reproducing correctly the connectivity properties and the geometry of clusters which are small with respect to a given length scale called block size. Our method is based on the analysis of statistical relationships between adjacent square pieces of image called blocks. We estimate the transition probabilities between adjacent blocks of pixels in a training image. The simulations are constructed by juxtaposing one by one square blocks of pixels, hence the term patchwork simulations. We compare the performance of patchwork simulations with Strebelle's multipoint simulation algorithm on several types of images of increasing complexity. For images composed of clusters which are small with respect to the block size (e.g. squares, discs and sticks), our patchwork approach produces better results than Strebelle's method. The most noticeable improvement is that the cluster geometry is usually reproduced accurately. The accuracy of the patchwork approach is limited primarily by the block size. Clusters which are significantly larger than the block size are usually not reproduced accurately. As an example, we applied this approach to the analysis of a co-continuous polymer blend morphology as derived from an electron microscope micrograph.
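The two ingredients of the patchwork method, estimating transition statistics between adjacent pixel blocks from a training image and growing a simulation by juxtaposing blocks, can be sketched for a single horizontal band. The training image, block size and sampling scheme below are invented for illustration; the full method works in two dimensions and conditions on more neighbours.

```python
import random
from collections import defaultdict

import numpy as np

random.seed(4)
rng = np.random.default_rng(4)
b = 2                                     # block size in pixels

# Training image: vertical stripes of width 2 with 2% salt noise.
train = np.zeros((32, 32), dtype=int)
train[:, ::4] = 1
train[:, 1::4] = 1
train = train ^ (rng.random(train.shape) < 0.02)

def block_key(a):
    """Hashable representation of a b-by-b pixel block."""
    return tuple(a.flatten())

# Estimate P(right block | left block) from horizontally adjacent blocks.
trans = defaultdict(list)
for i in range(0, train.shape[0] - b + 1, b):
    for j in range(0, train.shape[1] - 2 * b + 1, b):
        left = block_key(train[i:i + b, j:j + b])
        right = block_key(train[i:i + b, j + b:j + 2 * b])
        trans[left].append(right)

# Patchwork simulation of one band: start from a random training block and
# juxtapose blocks sampled from the empirical transition lists.
row = [random.choice(list(trans))]
for _ in range(15):
    nexts = trans[row[-1]]
    row.append(random.choice(nexts) if nexts else random.choice(list(trans)))
sim = np.hstack([np.array(kb).reshape(b, b) for kb in row])
print(sim.shape, sim.mean(), train.mean())
```

Because blocks (rather than single pixels) are the sampling unit, small-scale motifs such as the stripe width survive in the simulation, which is the property the paper emphasizes.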
A hybrid approach to simulate multiple photon scattering in X-ray imaging
International Nuclear Information System (INIS)
Freud, N.; Letang, J.-M.; Babot, D.
2005-01-01
A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results as the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results.
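The deterministic second stage can be sketched in heavily simplified form: every recorded scattering event deposits a contribution in every detector pixel. This sketch assumes isotropic scattering and ignores exit-path attenuation and the form-factor/incoherent-scattering-function models the paper actually uses; the function name and geometry are illustrative:

```python
import numpy as np

def forced_detection(events, pixels):
    """Forced-detection step (simplified): each scattering event (position,
    statistical weight) contributes to every pixel in proportion to the
    solid angle the pixel subtends, assuming isotropic scattering."""
    image = np.zeros(len(pixels))
    for pos, weight in events:
        for k, pix in enumerate(pixels):
            d = np.linalg.norm(np.asarray(pix) - np.asarray(pos))
            # isotropic phase function: 1/(4*pi) per unit solid angle;
            # a unit-area pixel at distance d subtends roughly 1/d**2
            image[k] += weight / (4.0 * np.pi * d ** 2)
    return image

# one unit-weight scattering event equidistant from two pixels
events = [((0.0, 0.0, 0.0), 1.0)]
pixels = [(0.0, 1.0, 1.0), (0.0, -1.0, 1.0)]
img = forced_detection(events, pixels)
```

The key property the paper exploits is visible even here: once the event list exists, the detector image is computed deterministically, so per-scattering-order contributions can be accumulated without further stochastic sampling.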
Abe, Yukie; Kawahara, Chikako; Yamashina, Akira; Tsuboi, Ryoji
2013-01-01
In Japan, nursing education is being reformed to improve nurses' competency, and interest in simulation-based education as a means of doing so is increasing. The aim was to examine the effectiveness of simulation-based education in improving the competency of cardiovascular critical care nurses. A training program that consisted of lectures, training in cardiovascular procedures, and scenario simulations was conducted with 24 Japanese nurses working at a university hospital. Participants were allocated to 4 groups, each of which visited 4 zones and underwent scenario simulations that included debriefings during and after the simulations. In each zone, the scenario simulation was repeated and participants assessed their own technical skills by scoring their performance on a rubric. Before and after the simulations, participants also completed a survey that used the Teamwork Activity Inventory in Nursing Scale (TAINS) to assess their nontechnical skills. All the groups showed increased rubric scores after the second simulation compared with the first, despite differences in the order in which the scenarios were presented. Furthermore, the survey revealed significant increases in scores on the teamwork scale for the subscale items "Attitudes of the superior," "Job satisfaction" (P = .01), and "Confidence as a team member" (P = .004). Our new educational approach of using repeated scenario simulations and TAINS seemed not only to enhance individual nurses' technical skills in critical care nursing but also to improve their nontechnical skills somewhat.
Simulation-based comparison of two approaches frequently used for dynamic contrast-enhanced MRI
International Nuclear Information System (INIS)
Zwick, Stefan; Brix, Gunnar; Tofts, Paul S.; Strecker, Ralph; Kopp-Schneider, Annette; Laue, Hendrik; Semmler, Wolfhard; Kiessling, Fabian
2010-01-01
The purpose was to compare two approaches for the acquisition and analysis of dynamic contrast-enhanced MRI data with respect to differences in the modelling of the arterial input function (AIF), the dependency of the model parameters on physiological parameters, and their numerical stability. Eight hundred tissue concentration curves were simulated for different combinations of perfusion, permeability, interstitial volume and plasma volume based on two measured AIFs and analysed according to the two commonly used approaches. The transfer constants K_trans (Approach 1) and k_ep (Approach 2) were correlated with all tissue parameters; K_trans showed a stronger dependency on perfusion, and k_ep on permeability. The volume parameters v_e (Approach 1) and A (Approach 2) were mainly influenced by the interstitial and plasma volume. Both approaches allow only a rough characterisation of tissue microcirculation and microvasculature. Approach 2 seems to be somewhat more robust than Approach 1, mainly due to the different methods of CA administration. (orig.)
An Open Source-based Approach to the Development of Research Reactor Simulator
International Nuclear Information System (INIS)
Joo, Sung Moon; Suh, Yong Suk; Park, Cheol Park
2016-01-01
In reactor design, operator training, safety analysis, and research using a reactor, it is essential to simulate time-dependent reactor behaviours such as neutron population, fluid flow, and heat transfer. Furthermore, in order to use the simulator to train and educate operators, a mockup of the reactor user interface is required. There are commercial software tools available for reactor simulator development; however, they are costly. Especially for research reactors, it is difficult to justify the high cost, as regulations on research reactor simulators are not as strict as those for commercial Nuclear Power Plants (NPPs). An open source-based simulator for a research reactor is configured as a distributed control system based on the EPICS framework. To demonstrate the use of the simulation framework proposed in this work, we consider a toy example. This example approximates a 1-second impulse reactivity insertion in a reactor, which represents the instantaneous removal and reinsertion of a control rod. The change in reactivity results in a slightly delayed change in power and corresponding increases in temperatures throughout the system. We proposed an approach for developing a research reactor simulator using open source software tools, and showed preliminary results. The results demonstrate that the approach presented in this work can provide an economical and viable way of developing research reactor simulators.
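The impulse-reactivity toy example can be reproduced with one-delayed-group point kinetics integrated by explicit Euler. All parameter values below are illustrative textbook numbers, not those of any particular research reactor, and the 1 s impulse is placed at t = 1 s for convenience:

```python
# one-delayed-group point-kinetics sketch of the toy example: a 1 s
# reactivity impulse (rod out, then back in); illustrative parameters
beta, Lambda, lam = 0.0065, 1e-4, 0.08   # delayed fraction, generation time, decay const
rho_imp = 0.001                           # inserted reactivity, below prompt critical

def simulate(t_end=10.0, dt=1e-4):
    n = 1.0                               # normalized power
    c = beta / (Lambda * lam)             # equilibrium precursor concentration
    history = []
    for i in range(int(t_end / dt)):
        t = i * dt
        rho = rho_imp if 1.0 <= t < 2.0 else 0.0   # the 1 s impulse
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n, c = n + dt * dn, c + dt * dc
        history.append((t, n))
    return history

hist = simulate()
powers = [p for _, p in hist]
```

The trace shows the expected behaviour: a prompt jump during the impulse, then a delayed relaxation to a power slightly above the initial level because the precursor population grew during the transient. The small explicit-Euler step is needed because the prompt time constant (Lambda/beta) is short.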
Surrogate model approach for improving the performance of reactive transport simulations
Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris
2016-04-01
Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for simulation of reactive transport to consist of at least two coupled simulation models. First is a hydrodynamics simulator that is responsible for simulating the flow of groundwaters and transport of solutes. Hydrodynamics simulators are well established technology and can be very efficient. When hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. Second is a geochemical simulation model that is coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly. This is a problem that makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem that involves 1D Calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models available through the caret and DiceEval packages for R, to be used as surrogate models. These were trained on randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output using the part of sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines
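The surrogate workflow described above — sample the expensive simulator, train a statistical model on the input-output pairs, then validate on held-out samples — can be sketched in a few lines. The abstract uses R's caret and DiceEval model families; here a plain least-squares polynomial stands in as the surrogate, and the "simulator" is an invented toy function, not a geochemical code:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_geochemistry(x):
    """Stand-in for a costly geochemical simulator (illustrative only):
    maps an input concentration to a reacted concentration."""
    return np.exp(-2.0 * x) + 0.1 * x

# sample the simulator, then split into training and validation sets
x = rng.uniform(0.0, 1.0, 200)
y = expensive_geochemistry(x)
x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]

# fit a cubic polynomial as the surrogate; caret/DiceEval would offer
# many richer model families (e.g. MARS), but the workflow is the same
coeffs = np.polyfit(x_tr, y_tr, deg=3)
surrogate = np.poly1d(coeffs)

# validate on the held-out inputs, as the abstract describes
rmse = np.sqrt(np.mean((surrogate(x_va) - y_va) ** 2))
```

Once trained, evaluating the surrogate costs a polynomial evaluation instead of a geochemistry solve, which is what makes million-cell reactive transport tractable in this scheme.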
Directory of Open Access Journals (Sweden)
Yan Li
2017-11-01
Due to the volatile and correlated nature of wind speed, a high share of wind power penetration poses challenges to power system production simulation. Existing power system probabilistic production simulation approaches fall short of considering the time-varying characteristics of wind power and load, as well as the correlation between wind speeds at the same time, which causes problems in planning and analysis for power systems with high wind power penetration. Based on the universal generating function (UGF), this paper proposes a novel probabilistic production simulation approach considering wind speed correlation. UGF is utilized to develop chronological models of wind power that characterize wind speed correlation, as well as chronological models of conventional generation sources and load. The supply and demand are matched chronologically to obtain not only generation schedules but also reliability indices, both at each simulation interval and over the whole period. The proposed approach has been tested on the improved IEEE-RTS 79 test system and is compared with the Monte Carlo approach and the sequence operation theory approach. The results verify that the proposed approach is both simple to compute and accurate.
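The UGF machinery itself is compact: a unit's UGF is a map from performance level to probability, and independent units combine by applying an operator to levels while multiplying probabilities. The sketch below assumes independent units for simplicity (the paper's contribution is precisely handling correlated wind, which this toy omits); the numbers are illustrative:

```python
from collections import defaultdict

def ugf_combine(u1, u2, op=lambda a, b: a + b):
    """Compose two universal generating functions, each a dict mapping a
    performance level (e.g. MW output) to its probability. Levels combine
    via `op` (sum of capacities here); probabilities multiply."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# two identical wind units: 0 MW with prob 0.3 or 50 MW with prob 0.7
unit = {0: 0.3, 50: 0.7}
farm = ugf_combine(unit, unit)

# probability the farm meets a 50 MW demand at this interval
p_meet = sum(p for g, p in farm.items() if g >= 50)
```

In the chronological scheme of the paper, one such UGF would be built per simulation interval and matched against the load level of that interval to accumulate both generation schedules and reliability indices.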
A novel approach for modelling complex maintenance systems using discrete event simulation
International Nuclear Information System (INIS)
Alrabghi, Abdullah; Tiwari, Ashutosh
2016-01-01
Existing approaches for modelling maintenance rely on oversimplified assumptions which prevent them from reflecting the complexity found in industrial systems. In this paper, we propose a novel approach that enables the modelling of non-identical multi-unit systems without restrictive assumptions on the number of units or their maintenance characteristics. Modelling complex interactions between maintenance strategies and their effects on assets in the system is achieved by accessing event queues in Discrete Event Simulation (DES). The approach builds on the wide success DES has achieved in manufacturing by allowing integration with models that are closely related to maintenance, such as production and spare parts systems. Additional advantages of using DES include rapid modelling and visual interactive simulation. The proposed approach is demonstrated in a simulation-based optimisation study of a published case. The current research is one of the first to optimise maintenance strategies simultaneously with their parameters while considering production dynamics and spare parts management. The findings of this research provide insights for non-conflicting objectives in maintenance systems. In addition, the proposed approach can be used to facilitate the simulation and optimisation of industrial maintenance systems. - Highlights: • This research is one of the first to optimise maintenance strategies simultaneously. • New insights for non-conflicting objectives in maintenance systems. • The approach can be used to optimise industrial maintenance systems.
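The event-queue mechanism at the heart of DES is easy to show in miniature. This sketch simulates a single machine under corrective maintenance with deterministic times so the trace is easy to follow; the paper's approach manipulates the queues of far richer multi-unit models, and all names and numbers here are illustrative:

```python
import heapq

def simulate_maintenance(horizon, mtbf, repair_time):
    """Minimal discrete-event simulation: a priority queue of (time, kind)
    events drives one machine through failure/repair cycles."""
    events = [(mtbf, "fail")]          # first failure scheduled at t = mtbf
    downtime, failures = 0.0, 0
    while events:
        t, kind = heapq.heappop(events)
        if t >= horizon:
            break
        if kind == "fail":
            failures += 1
            downtime += repair_time
            heapq.heappush(events, (t + repair_time, "repaired"))
        else:  # machine back up: schedule its next failure
            heapq.heappush(events, (t + mtbf, "fail"))
    return failures, downtime

failures, downtime = simulate_maintenance(horizon=100.0, mtbf=10.0, repair_time=2.0)
```

With a 10 h time between failures and 2 h repairs over a 100 h horizon, failures occur at t = 10, 22, 34, ..., 94 — eight failures and 16 h of downtime. Replacing the deterministic times with sampled ones, and the single machine with interacting units and strategies, recovers the setting the paper optimises over.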
International Nuclear Information System (INIS)
Ustinov, Eugene A.
2005-01-01
An approach to the formulation of inversion algorithms for remote sensing in the thermal spectral region in the case of a scattering planetary atmosphere, based on the adjoint equation of radiative transfer (Ustinov (JQSRT 68 (2001) 195; JQSRT 73 (2002) 29); referred to as Papers 1 and 2, respectively, in the main text), is applied to the general case of retrievals of atmospheric and surface parameters for the scattering atmosphere with nadir viewing geometry. Analytic expressions for the corresponding weighting functions for atmospheric parameters and partial derivatives for surface parameters are derived. The case of pure atmospheric absorption with a scattering underlying surface is considered, and convergence to results obtained for non-scattering atmospheres (Ustinov (JQSRT 74 (2002) 683), referred to as Paper 3 in the main text) is demonstrated.
Unified Approach to Modeling and Simulation of Space Communication Networks and Systems
Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth
2010-01-01
Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution: the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks
Directory of Open Access Journals (Sweden)
Xiong Li
2012-11-01
This paper presents a novel approach to showing how a contractor in an agent-based simulation of a complex warfare system, such as a multi-sensor battlefield reconnaissance system, can be selected in the Contract Net Protocol (CNP) with high efficiency. We first analyze the agent and agent-based simulation framework, CNP, and collaborators, and present the agent interaction chain used to actualize CNP and establish the agents' trust network. We then obtain each contractor's importance weight and dynamic trust by presenting a fuzzy similarity-based algorithm and a trust-modifying algorithm, and on this basis we propose a contractor selection approach based on maximum dynamic integrative trust. We validate the feasibility and capability of this approach by implementing the simulation, analyzing the compared results, and checking the model.
B-1 Systems Approach to Training. Simulation Technology Assessment Report (STAR)
1975-07-01
Psychology in the Air Force, 1974. Creelman, J.A., Evaluation of Approach Training Procedures, U.S. Naval School of Aviation Med., Proj. No. NM001-109-107...training. 3.2 PHYSICAL VERSUS PSYCHOLOGICAL SIMULATION In the previous section, the term "physical simulation" was used to represent the case where... psychology that there is no "step function" threshold. Rather, detection capability plotted against physical parameter strength results in an ogival
Simulation-Based Approach to Operating Costs Analysis of Freight Trucking
Directory of Open Access Journals (Sweden)
Ozernova Natalja
2015-12-01
The article is devoted to the problem of cost uncertainty in road freight transportation services. It introduces a statistical approach, based on Monte Carlo simulation on spreadsheets, to the analysis of operating costs. The developed model makes it possible to estimate operating freight trucking costs under different configurations of cost factors. Running the simulations supports important conclusions regarding sensitivity to different factors, optimal decisions, and the variability of operating costs.
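The spreadsheet Monte Carlo model translates directly into code: draw each cost factor from a distribution, combine them into a per-trip cost, and read off the spread. The distributions, parameter values, and cost structure below are illustrative assumptions, not the article's calibrated figures:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_costs(n_runs=10_000):
    """Monte Carlo model of per-trip trucking operating costs.
    Each run draws a trip distance, fuel price, and consumption rate,
    then sums fuel, driver, and maintenance cost components."""
    km = rng.uniform(400, 600, n_runs)            # trip distance, km
    fuel_price = rng.normal(1.30, 0.10, n_runs)   # EUR per litre
    consumption = rng.normal(32.0, 2.0, n_runs)   # litres per 100 km
    fuel = km * consumption / 100.0 * fuel_price
    driver = 0.25 * km                            # wage cost per km
    maintenance = 0.08 * km
    return fuel + driver + maintenance

costs = simulate_costs()
mean_cost = costs.mean()
# the spread of operating costs is what the sensitivity analysis targets
p5, p95 = np.percentile(costs, [5, 95])
```

Re-running the simulation while freezing one factor at a time reveals which input drives most of the variability, which is exactly the kind of conclusion the article draws from its spreadsheet runs.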
Erickson, J. D.; Eckelkamp, R. E.; Barta, D. J.; Dragg, J.; Henninger, D. L. (Principal Investigator)
1996-01-01
This paper examines mission simulation as an approach to develop requirements for automation and robotics for Advanced Life Support Systems (ALSS). The focus is on requirements and applications for command and control, control and monitoring, situation assessment and response, diagnosis and recovery, adaptive planning and scheduling, and other automation applications in addition to mechanized equipment and robotics applications to reduce the excessive human labor requirements to operate and maintain an ALSS. Based on principles of systems engineering, an approach is proposed to assess requirements for automation and robotics using mission simulation tools. First, the story of a simulated mission is defined in terms of processes with attendant types of resources needed, including options for use of automation and robotic systems. Next, systems dynamics models are used in simulation to reveal the implications for selected resource allocation schemes in terms of resources required to complete operational tasks. The simulations not only help establish ALSS design criteria, but also may offer guidance to ALSS research efforts by identifying gaps in knowledge about procedures and/or biophysical processes. Simulations of a planned one-year mission with 4 crewmembers in a Human Rated Test Facility are presented as an approach to evaluation of mission feasibility and definition of automation and robotics requirements.
Directory of Open Access Journals (Sweden)
Arunava Maity
2015-01-01
This paper considers an infinite-buffer queueing system with a birth-death modulated Markovian arrival process (BDMMAP) and arbitrary service time distribution. BDMMAP is an excellent representation of arrival processes in which fractal behavior such as burstiness, correlation, and self-similarity is observed, for example in Ethernet LAN traffic. This model was first analyzed by Nishimura (2003), who proposed a twofold spectral theory approach for it. That approach, however, proves tedious and difficult to employ for practical purposes. The objective of this paper is to analyze the same model with an alternative methodology proposed by Chaudhry et al. (2013) (referred to as the CGG method). The CGG method is comparatively simple, mathematically tractable, and easy to implement. Its crux is finding the roots of the characteristic equation associated with the probability generating function (pgf) of the queue-length distribution, which obviates any eigenvalue algebra and iterative analysis. Both methods are presented in a stepwise manner for easy accessibility, followed by some illustrative examples in accordance with the context.
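The root-finding idea can be illustrated on a much simpler queue than a BDMMAP (this toy is an assumption for illustration, not the paper's model). For a discrete-time queue with Bernoulli arrivals, arrival pgf A(z) = (1-p) + p*z, and batch service of size 2, the characteristic equation is z**2 = A(z); its roots determine the boundary probabilities of the queue-length pgf, and a numerical root finder replaces all eigenvalue algebra:

```python
import numpy as np

p = 0.4
# characteristic equation z**2 - p*z - (1 - p) = 0,
# coefficients listed from the highest degree down
roots = np.roots([1.0, -p, -(1.0 - p)])

# z = 1 is always a root (probabilities sum to one); the root strictly
# inside the unit disk is the one the method actually uses
inner = [z for z in roots if abs(z) < 1.0 - 1e-9]
```

Here the roots are 1 and -(1 - p) = -0.6, so the inner root is available in closed form and confirms the numerics. For the BDMMAP model the characteristic polynomial is of higher degree, but the workflow — build the polynomial, find the roots inside the unit disk, back out the unknown constants — is the same.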
Spectral element simulation of ultrafiltration
DEFF Research Database (Denmark)
Hansen, M.; Barker, Vincent A.; Hassager, Ole
1998-01-01
for the unknowns at the mesh nodes. This system is solved via a technique combining the penalty method, Newton-Raphson iterations, static condensation, and a solver for banded linear systems. In addition, a smoothing technique is used to handle a singularity in the boundary condition at the membrane...
Pennaforte, Thomas; Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude
2016-02-17
Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education for assessing clinical reasoning skills has rarely been reported. More specifically, it is unclear whether clinical reasoning is better acquired if the instructor's input occurs entirely after the scenario or is integrated during it. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions, without providing feedback. The aim of this study is to evaluate the effectiveness of simulation with iterative discussions versus the classical approach to simulation in developing the reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. This will be a prospective, exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one 30-minute, audio/video-recorded, complex high-fidelity simulation (SID or classical) covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semi-structured interview will be conducted after each simulation. Data analyses will use the SPSS and NVivo software packages. This study is in its preliminary stages and the results are expected to be made available by April, 2016. This will be the first study to explore a new
Towards Faster FEM Simulation of Thin Film Superconductors: A Multiscale Approach
DEFF Research Database (Denmark)
Rodriguez Zermeno, Victor Manuel; Mijatovic, Nenad; Træholt, Chresten
2011-01-01
This work presents a method to simulate the electromagnetic properties of superconductors with high aspect ratio, such as the commercially available second generation superconducting YBCO tapes. The method is based on a multiscale representation for both thickness and width of the superconducting...... at considerably lower computational time. Several test cases were simulated, including transport current, externally applied magnetic field, and a combination of both. The results are in good agreement with recently published numerical simulations. The computational time to solve the present multiscale approach......
Energy Technology Data Exchange (ETDEWEB)
Sugiyama, T; Yamada, T; Noguchi, T [Japan Quality Assurance Organization, Tokyo (Japan)
1997-11-25
A study was made of the time-variation of the performance of CSI lamps for solar simulators. In order to accurately evaluate the standard heat collection performance of solar systems indoors, MITI installed an artificial solar light source in the Solar Techno-Center of the Japan Quality Assurance Organization for trial use and evaluation. The CSI lamp is superior in durability and can simulate daytime sunlight. The light source is composed of 72 1-kW metal halide lamps arranged in a 3.5 m × 3.5 m plane. The study of the time-variation of the spectral distribution and irradiance under intermittent switching of the lamps showed a sufficient durability of 2000 h. To ensure the accuracy of the solar heat collector measurement system, periodic calibration is carried out using reference standards. To ensure the reliability and stability of the switching system, periodic maintenance of the power source, stabilizer, and electric system is also carried out in addition to that of the CSI lamps. Stable irradiance and accuracy are maintained by such maintenance and periodic exchange of lamps. 6 figs., 4 tabs.
Estimating oil price 'Value at Risk' using the historical simulation approach
International Nuclear Information System (INIS)
Cabedo, J.D.; Moya, I.
2003-01-01
In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimate of the maximum oil price change associated with a given likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the standard historical simulation approach; the historical simulation with ARMA forecasts (HSAF) approach, developed in this paper; and the variance-covariance method based on forecasts from autoregressive conditional heteroskedasticity models. The results obtained indicate that the HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification. (author)
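The standard historical simulation approach amounts to taking an empirical quantile of past price changes. The sketch below shows that baseline on a synthetic price path (the numbers are illustrative, not market data); the paper's HSAF variant would instead fit an ARMA model first and apply the quantile to its forecast errors:

```python
import numpy as np

def historical_var(prices, alpha=0.95):
    """Historical-simulation VaR: the loss that past price changes did not
    exceed with probability alpha, read off the empirical distribution."""
    changes = np.diff(prices)                        # day-over-day changes
    # the (1 - alpha) quantile of changes is the worst-case move;
    # VaR is reported as a positive loss figure
    return -np.percentile(changes, 100.0 * (1.0 - alpha))

# toy oil price history: a random walk around 60 (illustrative only)
rng = np.random.default_rng(1)
prices = 60.0 + np.cumsum(rng.normal(0.0, 1.0, 500))
var95 = historical_var(prices, alpha=0.95)
```

With unit-variance daily changes, the 95% VaR lands near the theoretical 1.645, up to sampling error. The method's appeal, which the paper builds on, is that it imposes no distributional assumption on the price changes themselves.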
A Cost-Effective Approach to Hardware-in-the-Loop Simulation
DEFF Research Database (Denmark)
Pedersen, Mikkel Melters; Hansen, M. R.; Ballebye, M.
2012-01-01
This paper presents an approach for developing cost-effective hardware-in-the-loop (HIL) simulation platforms for use in controller software test and development. The approach is aimed at the many smaller manufacturers of e.g. mobile hydraulic machinery, which often do not have very advanced...... testing facilities at their disposal. A case study is presented where a HIL simulation platform is developed for the controller of a truck-mounted loader crane. The total expense for hardware and software is less than $10,000.
Goh, Yang Miang; Askar Ali, Mohamed Jawad
2016-08-01
One of the key challenges in improving construction safety and health is the management of safety behavior. From a system point of view, workers work unsafely due to system level issues such as poor safety culture, excessive production pressure, inadequate allocation of resources and time and lack of training. These systemic issues should be eradicated or minimized during planning. However, there is a lack of detailed planning tools to help managers assess the impact of their upstream decisions on worker safety behavior. Even though simulation had been used in construction planning, the review conducted in this study showed that construction safety management research had not been exploiting the potential of simulation techniques. Thus, a hybrid simulation framework is proposed to facilitate integration of safety management considerations into construction activity simulation. The hybrid framework consists of discrete event simulation (DES) as the core, but heterogeneous, interactive and intelligent (able to make decisions) agents replace traditional entities and resources. In addition, some of the cognitive processes and physiological aspects of agents are captured using system dynamics (SD) approach. The combination of DES, agent-based simulation (ABS) and SD allows a more "natural" representation of the complex dynamics in construction activities. The proposed hybrid framework was demonstrated using a hypothetical case study. In addition, due to the lack of application of factorial experiment approach in safety management simulation, the case study demonstrated sensitivity analysis and factorial experiment to guide future research. Copyright © 2015 Elsevier Ltd. All rights reserved.
Approaches to the simulation of unconfined flow and perched groundwater flow in MODFLOW
Bedekar, Vivek; Niswonger, Richard G.; Kipp, Kenneth; Panday, Sorab; Tonkin, Matthew
2012-01-01
Various approaches have been proposed to manage the nonlinearities associated with the unconfined flow equation and to simulate perched groundwater conditions using the MODFLOW family of codes. The approaches comprise a variety of numerical techniques, applied to formulations of the unconfined, partially-saturated groundwater flow equation, that prevent dry cells from becoming inactive and achieve a stable solution. Keeping dry cells active avoids a discontinuous head solution, which in turn improves the effectiveness of parameter estimation software that relies on continuous derivatives. Most approaches implement upstream weighting of intercell conductance and Newton-Raphson linearization to obtain robust convergence. In this study, several published approaches were implemented in a stepwise manner into MODFLOW for comparative analysis. First, a comparative analysis of the methods is presented using synthetic examples that create convergence issues or difficulty in handling perched conditions with the more common dry-cell simulation capabilities of MODFLOW. Next, a field-scale three-dimensional simulation is presented to examine the stability and performance of the discussed approaches in larger, practical simulation settings.
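The Newton-Raphson linearization the compared formulations rely on can be shown on the smallest possible unconfined problem: one interior cell between two fixed heads, with intercell transmissivity equal to the arithmetic-mean saturated thickness (unit hydraulic conductivity and spacing). This is a toy of my own construction, not MODFLOW's actual scheme:

```python
def newton_unconfined(h0, h2, tol=1e-12, max_iter=50):
    """Newton-Raphson solution of steady 1-D unconfined flow for the head
    h1 of one interior cell between fixed heads h0 and h2. The residual is
    inflow minus outflow with transmissivity T = saturated thickness."""
    h1 = 0.5 * (h0 + h2)          # initial guess
    for _ in range(max_iter):
        # T_left = (h0 + h1)/2 and T_right = (h1 + h2)/2, so the residual
        # simplifies to f = (h0**2 + h2**2 - 2*h1**2) / 2
        f = 0.5 * (h0 + h1) * (h0 - h1) - 0.5 * (h1 + h2) * (h1 - h2)
        df = -2.0 * h1            # analytic Jacobian d f / d h1
        step = f / df
        h1 -= step
        if abs(step) < tol:
            break
    return h1

h1 = newton_unconfined(10.0, 6.0)
```

Because the residual is quadratic in h1, the exact answer is h1 = sqrt((h0**2 + h2**2) / 2), which the Newton iteration reaches in a handful of steps. The nonlinearity of T with respect to head is exactly what makes the full unconfined equation harder than its confined counterpart, and upstream weighting replaces the arithmetic mean in the robust formulations the study compares.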
Directory of Open Access Journals (Sweden)
Juhani Latvakoski
2015-07-01
Full Text Available Modern society is facing great challenges due to pollution and increased carbon dioxide (CO2 emissions. As part of solving these challenges, the use of renewable energy sources and electric vehicles (EVs is rapidly increasing. However, increased dynamics have triggered problems in balancing energy supply and consumption demand in the power systems. The resulting uncertainty and unpredictability of energy production, consumption, and management of peak loads has caused an increase in costs for energy market actors. Therefore, the means for studying the balancing of local smart grids with EVs is a starting point for this paper. The main contribution is a simulation-based approach which was developed to enable the study of the balancing of local distribution grids with EV batteries in a cost-efficient manner. The simulation-based approach is applied to enable the execution of a distributed system with the simulation of a local distribution grid, including a number of charging stations and EVs. A simulation system has been constructed to support the simulation-based approach. The evaluation has been carried out by executing the scenario related to balancing local distribution grids with EV batteries in a step-by-step manner. The evaluation results indicate that the simulation-based approach is able to facilitate the evaluation of smart grid– and EV-related communication protocols, control algorithms for charging, and functionalities of local distribution grids as part of a complex, critical cyber-physical system. In addition, the simulation system is able to incorporate advanced methods for monitoring, controlling, tracking, and modeling behavior. The simulation model of the local distribution grid can be executed with the smart control of charging and discharging powers of the EVs according to the load situation in the local distribution grid. The resulting simulation system can be applied to the study of balancing local smart grids with EV
Park, Jun; Hwang, Seung-On
2017-11-01
The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets of physical parameterization combinations of two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data by securing the consistency with large-scale forcing over a regional domain. This consequently indirectly helps two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.
An Interval-Valued Approach to Business Process Simulation Based on Genetic Algorithms and the BPMN
Directory of Open Access Journals (Sweden)
Mario G.C.A. Cimino
2014-05-01
Full Text Available Simulating organizational processes characterized by interacting human activities, resources, business rules and constraints, is a challenging task, because of the inherent uncertainty, inaccuracy, variability and dynamicity. With regard to this problem, currently available business process simulation (BPS methods and tools are unable to efficiently capture the process behavior along its lifecycle. In this paper, a novel approach of BPS is presented. To build and manage simulation models according to the proposed approach, a simulation system is designed, developed and tested on pilot scenarios, as well as on real-world processes. The proposed approach exploits interval-valued data to represent model parameters, in place of conventional single-valued or probability-valued parameters. Indeed, an interval-valued parameter is comprehensive; it is the easiest to understand and express and the simplest to process, among multi-valued representations. In order to compute the interval-valued output of the system, a genetic algorithm is used. The resulting process model allows forming mappings at different levels of detail and, therefore, at different model resolutions. The system has been developed as an extension of a publicly available simulation engine, based on the Business Process Model and Notation (BPMN standard.
Directory of Open Access Journals (Sweden)
C. Nabert
2017-05-01
Full Text Available The interaction of the solar wind with a planetary magnetic field causes electrical currents that modify the magnetic field distribution around the planet. We present an approach to estimating the planetary magnetic field from in situ spacecraft data using magnetohydrodynamic (MHD) simulations. The method is developed with respect to the upcoming BepiColombo mission to planet Mercury, aimed at determining the planet's magnetic field and its interior electrical conductivity distribution. In contrast to the widely used empirical models, global MHD simulations allow the calculation of the strongly time-dependent interaction process of the solar wind with the planet. As a first approach, we use a simple MHD simulation code that includes time-dependent solar wind and magnetic field parameters. The planetary parameters are estimated by minimizing the misfit of spacecraft data and simulation results with a gradient-based optimization. As the calculation of gradients with respect to many parameters is usually very time-consuming, we investigate the application of an adjoint MHD model. This adjoint MHD model is generated by an automatic differentiation tool to compute the gradients efficiently. The computational cost for determining the gradient with an adjoint approach is nearly independent of the number of parameters. Our method is validated by application to THEMIS (Time History of Events and Macroscale Interactions during Substorms) magnetosheath data to estimate Earth's dipole moment.
A continuous-discontinuous approach to simulate failure of quasi-brittle materials
Moonen, P.; Sluys, L.J.; Carmeliet, J.
2009-01-01
A continuous-discontinuous approach to simulate failure is presented. The formulation covers both diffuse damage processes in the bulk material as well as the initiation and propagation of discrete cracks. Comparison with experimental data on layered sandstone shows that the modeling strategy
Saraswat, Satya Prakash; Anderson, Dennis M.; Chircu, Alina M.
2014-01-01
This paper describes the development and evaluation of a graduate level Business Process Management (BPM) course with process modeling and simulation as its integral component, being offered at an accredited business university in the Northeastern U.S. Our approach is similar to that found in other Information Systems (IS) education papers, and…
DEFF Research Database (Denmark)
Hansen, Anders L.; Lund, Erik; Pinho, Silvestre T.
2009-01-01
In this paper a hierarchical FE approach is utilized to simulate delamination in a composite plate loaded in uni-axial compression. Progressive delamination is modelled by use of cohesive interface elements that are automatically embedded. The non-linear problem is solved quasi-statically in whic...
BlueSky ATC Simulator Project : An Open Data and Open Source Approach
Hoekstra, J.M.; Ellerbroek, J.
2016-01-01
To advance ATM research as a science, ATM research results should be made more comparable. A possible way to do this is to share tools and data. This paper presents a project that investigates the feasibility of a fully open-source and open-data approach to air traffic simulation. Here, the first of
Bespalov, Vadim; Udina, Natalya; Samarskaya, Natalya
2017-10-01
Wind energy is one of the most promising renewable energy sources. This article reviews a methodological approach to the simulation and selection, at the design stage, of ecologically efficient and energetically economical wind turbines, taking into account the characteristics of the natural-territorial complex and the peculiarities of the anthropogenic load in the territory of the WT location.
A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media
Energy Technology Data Exchange (ETDEWEB)
Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S. [Lawrence Berkeley Laboratory, CA (United States)
1995-03-01
We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation, but unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.
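A minimal sketch of a lumped-parameter fracture/matrix interflow term can illustrate the source/sink idea described above. The sketch assumes the classical linear Warren-Root form q = σ(k/μ)(p_f − p_m) rather than the paper's generalized early/late-time equation, and all parameter values are invented.

```python
# Illustrative Warren-Root-style interflow: the matrix block exchanges fluid
# with the fracture in proportion to the local pressure difference, and the
# matrix pressure relaxes toward the fracture pressure. All constants are
# assumptions, not values from the paper.

sigma, k, mu = 0.5, 1e-13, 1e-3   # shape factor [1/m^2], permeability, viscosity
phi_m, c_m = 0.1, 1e-8            # matrix porosity and compressibility

def imbibition_rate(p_f, p_m):
    """Volumetric interflow per unit bulk volume (fracture -> matrix)."""
    return sigma * (k / mu) * (p_f - p_m)

def relax_matrix(p_f, p_m0, dt, steps):
    """Explicitly integrate the matrix storage equation."""
    p_m = p_m0
    history = []
    for _ in range(steps):
        q = imbibition_rate(p_f, p_m)
        p_m += dt * q / (phi_m * c_m)   # d(p_m)/dt = q / (phi_m * c_m)
        history.append(p_m)
    return history

hist = relax_matrix(p_f=2.0e6, p_m0=1.0e6, dt=10.0, steps=2000)
```

In a simulator coupling, `imbibition_rate` would be evaluated per fracture element as a source/sink term, which is the role the computational module plays for the TOUGH simulator in the abstract.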
DEFF Research Database (Denmark)
Chivaee, Hamid Sarlak; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming
2012-01-01
Large eddy simulation (LES) of flow in a wind farm is studied in neutral as well as thermally stratified atmospheric boundary layer (ABL). An approach has been practiced to simulate the flow in a fully developed wind farm boundary layer. The approach is based on the Immersed Boundary Method (IBM......) and involves implementation of an arbitrary prescribed initial boundary layer (See [1]). A prescribed initial boundary layer profile is enforced through the computational domain using body forces to maintain a desired flow field. The body forces are then stored and applied on the domain through the simulation...... and the boundary layer shape will be modified due to the interaction of the turbine wakes and buoyancy contributions. The implemented method is capable of capturing the most important features of wakes of wind farms [1] while having the advantage of resolving the wall layer with a coarser grid than typically...
Spatial and spectral effects in subcritical system pulsed experiments
International Nuclear Information System (INIS)
Dulla, S.; Nervo, M.; Ravetto, P.; Carta, M.
2013-01-01
Accurate neutronic models are needed for the interpretation of pulsed experiments in subcritical systems. In this work, the extent of spatial and spectral effects in the pulse propagation phenomena is investigated and the analysis is applied to the GUINEVERE experiment. The multigroup cross section data is generated by the Monte Carlo SERPENT code and the neutronic evolution following the source pulse is simulated by a kinetic diffusion code. The results presented show that important spatial and spectral aspects need to be properly accounted for and that a detailed energy approach may be needed to adequately capture the physical response of the system to the pulse injection. (authors)
Hybrid simulation of reactor kinetics in CANDU reactors using a modal approach
International Nuclear Information System (INIS)
Monaghan, B.M.; McDonnell, F.N.; Hinds, H.W.T.
1980-01-01
A hybrid computer model for simulating the behaviour of large CANDU (Canada Deuterium Uranium) reactor cores is presented. The main dynamic variables are expressed in terms of weighted sums of a base set of spatial natural-mode functions with time-varying coefficients. This technique, known as the modal or synthesis approach, permits good three-dimensional representation of reactor dynamics and is well suited to hybrid simulation. The hybrid model provides improved man-machine interaction and real-time capability. The model was used in two applications. The first studies the transient that follows a loss of primary coolant and reactor shutdown; the second is a simulation of the dynamics of xenon, a fission product which has a high absorption cross-section for neutrons and thus has an important effect on reactor behaviour. The results of the hybrid computer simulation agree with those of an all-digital simulation to within 1% to 2%.
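The modal (synthesis) expansion can be illustrated on a toy 1-D diffusion slab, where the natural modes are sines and each time-varying coefficient decays with its own eigenvalue. The geometry, constants, and initial flux shape below are assumptions for illustration, not a CANDU core.

```python
# Modal expansion sketch: flux(x, t) = sum_n a_n(t) * psi_n(x), with
# psi_n(x) = sin(n*pi*x/L) for a bare 1-D slab and a_n(t) decaying
# exponentially with the mode's diffusion eigenvalue. Illustrative only.

import math

L, D, v = 100.0, 1.0, 1.0         # slab width, diffusion coefficient, speed
modes = range(1, 6)               # small truncated base set of natural modes

def eigenvalue(n):
    return v * D * (n * math.pi / L) ** 2   # decay rate of mode n

def coefficient(n):
    # projection of an assumed initial flux shape f(x) = x/L onto mode n:
    # a_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx  (midpoint rule)
    steps = 1000
    dx = L / steps
    return (2.0 / L) * sum((x / L) * math.sin(n * math.pi * x / L) * dx
                           for x in (dx * (i + 0.5) for i in range(steps)))

def flux(x, t):
    return sum(coefficient(n) * math.exp(-eigenvalue(n) * t)
               * math.sin(n * math.pi * x / L) for n in modes)

# higher modes die out first, so the late-time shape approaches mode 1
```

The appeal for hybrid simulation is that only a handful of ordinary differential equations for the coefficients must be integrated in real time, while the spatial shape information lives in the precomputed modes.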
A Fast Electro-Thermal Co-Simulation Modeling Approach for SiC Power MOSFETs
DEFF Research Database (Denmark)
Ceccarelli, Lorenzo; Bahman, Amir Sajjad; Iannuzzo, Francesco
2017-01-01
The purpose of this work is to propose a novel electro-thermal co-simulation approach for the new generation of SiC MOSFETs, by development of a PSpice-based compact and physical SiC MOSFET model including temperature dependency of several parameters and a Simulink-based thermal network. The PSpice...... the FEM simulation of the DUT’s structure, performed in ANSYS Icepack. A MATLAB script is used to process the simulation data and feed the needed settings and parameters back into the simulation. The parameters for a CREE 1.2 kV/30 A SiC MOSFET have been identified and the electro-thermal model has been...
Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach
Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan
2015-01-01
This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space
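The variable time-step idea described above can be sketched with two invented circular orbits and a simple screening threshold: the pair is propagated with a coarse step, and the step is refined down to one second whenever the separation falls below the threshold. This is an assumption-laden toy, not the actual tool's propagator.

```python
# Adaptive-step conjunction screening sketch: coarse 60 s sampling far apart,
# 1 s sampling near close approaches. Orbits, offsets, and the screening
# distance are illustrative assumptions.

import math

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def position(sma_km, inc_rad, t_s):
    """Point on a circular orbit (argument of latitude measured from the node)."""
    n = math.sqrt(MU / sma_km ** 3)          # mean motion, rad/s
    u = n * t_s
    return (sma_km * math.cos(u),
            sma_km * math.sin(u) * math.cos(inc_rad),
            sma_km * math.sin(u) * math.sin(inc_rad))

def separation(t_s):
    a = position(7000.0, 0.0, t_s)
    b = position(7000.0, math.radians(10.0), t_s + 30.0)  # 30 s along-track offset
    return math.dist(a, b)

def min_separation(t_end, coarse=60.0, fine=1.0, screen=500.0):
    """Scan with a coarse step; refine to 1 s whenever the pair is close."""
    t, best = 0.0, float("inf")
    while t <= t_end:
        d = separation(t)
        best = min(best, d)
        t += fine if d < screen else coarse
    return best

closest = min_separation(t_end=6000.0)
```

The payoff is the one highlighted in the abstract: second-scale resolution is spent only where conjunctions can occur, so maneuvers and close encounters are resolved without paying the fine-step cost everywhere.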
Simulation in Quality Management – An Approach to Improve Inspection Planning
Directory of Open Access Journals (Sweden)
H.-A. Crostack
2005-01-01
Full Text Available Production is a multi-step process involving many different articles produced in different jobs by various machining stations. Quality inspection has to be integrated in the production sequence in order to ensure the conformance of the products. The interactions between manufacturing processes and inspections are very complex, since three aspects (quality, cost, and time) should all be considered at the same time while determining the suitable inspection strategy. Therefore, a simulation approach was introduced to solve this problem. The simulator called QUINTE [the QUINTE simulator has been developed at the University of Dortmund in the course of two research projects funded by the German Federal Ministry of Economics and Labour (BMWA: Bundesministerium für Wirtschaft und Arbeit), the Arbeitsgemeinschaft industrieller Forschungsvereinigungen (AiF), Cologne/Germany and the Forschungsgemeinschaft Qualität, Frankfurt a.M./Germany] was developed to simulate the machining as well as the inspection. It can be used to investigate and evaluate the inspection strategies in manufacturing processes. The investigation into the application of the QUINTE simulator in industry was carried out at two pilot companies. The results show the validity of this simulator. An attempt to run QUINTE in a user-friendly environment, i.e., the commercial simulation software Arena®, is also described in this paper. NOTATION: QUINTE Qualität in der Teilefertigung (Quality in the manufacturing process)
The simulation of two-dimensional migration patterns - a novel approach
Energy Technology Data Exchange (ETDEWEB)
Villar, Heldio Pereira [Universidade de Pernambuco, Recife, PE (Brazil). Escola Politecnica]|[Centro Regional de Ciencias Nucleares, Recife, PE (Brazil)
1997-12-31
A novel approach to the problem of simulation of two-dimensional migration of solutes in saturated soils is presented. In this approach, the two-dimensional advection-dispersion equation is solved by finite-differences in a stepwise fashion, by employing the one-dimensional solution first in the direction of flow and then perpendicularly, using the same time increment in both cases. As the results of this numerical model were to be verified against experimental results obtained by radioactive tracer experiments, an attenuation factor, to account for the contribution of the gamma rays emitted by the whole plume of tracer to the readings of the adopted radiation detectors, was introduced into the model. The comparison between experimental and simulated concentration contours showed good agreement, thus establishing the feasibility of the approach proposed herein. (author) 6 refs., 6 figs.
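The stepwise splitting described above (a 1-D solve first along the flow direction, then perpendicular to it, with the same time increment) might be sketched as follows. The grid, velocities, and dispersion coefficients are invented, and a simple explicit upwind/central scheme stands in for the paper's finite-difference solution.

```python
# Operator-splitting sketch for 2-D advection-dispersion: an explicit 1-D
# upwind (advection) + central (dispersion) step is applied to every row
# (x-direction), then to every column (y-direction), with the same dt.
# All parameter values are illustrative assumptions.

nx, ny = 40, 20
dx = dy = 1.0
vx, vy = 0.5, 0.0          # pore velocity (flow along x)
Dx, Dy = 0.1, 0.05         # longitudinal and transverse dispersion
dt = 0.5                   # satisfies Courant and diffusion stability limits

def sweep(line, v, D, dh, dt):
    """One explicit 1-D advection-dispersion step on a single row/column."""
    n = len(line)
    out = line[:]
    for i in range(1, n - 1):
        adv = -v * (line[i] - line[i - 1]) / dh          # upwind (v > 0)
        dis = D * (line[i + 1] - 2 * line[i] + line[i - 1]) / dh ** 2
        out[i] = line[i] + dt * (adv + dis)
    return out

def step(c):
    # x-direction sweep on every row, then y-direction sweep on every column
    c = [sweep(row, vx, Dx, dx, dt) for row in c]
    for j in range(nx):
        col = [c[i][j] for i in range(ny)]
        col = sweep(col, vy, Dy, dy, dt)
        for i in range(ny):
            c[i][j] = col[i]
    return c

conc = [[0.0] * nx for _ in range(ny)]
conc[ny // 2][2] = 1.0      # instantaneous point injection
for _ in range(30):
    conc = step(conc)
```

After the sweeps, the plume has advected downstream and spread in both directions; an attenuation factor like the one in the abstract could then be applied to the resulting concentration field before comparison with detector readings.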
Fast simulation approaches for power fluctuation model of wind farm based on frequency domain
DEFF Research Database (Denmark)
Lin, Jin; Gao, Wen-zhong; Sun, Yuan-zhang
2012-01-01
This paper discusses one model developed by Riso, DTU, which is capable of simulating the power fluctuation of large wind farms in frequency domain. In the original design, the “frequency-time” transformations are time-consuming and might limit the computation speed for a wind farm of large size....... Under this background, this paper proposes four efficient approaches to accelerate the simulation speed. Two of them are based on physical model simplifications, and the other two improve the numerical computation. The case study demonstrates the efficiency of these approaches. The acceleration ratio...... is more than 300 times if all these approaches are adopted, in any low, medium and high wind speed test scenarios....
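The "frequency-time" transformation underlying such frequency-domain fluctuation models can be illustrated by sampling an assumed power spectral density, assigning random phases, and inverse-transforming to a time series. The PSD shape below is an invented low-pass form, not the Riso/DTU model, and a plain O(N²) inverse DFT stands in for the accelerated transforms discussed in the paper.

```python
# Frequency-domain fluctuation synthesis sketch: draw Fourier amplitudes from
# an assumed PSD, randomize phases, enforce Hermitian symmetry so the inverse
# transform is real, and inverse-DFT to a zero-mean fluctuation time series.

import cmath, math, random

N = 256                     # number of time samples
df = 1.0 / N                # frequency resolution (arbitrary units)

def psd(f):
    """Assumed low-pass fluctuation spectrum (illustrative shape)."""
    return 1.0 / (1.0 + (f / 0.02) ** (5.0 / 3.0))

random.seed(0)
spec = [0j] * N
for k in range(1, N // 2):
    amp = math.sqrt(psd(k * df) * df)
    phase = random.uniform(0.0, 2.0 * math.pi)
    spec[k] = amp * cmath.exp(1j * phase)
    spec[N - k] = spec[k].conjugate()   # Hermitian symmetry -> real series

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real
            for t in range(n)]

fluctuation = idft(spec)    # zero-mean power fluctuation around the set point
```

The cost of `idft` grows quadratically with the series length, which makes the acceleration strategies reported in the paper (model simplification and faster numerical transforms) worthwhile for large wind farms.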
Spady, A. A., Jr.; Kurbjun, M. C.
1978-01-01
This paper presents an overview of the flight management work being conducted using NASA Langley's oculometer system. Tests have been conducted in a Boeing 737 simulator to investigate pilot scan behavior during approach and landing for simulated IFR, VFR, motion versus no motion, standard versus advanced displays, and as a function of various runway patterns and symbology. Results of each of these studies are discussed. For example, results indicate that for the IFR approaches a difference in pilot scan strategy was noted for the manual versus coupled (autopilot) conditions. Also, during the final part of the approach when the pilot looks out-of-the-window he fixates on his aim or impact point on the runway and holds this point until flare initiation.
Teich, M.; Feistl, T.; Fischer, J.; Bartelt, P.; Bebi, P.; Christen, M.; Grêt-Regamey, A.
2013-12-01
Two-dimensional avalanche simulation software operating in three-dimensional terrain is widely used for hazard zoning and engineering to predict runout distances and impact pressures of snow avalanche events. Mountain forests are an effective biological protection measure; however, the protective capacity of forests to decelerate or even to stop avalanches that start within forested areas or directly above the treeline is seldom considered in this context. In particular, runout distances of small- to medium-scale avalanches are strongly influenced by the structural conditions of forests in the avalanche path. This varying decelerating effect has rarely been addressed or implemented in avalanche simulation. We present an evaluation and operationalization of a novel forest detrainment modeling approach implemented in the avalanche simulation software RAMMS. The new approach accounts for the effect of forests in the avalanche path by detraining mass, which leads to a deceleration and runout shortening of avalanches. The extracted avalanche mass caught behind trees stops immediately and, therefore, is instantly subtracted from the flow, and the momentum of the stopped mass is removed from the total momentum of the avalanche flow. This relationship is parameterized by the empirical detrainment coefficient K [Pa], which accounts for the braking power of different forest types per unit area. To define K dependent on specific forest characteristics, we simulated 40 well-documented small- to medium-scale avalanches which released in and ran through forests with varying K-values. Comparing two-dimensional simulation results with one-dimensional field observations for a high number of avalanche events and simulations manually is however time consuming and rather subjective. In order to process simulation results in a comprehensive and standardized way, we used a recently developed automatic evaluation and comparison method defining runout distances based on a pressure
An approach for coupled-code multiphysics core simulations from a common input
International Nuclear Information System (INIS)
Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; Pawlowski, Roger; Clarno, Kevin; Simunovic, Srdjan; Slattery, Stuart; Turner, John; Palmtag, Scott
2015-01-01
Highlights: • We describe an approach for coupled-code multiphysics reactor core simulations. • The approach can enable tight coupling of distinct physics codes with a common input. • Multi-code multiphysics coupling and parallel data transfer issues are explained. • The common input approach and how the information is processed is described. • Capabilities are demonstrated on an eigenvalue and power distribution calculation. - Abstract: This paper describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and setup the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak
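The common-input idea (one problem description preprocessed into a fully consistent input for each physics code) can be sketched as follows. The section and key names are invented for illustration and are not the actual VERAIn format.

```python
# Common-input preprocessing sketch: one problem description is parsed once,
# and a preprocessor emits a consistent input fragment for each coupled code.
# Shared quantities (e.g. pitch, power) are copied from the same source, so
# the generated inputs cannot disagree. All names are illustrative.

import json

common_input = {
    "assembly": {"lattice": "17x17", "pitch_cm": 1.26},
    "state": {"power_MW": 17.7, "inlet_temp_K": 565.0},
}

def preprocess(common):
    """Generate per-code inputs from one common description."""
    neutronics = {
        "lattice": common["assembly"]["lattice"],
        "pitch": common["assembly"]["pitch_cm"],
        "power": common["state"]["power_MW"],
    }
    thermal_hydraulics = {
        "pitch": common["assembly"]["pitch_cm"],     # shared, hence consistent
        "inlet_temperature": common["state"]["inlet_temp_K"],
        "power": common["state"]["power_MW"],
    }
    return {"neutronics.json": neutronics, "th.json": thermal_hydraulics}

files = preprocess(common_input)
for name, content in files.items():
    print(name, json.dumps(content))
```

Deriving both inputs from a single description is what removes the classic coupled-code failure mode of two hand-maintained input decks drifting out of agreement.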
Simulation and evaluation of urban rail transit network based on multi-agent approach
Directory of Open Access Journals (Sweden)
Xiangming Yao
2013-03-01
Full Text Available Purpose: Urban rail transit is a complex and dynamic system that is difficult to describe in a global mathematical model because of its scale and interactions. In order to analyze the spatial and temporal characteristics of passenger flow distribution and to evaluate the effectiveness of transportation strategies, a new and comprehensive method for depicting such a dynamic system is needed. This study therefore aims at using a simulation approach to solve this problem for subway networks. Design/methodology/approach: A simulation model based on a multi-agent approach is proposed, a method well suited to designing complex systems. The model includes the specificities of passengers' travelling behaviors and takes into account the interactions between travelers and trains. Findings: We developed an urban rail transit simulation tool to verify the validity and accuracy of this model, using real passenger flow data from the Beijing subway network as a case study; the results show that our simulation tool can be used to analyze the characteristics of passenger flow distribution and to evaluate operation strategies. Practical implications: The main implications of this work are to provide decision support for traffic management, for making train operation plans, and for dispatching measures in emergencies. Originality/value: A new and comprehensive method to analyze and evaluate subway networks is presented; the accuracy and computational efficiency of the model have been confirmed and meet the actual needs of large-scale networks.
Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction
Energy Technology Data Exchange (ETDEWEB)
Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.; /Chicago U., EFI; Alvarez Gonzalez, B.; /Cantabria Inst. of Phys.; Amerio, S.; /INFN, Padua; Amidei, D.; /Michigan U.; Anastassov, A.; /Northwestern U.; Annovi, A.; /Frascati; Antos, J.; /Comenius U.; Apollinari, G.; /Fermilab; Appel, J.; /Fermilab; Apresyan, A.; /Purdue U. /Waseda U.
2010-04-01
The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper they present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. This method is presented in the form of a measurement of the lifetime of the B^- using the mode B^- → D^0 π^-. The B^- lifetime is measured as τ(B^-) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.
Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach
Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.
2000-01-01
The objective of this study is to investigate the use of the second generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performances of the bi-orthogonal second generation wavelet transform and the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments are compared in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256³ and 512³ DNS of forced homogeneous turbulence (Re_λ = 168) and 256³ and 512³ DNS of decaying homogeneous turbulence (Re_λ = 55). It is found that bi-orthogonal second generation wavelets can be used for coherent vortex extraction. The results of a priori tests indicate that second generation wavelets have better compression and the residual field is closer to Gaussian. However, it was found that the use of second generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.
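The coherent/incoherent wavelet splitting can be illustrated with a plain 1-D orthogonal Haar transform as a simple stand-in for the bi-orthogonal second-generation wavelets studied above: transform, hard-threshold the small coefficients, and reconstruct the "coherent" part. The signal and threshold are assumptions.

```python
# Coherent/incoherent splitting sketch with an orthonormal 1-D Haar transform
# (a minimal stand-in for the wavelets in the study). Coefficients below the
# threshold form the "incoherent" residual; the rest reconstruct the
# "coherent" part. Signal length must be a power of two.

import math, random

def haar_forward(x):
    coeffs = []
    s = list(x)
    while len(s) > 1:
        avg = [(s[2 * i] + s[2 * i + 1]) / math.sqrt(2) for i in range(len(s) // 2)]
        det = [(s[2 * i] - s[2 * i + 1]) / math.sqrt(2) for i in range(len(s) // 2)]
        coeffs.append(det)
        s = avg
    coeffs.append(s)          # coarsest-scale average last
    return coeffs

def haar_inverse(coeffs):
    s = list(coeffs[-1])
    for det in reversed(coeffs[:-1]):
        s = [v for a, d in zip(s, det)
             for v in ((a + d) / math.sqrt(2), (a - d) / math.sqrt(2))]
    return s

random.seed(1)
n = 64
signal = [math.sin(2 * math.pi * i / 16) + 0.05 * random.gauss(0, 1)
          for i in range(n)]

coeffs = haar_forward(signal)
thresh = 0.2
coherent = [[d if abs(d) > thresh else 0.0 for d in level] for level in coeffs[:-1]]
coherent.append(coeffs[-1])                  # always keep the coarsest scale
reconstructed = haar_inverse(coherent)
residual = [a - b for a, b in zip(signal, reconstructed)]
```

Because the transform is orthonormal, the energy of the residual equals the energy of the discarded coefficients, which is the property compression comparisons like the one in the abstract rely on.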
Katiyar, Prateek; Divine, Mathew R; Kohlhofer, Ursula; Quintanilla-Martinez, Leticia; Schölkopf, Bernhard; Pichler, Bernd J; Disselhorst, Jonathan A
2017-04-01
In this study, we described and validated an unsupervised segmentation algorithm for the assessment of tumor heterogeneity using dynamic ¹⁸F-FDG PET. The aim of our study was to objectively evaluate the proposed method and make comparisons with compartmental modeling parametric maps and SUV segmentations using simulations of clinically relevant tumor tissue types. Methods: An irreversible two-tissue compartment model was implemented to simulate clinical and preclinical ¹⁸F-FDG PET time-activity curves using population-based arterial input functions (80 clinical and 12 preclinical) and the kinetic parameter values of 3 tumor tissue types. The simulated time-activity curves were corrupted with different levels of noise and used to calculate the tissue-type misclassification errors of spectral clustering (SC), parametric maps, and SUV segmentation. The utility of the inverse noise variance- and Laplacian score-derived frame weighting schemes before SC was also investigated. Finally, the SC scheme with the best results was tested on a dynamic ¹⁸F-FDG measurement of a mouse bearing subcutaneous colon cancer and validated using histology. Results: In the preclinical setup, the inverse noise variance-weighted SC exhibited the lowest misclassification errors (8.09%-28.53%) at all noise levels in contrast to the Laplacian score-weighted SC (16.12%-31.23%), unweighted SC (25.73%-40.03%), parametric maps (28.02%-61.45%), and SUV (45.49%-45.63%) segmentation. The classification efficacy of both weighted SC schemes in the clinical case was comparable to the unweighted SC. When applied to the dynamic ¹⁸F-FDG measurement of colon cancer, the proposed algorithm accurately identified densely vascularized regions from the rest of the tumor. In addition, the segmented regions and clusterwise average time-activity curves showed excellent correlation with the tumor histology. Conclusion: The promising results of SC mark its position as a robust tool for quantification of tumor
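The benefit of inverse-noise-variance frame weighting can be shown with a much simpler clustering stand-in. Here, weighted k-means (not the paper's spectral clustering) groups synthetic time-activity curves, down-weighting the noisy late frames so that the reliable early frames drive the assignment; all noise levels and uptake values are invented for illustration:

```python
import random

def weighted_kmeans(curves, weights, k=2, iters=20):
    """k-means on time-activity curves with a frame-weighted distance:
    frames with high noise variance get low weight, so clustering is
    driven by the reliable frames."""
    centers = [list(curves[0]), list(curves[-1])]  # deterministic init
    assign = [0] * len(curves)
    for _ in range(iters):
        for i, c in enumerate(curves):
            assign[i] = min(range(k), key=lambda j: sum(
                w * (a - b) ** 2
                for w, a, b in zip(weights, c, centers[j])))
        for j in range(k):
            members = [curves[i] for i in range(len(curves)) if assign[i] == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return assign

random.seed(42)
sigma = [0.1] * 5 + [2.0] * 5            # per-frame noise level (assumed known)
weights = [1.0 / s ** 2 for s in sigma]  # inverse-noise-variance frame weights

def tac(level):  # synthetic time-activity curve for one voxel
    return [level + random.gauss(0, s) for s in sigma]

curves = [tac(1.0) for _ in range(15)] + [tac(2.0) for _ in range(15)]
labels = weighted_kmeans(curves, weights)
```

With uniform weights the noisy late frames would swamp the small uptake difference; the inverse-variance weights recover a clean two-tissue partition.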
International Nuclear Information System (INIS)
Eggert, F
2010-01-01
This work describes the first truly automated solution for qualitative evaluation of EDS spectra in X-ray microanalysis. It uses a combination of integrated standardless quantitative evaluation, propagation of analytical errors into a final uncertainty, and parts of recently developed simulation approaches. Multiple spectra-reconstruction assessments and peak searches of the residual spectrum are powerful enough to solve the qualitative analytical question automatically for totally unknown specimens. The integrated quantitative assessment is useful to improve the confidence of the qualitative analysis. The qualitative element analysis thereby becomes part of an integrated quantitative spectrum evaluation, in which the quantitative results are used to iteratively refine element decisions, spectrum deconvolution, and simulation steps.
The simulation of solute transport: An approach free of numerical dispersion
International Nuclear Information System (INIS)
Carrera, J.; Melloni, G.
1987-01-01
The applicability of most algorithms for the simulation of solute transport is limited either by instability or by numerical dispersion, as shown by a review of existing methods. A new approach is proposed that is free of both problems. The method is based on the mixed Eulerian-Lagrangian formulation of the mass-transport problem, thus ensuring stability. Advection is simulated by a variation of reverse particle tracking that avoids the accumulation of interpolation errors, thus preventing numerical dispersion. The algorithm has been implemented in a one-dimensional code. Excellent results are obtained in comparison with an analytical solution. 36 refs., 14 figs., 1 tab
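The reverse-particle-tracking (semi-Lagrangian) idea is easy to sketch in 1D: each grid node is tracked back along the characteristic and the old concentration is interpolated at the departure point, which is unconditionally stable even above CFL = 1. This minimal version uses plain linear interpolation, which still smears the front slightly; the paper's variant is specifically designed to avoid accumulating such interpolation errors:

```python
def advect(c, v, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1D grid:
    track each node back along the characteristic x - v*dt and
    linearly interpolate the old field at the departure point."""
    n = len(c)
    out = [0.0] * n
    for i in range(n):
        x = (i - v * dt / dx) % n       # departure point in index units
        j = int(x)
        frac = x - j
        out[i] = (1 - frac) * c[j % n] + frac * c[(j + 1) % n]
    return out

n, dx, v, dt = 100, 1.0, 0.7, 1.0       # Courant number 0.7; stable
c = [1.0 if 20 <= i < 30 else 0.0 for i in range(n)]
for _ in range(50):
    c = advect(c, v, dt, dx)
# the pulse centre should have moved v*dt*50 = 35 cells downstream
peak = max(range(n), key=lambda i: c[i])
```

Mass is conserved exactly for a uniform velocity on a periodic grid, and linear interpolation is monotone, so no spurious over- or undershoots appear.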
A unified approach to building accelerator simulation software for the SSC
International Nuclear Information System (INIS)
Paxson, V.; Aragon, C.; Peggs, S.; Saltmarsh, C.; Schachinger, L.
1989-03-01
To adequately simulate the physics and control of a complex accelerator requires a substantial number of programs which must present a uniform interface to both the user and the internal representation of the accelerator. If these programs are to be truly modular, so that their use can be orchestrated as needed, the specification of both their graphical and data interfaces must be carefully designed. We describe the state of such SSC simulation software, with emphasis on addressing these uniform interface needs by using a standardized data set format and object-oriented approaches to graphics and modeling. 12 refs
A kinematic approach for efficient and robust simulation of the cardiac beating motion.
Directory of Open Access Journals (Sweden)
Takashi Ijiri
Computer simulation techniques for cardiac beating motions potentially have many applications and a broad audience. However, most existing methods incur enormous computational costs and often show unstable behavior for extreme parameter sets, which interrupts smooth simulation studies and makes the methods difficult to apply to interactive applications. To address this issue, we present an efficient and robust framework for simulating the cardiac beating motion. The global cardiac motion is generated by the accumulation of local myocardial fiber contractions. We compute such local-to-global deformations using a kinematic approach; we divide a heart mesh model into overlapping local regions, contract them independently according to fiber orientation, and compute a global shape that satisfies the contracted shapes of all local regions as closely as possible. A comparison between our method and a physics-based method showed that our method can generate motion very close to that of a physics-based simulation. Our kinematic method has high controllability; the simulated ventricle-wall-contraction speed can be easily adjusted to that of a real heart by controlling local contraction timing. We demonstrate that our method achieves a highly realistic beating motion of a whole heart in real time on a consumer-level computer. Our method provides an important step toward bridging the gap between cardiac simulations and interactive applications.
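The local-to-global blend can be illustrated on a 1D "fiber" instead of a heart mesh: overlapping regions each contract toward their own centroid, and every point then takes the average of the positions proposed by the regions it belongs to, a least-squares-style reconciliation. Region size and contraction ratio below are arbitrary illustrative values:

```python
def kinematic_contract(points, region_size, ratio):
    """Contract overlapping local regions toward their centroids by
    `ratio`, then set each point to the average of the positions that
    the regions containing it propose (a simple global blend)."""
    n = len(points)
    proposals = [[] for _ in range(n)]
    for start in range(0, n - region_size + 1):
        idx = range(start, start + region_size)
        c = sum(points[i] for i in idx) / region_size
        for i in idx:
            proposals[i].append(c + ratio * (points[i] - c))
    return [sum(p) / len(p) for p in proposals]

fiber = [float(i) for i in range(10)]        # a straight myofiber, length 9
contracted = kinematic_contract(fiber, 4, 0.8)
length = contracted[-1] - contracted[0]
```

Because every proposal is an order-preserving affine map, the blended fiber stays monotone (no fold-over), while its overall length shrinks, mirroring how the accumulated local contractions produce a stable global deformation.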
An applied artificial intelligence approach towards assessing building performance simulation tools
Energy Technology Data Exchange (ETDEWEB)
Yezioro, Abraham [Faculty of Architecture and Town Planning, Technion IIT (Israel); Dong, Bing [Center for Building Performance and Diagnostics, School of Architecture, Carnegie Mellon University (United States); Leite, Fernanda [Department of Civil and Environmental Engineering, Carnegie Mellon University (United States)
2008-07-01
With the development of modern computer technology, a large number of building energy simulation tools are available on the market. When choosing which simulation tool to use in a project, the user must consider the tool's accuracy and reliability in light of the building information they have at hand, which will serve as input for the tool. This paper presents an approach for comparing building performance simulation results with actual measurements, using artificial neural networks (ANN) for predicting building energy performance. Training and testing of the ANN were carried out with energy consumption data acquired over 1 week in the case building, called the Solar House. The predicted results show a good fit, with a mean absolute error of 0.9%. Moreover, four building simulation tools were selected in this study in order to compare their results with the ANN-predicted energy consumption: Energy-10, the Green Building Studio web tool, eQuest and EnergyPlus. The results showed that the more detailed simulation tools have the best simulation performance in terms of heating and cooling electricity consumption, within 3% mean absolute error. (author)
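The benchmarking metric can be sketched with a tiny stand-in for the ANN: a single linear neuron trained by gradient descent on synthetic hourly consumption data, scored with the mean absolute percentage error used to quote the 0.9% and 3% figures. The temperature-driven consumption model and all coefficients are invented for illustration:

```python
import random

def mape(pred, meas):
    """Mean absolute percentage error, the fit metric quoted in the study."""
    return 100.0 * sum(abs(p - m) / abs(m)
                       for p, m in zip(pred, meas)) / len(meas)

# hypothetical hourly consumption driven mainly by outdoor temperature
random.seed(3)
temps = [10 + 15 * random.random() for _ in range(168)]   # one week, hourly
meas = [5.0 + 0.8 * t + random.gauss(0, 0.2) for t in temps]

# one linear neuron trained by stochastic gradient descent (ANN stand-in)
w, b, lr = 0.0, 0.0, 0.001
for _ in range(2000):
    for t, y in zip(temps, meas):
        e = (w * t + b) - y
        w -= lr * e * t
        b -= lr * e
pred = [w * t + b for t in temps]
```

A real comparison would train on measured data, predict the week, and compute the same MAPE for each simulation tool's output against the measurements.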
Energy Technology Data Exchange (ETDEWEB)
Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)
2013-02-15
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
Festa, G.; Vilotte, J.; Scala, A.
2012-12-01
The M 9.0, 2011 Tohoku earthquake, along the North American-Pacific plate boundary, east of Honshu Island, yielded a complex broadband rupture extending southwards over 600 km along strike and triggering a large tsunami that ravaged the east coast of northern Japan. Strong motion and high-rate continuous GPS data, recorded all along the Japanese archipelago by the national seismic networks K-Net and Kik-net and the geodetic network Geonet, together with teleseismic data, indicated a complex frequency-dependent rupture. Low-frequency signals revealed a large slip patch (with slip of several tens of meters), extending along-dip over about 100 km, between the hypocenter and the trench, and 150 to 200 km along strike. This slip asperity was likely the cause of the localized tsunami source and of the large amplitude tsunami waves. High-frequency signals (f > 0.5 Hz) were instead generated close to the coast in the deeper part of the subduction zone, by at least four smaller size asperities, with possible repeated slip, and were mostly the cause of the ground shaking felt in the eastern part of Japan. The deep origin of the high-frequency radiation was also confirmed by teleseismic high-frequency back projection analysis. Intermediate-frequency analysis showed a transition between the shallow and deeper parts of the fault, with the rupture almost confined to a small stripe containing the hypocenter before propagating southward along strike, indicating a predominant in-plane rupture mechanism in the initial stage of the rupture itself. We numerically investigate the role of the geometry of the subduction interface and of the structural properties of the subduction zone on the broadband dynamic rupture and radiation of the Tohoku earthquake. Based upon the almost in-plane behavior of the rupture in its initial stage, 2D non-smooth spectral element dynamic simulations of the earthquake rupture propagation are performed including the non-planar and kink geometry of the subduction interface, together with bi-material interfaces
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional MC and the non-stationary MC approach. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
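The core of the super-ensemble idea can be sketched directly: draw each year's annual extreme from that year's own GEV distribution, take the maximum over the design lifespan, and repeat 10,000 times. The GEV parameters, linear trend in the location parameter, and percentile below are illustrative placeholders, not the study's calibrated values:

```python
import math
import random

def gev_sample(rng, mu, sigma, xi):
    """Inverse-CDF draw from a GEV(mu, sigma, xi) distribution."""
    u = rng.random()
    if abs(xi) < 1e-12:                      # Gumbel limit
        return mu - sigma * math.log(-math.log(u))
    return mu + sigma / xi * ((-math.log(u)) ** (-xi) - 1.0)

rng = random.Random(7)
lifespan = 50                        # design lifespan in years
mu0, sigma, xi = 100.0, 20.0, 0.1    # hypothetical GEV of annual peak flow
trend = 0.5                          # assumed climate-driven shift in mu per year

def lifetime_max(stationary):
    """Largest annual extreme experienced over one realized lifespan."""
    return max(gev_sample(rng, mu0 if stationary else mu0 + trend * y,
                          sigma, xi)
               for y in range(lifespan))

stat = sorted(lifetime_max(True) for _ in range(10000))
nonstat = sorted(lifetime_max(False) for _ in range(10000))
d_stat, d_nonstat = stat[9000], nonstat[9000]   # 90th-percentile design value
```

The non-stationary ensemble yields a noticeably larger design value, and lengthening `lifespan` widens the gap, reproducing the lifespan sensitivity the abstract highlights.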
Van der Vegte, W.F.
2006-01-01
In this paper, approaches for artifact-behavior simulation are reviewed. The motivation behind the survey is to explore available knowledge for the development of a new form of computer support for conceptual design to simulate use processes of consumer durables. The survey covers the simulation of
Fast simulation of non-linear pulsed ultrasound fields using an angular spectrum approach
DEFF Research Database (Denmark)
Du, Yigang; Jensen, Jørgen Arendt
2013-01-01
A fast non-linear pulsed ultrasound field simulation is presented. It is implemented based on an angular spectrum approach (ASA), which analytically solves the non-linear wave equation. The ASA solution to the Westervelt equation is derived in detail. The calculation speed is significantly increased compared to a numerical solution using an operator splitting method (OSM). The ASA has been modified and extended to pulsed non-linear ultrasound fields in combination with Field II, where any array transducer with arbitrary geometry, excitation, focusing and apodization can be simulated … with a center frequency of 5 MHz. The speed is increased approximately by a factor of 140 and the calculation time is 12 min with a standard PC, when simulating the second harmonic pulse at the focal point. For the second harmonic point spread function the full width error is 1.5% at 6 dB and 6.4% at 12 dB.
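The linear core of the angular spectrum approach is compact enough to sketch: decompose the source plane into plane waves with a Fourier transform, advance each component by exp(i·k_z·z), and transform back. This omits the paper's non-linear (Westervelt) extension and Field II coupling, uses a slow O(N²) DFT instead of an FFT, and picks an illustrative 1D aperture, but the 5 MHz / water parameters match the paper's example regime:

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive discrete Fourier transform (an FFT would be used in practice)."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[m] * cmath.exp(s * 2j * math.pi * k * m / n)
               for m in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def asa_propagate(p0, dx, k0, z):
    """Linear angular spectrum step: advance every plane-wave component
    by exp(i*kz*z); components with |kx| > k0 are evanescent and decay."""
    n = len(p0)
    spec = dft(p0)
    for i in range(n):
        kx = 2 * math.pi * (i if i <= n // 2 else i - n) / (n * dx)
        kz = cmath.sqrt(k0 * k0 - kx * kx)   # imaginary for evanescent modes
        spec[i] *= cmath.exp(1j * kz * z)
    return dft(spec, inverse=True)

f0, c = 5e6, 1500.0                  # 5 MHz in water, as in the paper's example
k0 = 2 * math.pi * f0 / c
n, dx = 128, 5e-5                    # 50 um lateral sampling
p0 = [1.0 if 40 <= i < 88 else 0.0 for i in range(n)]  # plane piston aperture
p1 = asa_propagate(p0, dx, k0, 5e-3)  # propagate 5 mm
```

Propagating components keep their modulus while evanescent ones decay, so the total energy never grows; this single multiplicative step per plane is what makes ASA so much faster than marching an operator-splitting solver.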
Least squares approach for initial data recovery in dynamic data-driven applications simulations
Douglas, C.
2010-12-01
In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.
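For a scalar initial condition observed through linear measurements, the penalized least-squares update has a closed form, which makes the role of the penalization term easy to see. The measurement model d_i ≈ g_i·m and all numbers below are illustrative:

```python
import random

def update_initial_data(m_prior, obs, lam):
    """One least-squares update of a scalar initial condition m.
    Each observation is (g_i, d_i) with d_i ~ g_i * m; the objective
        sum_i (g_i*m - d_i)^2 + lam * (m - m_prior)^2
    balances measurement misfit against the prior penalization term
    and is minimized in closed form below."""
    num = sum(g * d for g, d in obs) + lam * m_prior
    den = sum(g * g for g, _ in obs) + lam
    return num / den

rng = random.Random(2)
m_true = 3.0
gains = (0.5, 1.0, 1.5, 2.0)
obs = [(g, g * m_true + rng.gauss(0, 0.1)) for g in gains]

m_weak = update_initial_data(1.0, obs, lam=0.01)     # trust the measurements
m_strong = update_initial_data(1.0, obs, lam=100.0)  # trust the prior
```

Varying `lam` during the simulation, as the paper does with its penalization term, smoothly interpolates between the prior-dominated and data-dominated estimates.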
A probabilistic approach for debris impact risk with numerical simulations of debris behaviors
International Nuclear Information System (INIS)
Kihara, Naoto; Matsuyama, Masafumi; Fujii, Naoki
2013-01-01
We propose a probabilistic approach for evaluating the impact risk of tsunami debris through Monte Carlo simulations with a combined system comprising a depth-averaged two-dimensional shallow water model and a discrete element model customized to simulate the motions of floating objects such as vessels. In the proposed method, first, probabilistic tsunami hazard analysis is carried out, and the exceedance probability of tsunami height and numerous tsunami time series for various hazard levels on the offshore side of a target site are estimated. Second, a characteristic tsunami time series for each hazard level is created by cluster analysis. Third, using the Monte Carlo simulation model, the probability of debris impact with the buildings of interest and the exceedance probability of debris impact speed are evaluated. (author)
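The third, Monte Carlo step can be caricatured with straight-line drift in place of the coupled shallow-water / discrete-element runs: sample an initial vessel position and current, propagate it to the building line, and tally hits and impact speeds. The geometry and velocity distributions are entirely invented for illustration:

```python
import random

def impact_prob(runs, seed=0):
    """Crude stand-in for the coupled hydrodynamic/debris runs: a vessel
    drifts from a random mooring point under a random inundation current;
    we count hits on a target building footprint and collect hit speeds."""
    rng = random.Random(seed)
    hits, speeds = 0, []
    for _ in range(runs):
        x = rng.uniform(0, 200)          # initial onshore distance [m]
        y = rng.uniform(-50, 50)         # initial lateral position [m]
        vx = rng.uniform(2, 8)           # onshore current speed [m/s]
        vy = rng.gauss(0, 1)             # lateral drift [m/s]
        t = (300 - x) / vx               # time to reach building line x=300 m
        if -20 <= y + vy * t <= 20:      # building footprint half-width 20 m
            hits += 1
            speeds.append(vx)
    speeds.sort()
    v90 = speeds[int(0.9 * len(speeds))] if speeds else None
    return hits / runs, v90

p_impact, v90 = impact_prob(20000)
```

The two returned quantities mirror the paper's outputs: a debris impact probability for the building and an exceedance-style statistic (here the 90th-percentile impact speed).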
Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach
Directory of Open Access Journals (Sweden)
Sungwook Kim
2014-01-01
Mobile ad hoc networks represent systems of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve mutual advantages in a hostile, dynamic, real-world network situation, and is thus a powerful method for finding an effective solution to the conflicting demands of the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, over the existing schemes.
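A minimal simulated-annealing route selector illustrates the mechanism: improvements are always accepted, worse routes are accepted with probability exp(-Δ/T) while the temperature is high, and cooling locks in a good route. The toy topology, the cost function trading hop count against residual relay energy, and the annealing schedule are all illustrative choices, not the paper's algorithm:

```python
import math
import random

def simulated_annealing(paths, cost, t0=5.0, cooling=0.95, steps=200, seed=1):
    """Anneal over candidate routes: accept any improvement, accept a
    worse route with probability exp(-delta/T) so early exploration can
    escape local minima, then cool T to settle on a good route."""
    rng = random.Random(seed)
    cur = rng.choice(paths)
    best, t = cur, t0
    for _ in range(steps):
        cand = rng.choice(paths)
        delta = cost(cand) - cost(cur)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur = cand
        if cost(cur) < cost(best):
            best = cur
        t *= cooling
    return best

# toy MANET: candidate routes from A to D, scored by hops and battery state
paths = [("A", "B", "D"), ("A", "C", "D"), ("A", "B", "C", "D"),
         ("A", "E", "D")]
energy = {"A": 1.0, "B": 0.2, "C": 0.9, "D": 1.0, "E": 0.8}

def cost(p):
    hops = len(p) - 1
    bottleneck = min(energy[n] for n in p)   # the weakest relay drains first
    return hops + 2.0 * (1.0 - bottleneck)

best = simulated_annealing(paths, cost)
```

Here the route through the well-charged relay C wins even though a shorter-looking route through the nearly drained node B exists, which is exactly the kind of conflicting-objective trade-off the scheme targets.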
Microstructural and magnetic properties of thin obliquely deposited films: A simulation approach
Energy Technology Data Exchange (ETDEWEB)
Solovev, P.N., E-mail: platon.solovev@gmail.com [Kirensky Institute of Physics, Siberian Branch of the Russian Academy of Sciences, 50/38, Akademgorodok, Krasnoyarsk 660036 (Russian Federation); Siberian Federal University, 79, pr. Svobodnyi, Krasnoyarsk 660041 (Russian Federation); Izotov, A.V. [Kirensky Institute of Physics, Siberian Branch of the Russian Academy of Sciences, 50/38, Akademgorodok, Krasnoyarsk 660036 (Russian Federation); Siberian Federal University, 79, pr. Svobodnyi, Krasnoyarsk 660041 (Russian Federation); Belyaev, B.A. [Kirensky Institute of Physics, Siberian Branch of the Russian Academy of Sciences, 50/38, Akademgorodok, Krasnoyarsk 660036 (Russian Federation); Siberian Federal University, 79, pr. Svobodnyi, Krasnoyarsk 660041 (Russian Federation); Reshetnev Siberian State Aerospace University, 31, pr. Imeni Gazety “Krasnoyarskii Rabochii”, Krasnoyarsk 660014 (Russian Federation)
2017-05-01
The relation between the microstructural and magnetic properties of thin obliquely deposited films has been studied by means of numerical techniques. Using our developed simulation code, based on a ballistic deposition model and a Fourier space approach, we have investigated the dependence of the magnetometric tensor components and magnetic anisotropy parameters on the deposition angle of the films. A modified Netzelmann approach has been employed to study structural and magnetic parameters of an isolated column in samples with tilted columnar microstructure. The reliability and validity of the numerical methods used are confirmed by the good agreement of the calculation results with each other, as well as with our experimental data obtained by ferromagnetic resonance measurements of obliquely deposited thin Ni₈₀Fe₂₀ films. The combination of these numerical methods can be used to design a magnetic film with a desired value of uniaxial magnetic anisotropy and to extract the structure of an obliquely deposited film from magnetic measurements alone.
Highlights:
• We present a simulation approach to study the relation between the structural and magnetic properties of oblique films.
• The calculated dependence of magnetic anisotropy on deposition angle accords well with experiment.
• A modified Netzelmann approach is proposed, allowing computation of the magnetic and structural parameters of an isolated column.
• The proposed approach can be used for theoretical studies and for the characterization of oblique films.
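The ballistic deposition ingredient can be sketched on a 2D lattice: particles descend along straight oblique trajectories and freeze at first contact, so at large deposition angles self-shadowing leaves voids (the origin of the columnar microstructure). This toy omits the paper's Fourier-space magnetics entirely, the sticking rule is simplified, and all lattice parameters are illustrative:

```python
import math
import random

def deposit(theta_deg, n_particles=4000, w=100, h=300, seed=5):
    """Lattice ballistic deposition at incidence angle theta: particles
    move one row down and tan(theta) columns sideways per step, sticking
    at the last free cell before hitting the deposit or the substrate.
    Returns the packing density of the deposit below its local surface."""
    rng = random.Random(seed)
    occ = [[False] * w for _ in range(h)]
    slope = math.tan(math.radians(theta_deg))
    for _ in range(n_particles):
        xf = rng.uniform(0, w)
        x, y = int(xf) % w, h - 1
        while True:
            xf += slope
            nx, ny = int(xf) % w, y - 1
            if ny < 0 or occ[ny][nx]:
                occ[y][x] = True
                break
            x, y = nx, ny
    heights = [max((yy for yy in range(h) if occ[yy][xx]), default=-1) + 1
               for xx in range(w)]
    filled = sum(row.count(True) for row in occ)
    return filled / sum(heights)

dense = deposit(0)     # normal incidence: columns fill completely
porous = deposit(70)   # oblique incidence: shadowing leaves voids
```

Comparing the two runs reproduces the qualitative trend the paper exploits: larger deposition angles give a more porous, column-dominated film.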
Directory of Open Access Journals (Sweden)
Z. Hashemiyan
2016-01-01
Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort.
Approaches to simulate channel and fuel behaviour using CATHENA and ELOCA
International Nuclear Information System (INIS)
Sabourin, G.; Huynh, H.M.
1996-01-01
This paper documents a new approach in which the detailed fuel and channel thermalhydraulic calculations are performed by an integrated code: the thermalhydraulic code CATHENA is coupled with the fuel code ELOCA. The scenario used in the simulations is a 100% pump suction break, because its power pulse is large and leads to high sheath temperatures. The results show that coupling the two codes at each time step can have an important effect on parameters such as the sheath, fuel and pressure tube temperatures. In summary, this demonstrates that this approach can model the channel and fuel behaviour under postulated large LOCAs more adequately. (author)
Packo, P.; Staszewski, W. J.; Uhl, T.
2016-01-01
Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808
Engineering and training simulators: A combined approach for nuclear plant construction projects
International Nuclear Information System (INIS)
Harnois, Olivier; Gain, Pascal; Bartak, Jan; Gathmann, Ralf
2007-01-01
Simulation technologies have always been widely used in nuclear applications, but with a clear division between engineering applications, using highly validated codes run in batch mode, and training purposes, where real-time computation is a mandatory requirement. Thanks to the flexibility of modern simulation technology and the increased performance of computers, it is now possible to develop nuclear power plant simulators that can be used both for engineering and training purposes. In recent years, the revival of the nuclear industry has given rise to a number of new construction or plant completion projects in which the application of this combined approach would result in decisive improvements in plant construction lead times, better project control and cost optimization. The simulator development is to be executed in a step-wise approach, scheduled in parallel with the plant design and construction phases. During a first step, the simulator will model the plant nuclear island systems plus the corresponding instrumentation and control, specific malfunctions and local commands. It can then be used for engineering activities defining and validating the plant operating strategies in case of incidents or accidents. The simulator executive station and operator station will be in prototype version, with an interface imagery enabling monitoring and control of the simulator. Availability of such a simulation platform leads to a significant increase in the efficiency of the engineering work, the possibility to validate basic design hypotheses, and the early detection of defects and conflicts. The second phase will consist of the fully detailed simulation of the main control room plant supervision and control MMI, taking into account I and C control loop detailed design improvements, while having sufficient fidelity to be suitable for future operator training. Its use will enable the engineering units not only to specify and validate normal, incident and accident detailed plant
An open, object-based modeling approach for simulating subsurface heterogeneity
Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.
2017-12-01
Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.
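The object-based idea can be sketched in two dimensions: start from background strata, then superimpose randomly placed elliptical lenses carrying their own facies codes, with smaller units overprinting larger ones in the spirit of HYVR's hierarchical architectural elements. The grid size, lens geometry statistics, and facies codes below are invented for illustration and are not HYVR's actual parameterization:

```python
import random

def simulate_facies(nx=120, nz=40, n_lenses=25, seed=11):
    """Object-based sketch: horizontal background strata (facies 1 and 2)
    overprinted by elliptical lenses (facies 3 and 4), mimicking a
    hierarchical, architecture-based facies model."""
    rng = random.Random(seed)
    grid = [[1 if z < nz // 2 else 2 for _ in range(nx)] for z in range(nz)]
    for _ in range(n_lenses):
        cx, cz = rng.uniform(0, nx), rng.uniform(0, nz)
        a, b = rng.uniform(5, 15), rng.uniform(1, 4)   # lens half-axes
        facies = rng.choice([3, 4])                    # e.g. gravel / sand fill
        for z in range(nz):
            for x in range(nx):
                if ((x - cx) / a) ** 2 + ((z - cz) / b) ** 2 <= 1.0:
                    grid[z][x] = facies
    return grid

grid = simulate_facies()
```

A full simulator would additionally assign hydraulic conductivity and anisotropy within each object; the resulting facies grid is the kind of field used both for flow/transport runs and as a training image for multiple-point geostatistics.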
Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing
2015-01-01
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional com...
Freebairn, Louise; Rychetnik, Lucie; Atkinson, Jo-An; Kelly, Paul; McDonnell, Geoff; Roberts, Nick; Whittall, Christine; Redman, Sally
2017-10-02
Evidence-based decision-making is an important foundation for health policy and service planning decisions, yet there remain challenges in ensuring that the many forms of available evidence are considered when decisions are being made. Mobilising knowledge for policy and practice is an emergent process, and one that is highly relational, often messy and profoundly context dependent. Systems approaches, such as dynamic simulation modelling, can be used to examine both complex health issues and the context in which they are embedded, and to develop decision support tools. This paper reports on the novel use of participatory simulation modelling as a knowledge mobilisation tool in Australian real-world policy settings. We describe how this approach combined systems science methodology and some of the core elements of knowledge mobilisation best practice. We describe the strategies adopted in three case studies to address both technical and socio-political issues, and compile the experiential lessons derived. Finally, we consider the implications of these knowledge mobilisation case studies and provide evidence for the feasibility of this approach in policy development settings. Participatory dynamic simulation modelling builds on contemporary knowledge mobilisation approaches for health stakeholders to collaborate and explore policy and health service scenarios for priority public health topics. The participatory methods place the decision-maker at the centre of the process and embed deliberative methods and co-production of knowledge. The simulation models function as health policy and programme dynamic decision support tools that integrate diverse forms of evidence, including research evidence, expert knowledge and localised contextual information. Further research is underway to determine the impact of these methods on health service decision-making.
Comparison by Simulation of Different Approaches to Urban Traffic Control
Czech Academy of Sciences Publication Activity Database
Přikryl, Jan; Tichý, T.; Bělinová, Z.; Kapitán, J.
2012-01-01
Roč. 5, č. 4 (2012), s. 26-30 ISSN 1899-8208 R&D Projects: GA TA ČR TA01030603 Institutional support: RVO:67985556 Keywords : traffic * ITS * telematics * urban traffic control Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2012/AS/prikryl-comparision by simulation of different approaches to the urban traffic control.pdf
Energy Technology Data Exchange (ETDEWEB)
Guedes, Solange da Silva
1998-07-01
Advances in petroleum reservoir descriptions have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is the use of pseudo relative permeabilities, the most widely used being the Kyte and Berry method (1975). In this work, a multi-scale computational model for multiphase flow is proposed that implicitly treats the upscaling without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale, it is possible to achieve results with a coarser grid without expensive computations on a fine-grid model. The main advantage of this new procedure is to treat the upscaling step implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach with those of the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)
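For intuition about why upscaling is needed at all, the classical single-phase bounds are a useful reference point: flow across layered heterogeneity is governed by the harmonic mean of the fine-scale permeabilities, flow along the layers by the arithmetic mean. The local-problem approach in the paper generalizes this idea; the layer values below are illustrative:

```python
def upscale_serial(fine_k):
    """Effective permeability for flow across layers in series:
    the harmonic mean (low-permeability layers dominate)."""
    return len(fine_k) / sum(1.0 / k for k in fine_k)

def upscale_parallel(fine_k):
    """Effective permeability for flow along layers in parallel:
    the arithmetic mean (high-permeability layers dominate)."""
    return sum(fine_k) / len(fine_k)

layers = [100.0, 10.0, 1.0, 10.0, 100.0]   # mD; a low-perm barrier inside
k_cross = upscale_serial(layers)
k_along = upscale_parallel(layers)
```

The order-of-magnitude gap between the two means for the same fine-scale data shows why a single coarse value (or a pseudo function) must encode the local flow geometry, which is precisely what solving local subdomain problems accomplishes.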
A novel approach to simulate gene-environment interactions in complex diseases
Directory of Open Access Journals (Sweden)
Nicodemi Mario
2010-01-01
Full Text Available Abstract Background Complex diseases are multifactorial traits caused by both genetic and environmental factors. They represent the major part of human diseases and include those with the largest prevalence and mortality (cancer, heart disease, obesity, etc.). Despite the large amount of information that has been collected about both genetic and environmental risk factors, there are few examples of studies on their interactions in the epidemiological literature. One reason can be the incomplete knowledge of the power of statistical methods designed to search for risk factors and their interactions in these data sets. An improvement in this direction would lead to a better understanding and description of gene-environment interactions. To this aim, a possible strategy is to challenge the different statistical methods against data sets where the underlying phenomenon is completely known and fully controllable, for example simulated ones. Results We present a mathematical approach that models gene-environment interactions. By this method it is possible to generate simulated populations having gene-environment interactions of any form, involving any number of genetic and environmental factors and also allowing non-linear interactions such as epistasis. In particular, we implemented a simple version of this model in a Gene-Environment iNteraction Simulator (GENS, a tool designed to simulate case-control data sets where a one gene-one environment interaction influences the disease risk. The main aim has been to allow the input of population characteristics by using standard epidemiological measures and to implement constraints to make the simulator behaviour biologically meaningful. Conclusions By the multi-logistic model implemented in GENS it is possible to simulate case-control samples of complex diseases where gene-environment interactions influence the disease risk. The user has full control of the main characteristics of the simulated population and a Monte
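As a sketch of the kind of data a multi-logistic one gene-one environment model generates (the coefficients, allele frequency and exposure prevalence below are arbitrary assumptions, not GENS's actual interface or parameterisation):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, p_allele=0.3, p_env=0.4, b0=-2.0, bg=0.3, be=0.5, bge=0.8):
    """Simulate disease status under a logistic model with a
    gene-environment interaction term (bge couples G and E)."""
    g = rng.binomial(2, p_allele, n)        # genotype under Hardy-Weinberg
    e = rng.binomial(1, p_env, n)           # binary environmental exposure
    logit = b0 + bg * g + be * e + bge * g * e
    risk = 1.0 / (1.0 + np.exp(-logit))     # individual disease probability
    d = rng.binomial(1, risk)               # disease status
    return g, e, d

g, e, d = simulate(100_000)
```

Cases and controls can then be drawn from `d == 1` and `d == 0`; with a positive interaction coefficient, exposed carriers show markedly higher prevalence than unexposed non-carriers.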
International Nuclear Information System (INIS)
Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter
2012-01-01
This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies in a systematic manner optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.
Energy Technology Data Exchange (ETDEWEB)
Brunet, Robert; Cortes, Daniel [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Guillen-Gosalbez, Gonzalo [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Jimenez, Laureano [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Boer, Dieter [Departament d' Enginyeria Mecanica, Escola Tecnica Superior d' Enginyeria, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007, Tarragona (Spain)
2012-12-15
This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies in a systematic manner optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.
Directory of Open Access Journals (Sweden)
Ye. S. Sherina
2014-01-01
Full Text Available This research aims to study the peculiarities that arise in numerical simulation of the electrical impedance tomography (EIT) problem. Static EIT image reconstruction is sensitive to measurement noise and approximation error. Special consideration has been given to reducing the approximation error, which originates from numerical implementation drawbacks. This paper presents in detail two numerical approaches for solving the EIT forward problem. The finite volume method (FVM) on an unstructured triangular mesh is introduced. For comparison, a forward solver based on the finite element method (FEM), which has gained the most popularity among researchers, was also implemented. The calculated potential distribution for an assumed initial conductivity distribution has been compared to the analytical solution of a test Neumann boundary problem and to the results of problem simulation by means of the ANSYS FLUENT commercial software. Two approaches to linearized EIT image reconstruction are discussed. Reconstruction of the conductivity distribution is an ill-posed problem, typically requiring a large amount of computation, and is resolved by minimization techniques. The objective function to be minimized is constructed from the measured voltage and the calculated boundary voltage on the electrodes. A classical modified Newton type iterative method and the stochastic differential evolution method are employed. A software package has been developed for the problem under investigation. Numerical tests were conducted on simulated data. The obtained results could be helpful to researchers tackling the hardware and software issues for medical applications of EIT.
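In its linearized Tikhonov form, the modified Newton reconstruction step reduces to solving a regularized normal system for the conductivity update. A minimal sketch follows; in practice the Jacobian J would come from the FVM or FEM forward solver, whereas here a stand-in random matrix is used for illustration.

```python
import numpy as np

def gauss_newton_step(J, v_meas, v_calc, lam=1e-3):
    """One regularized (modified Newton) update for linearized EIT:
    minimize ||J d - r||^2 + lam ||d||^2 over the conductivity
    update d, where r is the boundary-voltage residual."""
    r = v_meas - v_calc
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ r)

rng = np.random.default_rng(0)
J = rng.normal(size=(40, 20))          # stand-in sensitivity matrix
d_true = rng.normal(size=20)
v_meas = J @ d_true                    # noiseless synthetic measurements
d_hat = gauss_newton_step(J, v_meas, np.zeros(40), lam=1e-10)
```

With noiseless data and negligible regularization the update recovers the true perturbation; with real, noisy data the choice of `lam` controls the noise/resolution trade-off the abstract alludes to.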
System-of-Systems Approach for Integrated Energy Systems Modeling and Simulation: Preprint
Energy Technology Data Exchange (ETDEWEB)
Mittal, Saurabh; Ruth, Mark; Pratt, Annabelle; Lunacek, Monte; Krishnamurthy, Dheepak; Jones, Wesley
2015-08-21
Today’s electricity grid is the most complex system ever built, and the future grid is likely to be even more complex because it will incorporate distributed energy resources (DERs) such as wind, solar, and various other sources of generation and energy storage. The complexity is further augmented by the possible evolution to new retail market structures that provide incentives to owners of DERs to support the grid. To understand and test new retail market structures and technologies such as DERs, demand-response equipment, and energy management systems while providing reliable electricity to all customers, an Integrated Energy System Model (IESM) is being developed at NREL. The IESM is composed of a power flow simulator (GridLAB-D), home energy management systems implemented using GAMS/Pyomo, a market layer, and hardware-in-the-loop simulation (testing appliances such as HVAC, dishwasher, etc.). The IESM is a system-of-systems (SoS) simulator wherein the constituent systems are brought together in a virtual testbed. We will describe an SoS approach for developing a distributed simulation environment. We will elaborate on the methodology and the control mechanisms used in the co-simulation illustrated by a case study.
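The co-simulation control loop in such an SoS testbed boils down to a time-stepped exchange between constituent simulators. A toy sketch with mock models follows; the feedback coefficient and load-shedding rule are invented for illustration, whereas the real IESM couples GridLAB-D, GAMS/Pyomo-based managers and hardware-in-the-loop.

```python
def power_flow(load_kw):
    """Mock grid model: per-unit voltage sags slightly with feeder load
    (hypothetical linear sensitivity)."""
    return 1.0 - 0.002 * load_kw

def hems(voltage, base_load=5.0):
    """Mock home energy manager: sheds 20% of load on low voltage."""
    return base_load * (0.8 if voltage < 0.95 else 1.0)

load = 5.0                       # kW per home, initial guess
for t in range(24):              # one simulated day, hourly steps
    v = power_flow(load * 10)    # ten identical homes share the feeder
    load = hems(v)               # each manager reacts to the grid state
```

Each step hands the grid state to the device-level simulators and feeds their decisions back, exactly the exchange a co-simulation coordinator mediates; here the loop settles to a shed-load equilibrium.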
International Nuclear Information System (INIS)
Fawley, William M.; Vay, Jean-Luc
2010-01-01
Numerical simulation of some systems containing charged particles with highly relativistic directed motion can be sped up by orders of magnitude by choice of the proper Lorentz-boosted frame. Orders-of-magnitude speedups have been demonstrated for first-principles simulations of laser-plasma accelerators, free electron lasers, and particle beams interacting with electron clouds. Here we address the application of the Lorentz-boosted frame approach to coherent synchrotron radiation (CSR), which can be strongly present in bunch compressor chicanes. CSR is particularly relevant to the next generation of x-ray light sources and is simultaneously difficult to simulate in the lab frame because of the large ratio of scale lengths. It can increase both the incoherent and coherent longitudinal energy spread, effects that often lead to an increase in transverse emittance. We have adapted the WARP code to simulate CSR emission along a simple dipole bend. We present some scaling arguments for the possible computational speed-up factor in the boosted frame and initial 3D simulation results.
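The scale-length argument can be made concrete: in a frame boosted by gamma_b along the beam, the structure length Lorentz-contracts while the forward-radiated wavelength is relativistically dilated, compressing their ratio by roughly gamma_b^2 (1 + beta_b). A back-of-envelope sketch (the bend length and wavelength are illustrative numbers, not values from the paper):

```python
import math

def scale_ratio(L_bend, lam_rad, gamma_b):
    """Ratio of structure length to radiation wavelength, in the lab
    frame and in a frame boosted by gamma_b along the beam direction."""
    beta = math.sqrt(1.0 - 1.0 / gamma_b**2)
    lab = L_bend / lam_rad
    boosted = (L_bend / gamma_b) / (lam_rad * gamma_b * (1.0 + beta))
    return lab, boosted

# e.g. a 1 m dipole radiating at 10 microns, with a modest boost of 10
lab, boosted = scale_ratio(1.0, 10e-6, 10.0)
```

The disparate scales that must be resolved simultaneously in the lab frame come orders of magnitude closer together in the boosted frame, which is the source of the computational speedup.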
Numerical and experimental approaches to simulate soil clogging in porous media
Kanarska, Yuliya; LLNL Team
2012-11-01
Failure of a dam by erosion ranks among the most serious accidents in civil engineering. The best way to prevent internal erosion is to use adequate granular filters in the transition areas where important hydraulic gradients can appear. In case of cracking and erosion, if the filter is capable of retaining the eroded particles, the crack will seal and the dam's safety will be ensured. A finite element numerical solution of the Navier-Stokes equations for fluid flow, together with a Lagrange multiplier technique for solid particles, was applied to the simulation of soil filtration. The numerical approach was validated through comparison of numerical simulations with the experimental results of base soil particle clogging in the filter layers performed at ERDC. The numerical simulation correctly predicted flow and pressure decay due to particle clogging. The base soil particle distribution was almost identical to that measured in the laboratory experiment. To gain a more precise understanding of soil transport in granular filters, we investigated the sensitivity of particle clogging mechanisms to various aspects such as the particle size ratio, the amplitude of the hydraulic gradient, the particle concentration and the contact properties. By averaging the results derived from the grain-scale simulations, we investigated how those factors affect the semi-empirical multiphase model parameters in the large-scale simulation tool. The Department of Homeland Security Science and Technology Directorate provided funding for this research.
The Development of a 3D LADAR Simulator Based on a Fast Target Impulse Response Generation Approach
Al-Temeemy, Ali Adnan
2017-09-01
A new laser detection and ranging (LADAR) simulator has been developed, using MATLAB and its graphical user interface, to simulate direct detection time-of-flight LADAR systems and to produce 3D simulated scanning images under a wide variety of conditions. This simulator models each stage from the laser source to data generation and can be considered an efficient simulation tool to use when developing LADAR systems and their data processing algorithms. The novel approach proposed for this simulator is to generate the actual target impulse response. This approach is fast and able to handle high scanning requirements without the loss of fidelity that usually accompanies increases in speed. This leads to a more efficient LADAR simulator and opens up the possibility of simulating LADAR beam propagation more accurately by using a large number of laser footprint samples. The approach is to select only the parts of the target that lie in the laser beam's angular field, by mathematically deriving the required equations and calculating the target angular ranges. The performance of the new simulator has been evaluated under different scanning conditions; the results show significant increases in processing speed in comparison to conventional approaches, which serve as the baseline in this study. The results also show the simulator's ability to reproduce phenomena related to the scanning process, such as noise type, scanning resolution and laser beam width.
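The core speed trick, selecting only the target samples that fall inside the beam's angular field, can be sketched with a simple azimuth/elevation mask. This is a hypothetical stand-in for the simulator's derived equations; it assumes a scanner at the origin and ignores azimuth wraparound for brevity.

```python
import numpy as np

def points_in_beam(pts, az0, el0, half_width):
    """Keep only target samples whose azimuth and elevation lie within
    half_width radians of the beam axis (az0, el0)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    az = np.arctan2(y, x)
    el = np.arctan2(z, np.hypot(x, y))
    mask = (np.abs(az - az0) <= half_width) & (np.abs(el - el0) <= half_width)
    return pts[mask]

pts = np.array([[10.0, 0.0, 0.0],    # on the beam axis
                [10.0, 0.1, 0.0],    # ~0.01 rad off axis
                [0.0, 10.0, 0.0]])   # 90 degrees off axis
hit = points_in_beam(pts, az0=0.0, el0=0.0, half_width=0.02)
```

Only the surviving samples then contribute to the target impulse response, so the per-pulse cost scales with the footprint rather than the whole scene.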
Spectral mapping of thermal conductivity through nanoscale ballistic transport
Hu, Yongjie; Zeng, Lingping; Minnich, Austin J.; Dresselhaus, Mildred S.; Chen, Gang
2015-08-01
Controlling thermal properties is central to many applications, such as thermoelectric energy conversion and the thermal management of integrated circuits. Progress has been made over the past decade by structuring materials at different length scales, but a clear relationship between structure size and thermal properties remains to be established. The main challenge comes from the unknown intrinsic spectral distribution of energy among heat carriers. Here, we experimentally measure this spectral distribution by probing quasi-ballistic transport near nanostructured heaters down to 30 nm using ultrafast optical spectroscopy. Our approach allows us to quantify up to 95% of the total spectral contribution to thermal conductivity from all phonon modes. The measurement agrees well with multiscale and first-principles-based simulations. We further demonstrate the direct construction of mean free path distributions. Our results provide a new fundamental understanding of thermal transport and will enable materials design in a rational way to achieve high performance.
Ngada, Narcisse
2015-06-15
The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.
Somers, B.; Asner, G. P.
2014-09-01
The use of imaging spectroscopy for floristic mapping of forests is complicated by the spectral similarity among co-existing species. Here we evaluated an alternative spectral unmixing strategy combining a time series of EO-1 Hyperion images with automated feature selection in Multiple Endmember Spectral Mixture Analysis (MESMA). The temporal analysis provided a way to incorporate species phenology, while feature selection indicated the best phenological time and best spectral feature set to optimize the separability between tree species. Instead of using the same set of spectral bands throughout the image, which is the standard approach in MESMA, our modified Wavelength Adaptive Spectral Mixture Analysis (WASMA) approach allowed the spectral subsets to vary on a per-pixel basis. As such, we were able to optimize the spectral separability between the tree species present in each pixel. The potential of the new approach for floristic mapping of tree species in Hawaiian rainforests was quantitatively assessed using both simulated and actual hyperspectral image time series. With a Cohen's Kappa coefficient of 0.65, WASMA provided a more accurate tree species map than conventional MESMA (Kappa = 0.54; p-value < 0.05). The flexible or adaptive use of band sets in WASMA provides an interesting avenue to address spectral similarities in complex vegetation canopies.
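The per-pixel band-subset idea can be sketched as follows. This is a simplification for illustration: candidate subsets are fitted by plain least squares, whereas MESMA/WASMA additionally iterate over endmember combinations and constrain the abundances.

```python
import numpy as np

def adaptive_unmix(pixel, endmembers, band_subsets):
    """Fit abundances on each candidate band subset by least squares
    and keep the subset with the smallest spectral residual."""
    best = None
    for bands in band_subsets:
        E = endmembers[:, bands].T                 # bands x endmembers
        a, *_ = np.linalg.lstsq(E, pixel[bands], rcond=None)
        resid = np.linalg.norm(E @ a - pixel[bands])
        if best is None or resid < best[0]:
            best = (resid, list(bands), a)
    return best

rng = np.random.default_rng(2)
endmembers = rng.uniform(0.0, 1.0, size=(2, 6))    # 2 species, 6 bands
pixel = 0.5 * endmembers[0] + 0.5 * endmembers[1]  # even spectral mixture
resid, bands, abundances = adaptive_unmix(
    pixel, endmembers, [[0, 1, 2], [3, 4, 5]])
```

Because the subset varies per pixel, each pixel is unmixed with the bands that best separate the species actually present in it, which is the essence of the WASMA modification.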
A new approach to incorporate operator actions in the simulation of accident sequences
International Nuclear Information System (INIS)
Antonio Exposito; Juan Antonio Quiroga; Javier Hortal; John-Einar Hulsund
2006-01-01
Nowadays, simulation-based human reliability analysis (HRA) methods seem to provide a new direction for the development of advanced methodologies to study the effect of operator actions during accident sequences. For this reason, the Spanish Nuclear Safety Council (CSN) started a working group whose objectives include developing such a simulation-based HRA methodology. As a result of its activities, a new methodology, named Integrated Safety Assessment (ISA), has been developed and is currently being incorporated into licensing activities at CSN. One of the key aspects of this approach is the capability to simulate operator actions, expanding the scope of the ISA methodology to HRA studies. For this reason, CSN is involved in several activities oriented towards developing a new tool able to incorporate operator actions into conventional thermohydraulic (TH) simulations. One of them is the collaboration project between CSN, the Halden Reactor Project (HRP) and the Department of Energy Systems (DSE) of the Polytechnic University of Madrid, which started in 2003. The basic aim of the project is to develop a software tool consisting of a closed-loop plant/operator simulator: a thermohydraulic (TH) code for simulating the plant transient and a procedures processor that feeds the information related to operator actions to the TH code, the two coupled by a data communication system that allows the information exchange. For the plant simulation there is a plant transient simulator code (TRETA/TIZONA for PWR/BWR NPPs, respectively), developed by the CSN, with PWR/BWR full-scope models. The functionality of these thermohydraulic codes has been expanded to control the overall information flow between the coupled codes, simulating the TH transient and determining when the operator actions must be considered. On the other hand, there is the COPMA-III code, a computerized procedure system able to manage XML operational
A Multi-Agent Approach to the Simulation of Robotized Manufacturing Systems
Foit, K.; Gwiazda, A.; Banaś, W.
2016-08-01
The recent years of eventful industry development have brought many competing products addressed to the same market segment. Shortening the development cycle has become a necessity if a company wants to remain competitive. With the switch to the Intelligent Manufacturing model, industry is searching for new scheduling algorithms, as the traditional ones no longer meet current requirements. The agent-based approach has been considered by many researchers as an important direction of evolution for modern manufacturing systems. Due to the properties of multi-agent systems, this methodology is very helpful in creating models of production systems, allowing both the processing and informational parts to be depicted. The complexity of such an approach makes analysis impossible without computer assistance. Computer simulation still uses a mathematical model to recreate a real situation, but nowadays 2D or 3D virtual environments, or even virtual reality, are used for realistic illustration of the considered systems. This paper focuses on robotized manufacturing systems and presents one of the possible approaches to the simulation of such systems. The selection of the multi-agent approach is motivated by the flexibility of this solution, which offers modularity, robustness and autonomy.
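A contract-net style auction, one common multi-agent scheduling mechanism, illustrates the approach. The agents, speeds and job sizes below are invented for the sketch; a full simulator of the kind described would add the plant model and the 3D virtual environment on top of such a core.

```python
class RobotAgent:
    """A processing agent that bids its promised completion time
    (contract-net style): earliest finish wins the task."""
    def __init__(self, name, speed):
        self.name, self.speed, self.free_at = name, speed, 0.0

    def bid(self, job_size):
        return self.free_at + job_size / self.speed

    def award(self, job_size):
        self.free_at = self.bid(job_size)
        return self.free_at

agents = [RobotAgent("R1", 1.0), RobotAgent("R2", 2.0)]
jobs = [4.0, 4.0, 2.0]                 # processing effort per job
schedule = []
for job in jobs:
    winner = min(agents, key=lambda a: a.bid(job))  # lowest bid wins
    schedule.append((winner.name, winner.award(job)))
```

No central scheduler holds the whole plan: each assignment emerges from the agents' local bids, which is what gives the model its modularity and robustness.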
International Nuclear Information System (INIS)
Sarıca, Kemal; Kumbaroğlu, Gürkan; Or, Ilhan
2012-01-01
In this study, a model is developed to investigate the implications of an hourly day-ahead competitive power market on generator profits, electricity prices, availability and supply security. An integrated simulation/optimization approach is employed, integrating a multi-agent simulation model with two alternative optimization models. The simulation model represents interactions between power generator, system operator, power user and power transmitter agents, while the network flow optimization model oversees and optimizes the electricity flows and dispatches generators based on two alternative approaches used in the modeling of the underlying transmission network: a linear minimum cost network flow model and a non-linear alternating current optimal power flow model. Supply, demand, transmission, capacity and other technological constraints are thereby enforced. The transmission network on which the scenario analyses are carried out includes 30 buses, 41 lines, 9 generators and 21 power users. The scenarios examined in the analysis cover various settings of transmission line capacities/fees and hourly learning algorithms. Results provide insight into key behavioral and structural aspects of a decentralized electricity market under network constraints and reveal the importance of using an AC network instead of a simplified linear network flow approach. -- Highlights: ► An agent-based simulation model with an AC transmission environment and a day-ahead market. ► Physical network parameters have dramatic effects on price levels and stability. ► Due to the AC nature of the transmission network, adaptive agents have more local market power than in the minimum cost network flow model. ► The behavior of the generators has a significant effect on market price formation, as reflected in bidding strategies. ► Transmission line capacity and fee policies are found to be very effective in price formation in the market.
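For contrast with the network-constrained models the study compares, the simplest possible market clearing, a merit-order stack with no network at all, fits in a few lines. The generator data are invented; this is precisely the kind of simplification whose limits the paper's AC results expose.

```python
def merit_order_dispatch(gens, demand):
    """Stack generator offers by price and dispatch cheapest-first;
    the marginal unit sets the clearing price. gens is a list of
    (name, capacity_MW, cost_per_MWh) tuples."""
    dispatch, remaining, price = {}, demand, 0.0
    for name, cap, cost in sorted(gens, key=lambda g: g[2]):
        q = min(cap, remaining)
        if q > 0:
            dispatch[name] = q
            price = cost          # marginal cost of the last unit dispatched
            remaining -= q
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return dispatch, price

gens = [("G1", 50, 10), ("G2", 50, 20), ("G3", 50, 30)]
dispatch, price = merit_order_dispatch(gens, demand=80)
```

Once transmission limits and AC power flow physics are added, cheap generators can no longer always be dispatched first, which is how local market power arises in the full model.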
Moon, Gi Jong; Yang, Yu Dong; Oh, Jung Min; Kang, In Seok
2017-11-01
Osmotic pressure plays an important role in the processes of charging and discharging of lithium batteries. In this work, the osmotic pressure of ionic liquids confined inside a nanoslit is calculated using both MD simulation and a continuum approach. In the case of MD simulation, an ionic liquid is modeled as singly charged spheres with a short-ranged repulsive Lennard-Jones potential. The radii of the spheres are 0.5 nm, reflecting symmetric ion sizes for simplicity. The simulation box size is 11 nm × 11 nm × 7.5 nm with 1050 ion pairs. The concentration of the ionic liquid is about 1.922 mol/L, and the total charge on an individual wall varies from ±60e (7.944 μC/cm2) to ±600e (79.44 μC/cm2). In the case of the continuum approach, we classify the problems according to the correlation length and steric factor, and consider four separate cases: 1) zero correlation length and zero steric factor, 2) zero correlation length and non-zero steric factor, 3) non-zero correlation length and zero steric factor, and 4) non-zero correlation length and non-zero steric factor. A better understanding of the osmotic pressure of ionic liquids confined inside a nanoslit can be achieved by comparing the results of MD simulation and the continuum approach. This research was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP: Ministry of Science, ICT & Future Planning) (No. 2017R1D1A1B05035211).
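On the continuum side, case 1 (zero correlation length, zero steric factor) is the classical Poisson-Boltzmann limit, where the dimensionless midplane osmotic pressure between two plates is Pi/(2 n0 kT) = cosh(psi_mid) - 1. A minimal finite-difference sketch follows, assuming fixed-potential rather than fixed-charge walls for simplicity (lengths in Debye lengths, potentials in kT/e):

```python
import numpy as np

def pb_midplane_pressure(psi_wall, gap, n=201, iters=50):
    """Solve the 1D Poisson-Boltzmann equation psi'' = sinh(psi)
    between two plates by Newton iteration, then return the
    dimensionless osmotic pressure cosh(psi_mid) - 1."""
    h = gap / (n - 1)
    psi = np.full(n, float(psi_wall))        # initial guess: flat profile
    for _ in range(iters):
        F = np.zeros(n)                      # residual of discretized PB
        F[1:-1] = ((psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h**2
                   - np.sinh(psi[1:-1]))
        J = np.zeros((n, n))
        J[0, 0] = J[-1, -1] = 1.0            # Dirichlet rows, residual 0
        for i in range(1, n - 1):
            J[i, i - 1] = J[i, i + 1] = 1.0 / h**2
            J[i, i] = -2.0 / h**2 - np.cosh(psi[i])
        psi -= np.linalg.solve(J, F)         # Newton update
    return float(np.cosh(psi[n // 2]) - 1.0)

p_narrow = pb_midplane_pressure(psi_wall=2.0, gap=2.0)
p_wide = pb_midplane_pressure(psi_wall=2.0, gap=4.0)
```

As the double layers of the two walls overlap less (wider gap), the midplane pressure decays, which is the qualitative trend the confined-slit calculations probe; the steric and correlation cases modify the closure, not this overall structure.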
A hybrid load flow and event driven simulation approach to multi-state system reliability evaluation
International Nuclear Information System (INIS)
George-Williams, Hindolo; Patelli, Edoardo
2016-01-01
Structural complexity of systems, coupled with their multi-state characteristics, renders their reliability and availability evaluation difficult. Notwithstanding the emergence of various techniques dedicated to complex multi-state system analysis, simulation remains the only approach applicable to realistic systems. However, most simulation algorithms are either system specific or limited to simple systems, since they require enumerating all possible system states, defining the cut-sets associated with each state and monitoring their occurrence. In addition to being extremely tedious for large complex systems, state enumeration and cut-set definition require a detailed understanding of the system's failure mechanism. In this paper, a simple and generally applicable simulation approach, enhanced for multi-state systems of any topology, is presented. Here, each component is modelled as a semi-Markov stochastic process, and the operation of the system is mimicked via discrete-event simulation. The principles of flow conservation are invoked to determine the flow across the system for every performance-level change of its components, using the interior-point algorithm. This eliminates the need for cut-set definition and overcomes the limitations of existing techniques. The methodology can also be exploited to account for the effects of transmission efficiency and loading restrictions of components on system reliability and performance. The principles and algorithms developed are applied to two numerical examples to demonstrate their applicability. - Highlights: • A discrete-event simulation model based on load flow principles. • The model does not require system paths or cut sets. • Applicable to binary and multi-state systems of any topology. • Supports multiple-output systems with competing demand. • The model is intuitive and generally applicable.
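The component-level stochastic process can be mimicked with ordinary Monte Carlo discrete-event logic. A stripped-down sketch for a single binary repairable component follows (the paper's method generalizes this to multi-state components plus an interior-point flow solve; the failure and repair rates here are arbitrary):

```python
import random

random.seed(1)

def simulate_availability(rate_fail, rate_repair, horizon, n_runs=2000):
    """Monte Carlo discrete-event estimate of availability for one
    repairable component: alternate exponential up/down sojourn times
    and accumulate the fraction of time spent up."""
    total_up = 0.0
    for _ in range(n_runs):
        t, up = 0.0, True
        while t < horizon:
            dt = random.expovariate(rate_fail if up else rate_repair)
            dt = min(dt, horizon - t)      # truncate at the horizon
            if up:
                total_up += dt
            t += dt
            up = not up                    # fail <-> repair transition
    return total_up / (n_runs * horizon)

A = simulate_availability(rate_fail=0.1, rate_repair=1.0, horizon=100.0)
```

For these rates the estimate should approach the analytic steady-state availability mu/(lambda + mu) = 1/1.1, about 0.91; exponential sojourns make this a Markov special case of the semi-Markov components the paper uses.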
CATHARE Approach Recommended by EDF/SEPTEN for Training (or other) Simulators
International Nuclear Information System (INIS)
Pentori, B.; Iffeneckeft, F.; Poizat, F.
1999-01-01
This paper describes EDF's approach to NSSS thermal-hydraulics - this is the crucial module in a real-time simulator (this constraint relaxes requirements in respect of neutronics) because it determines the simulator's scope of application. The approach has involved several stages: (1) Existing full-scalers (1980-85 design), equipped with a five-equation primary model (about 40 nodes), coupled with a three-equation axial model of the SG secondary side (plus a very simple model for refilling/venting and draining), which can simulate only a small, 2-inch LOCA and up to 15 bar primary-system pressure; (2) SIPA(CT) and the new full-scalers at Fessenheim and Bugey (1990-95 design). These tools feature Cathare-Simu, an outgrowth of CATHARE 1 (six primary-system equations, four secondary-side equations, at least 187 nodes - extended to the steam header, implicit digital processing, possible parallelisation): this model permits simulation of breaks of up to 12 inches and at very low primary-system pressure; (3) SCAR (1995-2000 design) will be adapted from the CATHARE 2 design code (six equations everywhere, non condensables, 2D and 3D modules), and will allow simulator processing of all operating conditions (except for a severe accident, in the strict sense of core melt), including scenarios based on 481 broken primary piping, at atmospheric pressure. Only the fine-modelling capabilities of CATHARE make it possible to add genuine echographies to the traditional Man Machine Interface. (author)
Towards socio-material approaches in simulation-based education: lessons from complexity theory.
Fenwick, Tara; Dahlgren, Madeleine Abrandt
2015-04-01
Review studies of simulation-based education (SBE) consistently point out that theory-driven research is lacking. The literature to date is dominated by discourses of fidelity and authenticity - creating the 'real' - with a strong focus on the development of clinical procedural skills. Little of this writing incorporates the theory and research proliferating in professional studies more broadly, which show how professional learning is embodied, relational and situated in socio-material relations. A key question for medical educators is how to better prepare students for the unpredictable and dynamic ambiguity of professional practice; this has stimulated the movement towards socio-material theories in education that address precisely this question. Among the various socio-material theories that are informing new developments in professional education, complexity theory has been of particular importance for medical educators interested in updating current practices. This paper outlines key elements of complexity theory, illustrated with examples from empirical study, to argue its particular relevance for improving SBE. Complexity theory can make visible important material dynamics, and their problematic consequences, that are not often noticed in simulated experiences in medical training. It also offers conceptual tools that can be put to practical use. This paper focuses on the concepts of emergence, attunement, disturbance and experimentation. These suggest useful new approaches for designing simulated settings and scenarios, and for effective pedagogies before, during and following simulation sessions. Socio-material approaches such as complexity theory are spreading through research and practice in many aspects of professional education across disciplines. Here, we argue for the transformative potential of complexity theory in medical education, using simulation as our focus. Complexity tools open questions about the socio-material contradictions inherent in
A simulation of the Upper San Fernando dam using a synthesized approach
International Nuclear Information System (INIS)
Beaty, M.H.; Byrne, P.M.
1999-01-01
A mechanics-based approach to assessing post-liquefaction displacements in slopes is discussed. The approach, which involves approximation of soil behaviour by using numerical models, is derived from total stress procedures and is said to have two major advantages: (1) it combines the triggering and post-liquefaction response into one analysis, and (2) it improves the modeling of post-liquefaction element behaviour. Application of the approach is demonstrated through the simulation of the response of the Upper San Fernando dam to the 1971 San Fernando earthquake. Results were compared to the Bartlett and Youd empirical procedure and were found to agree with expectations reasonably well. Viscous damping, blowcount, and residual strength in simple shear were found to be the key variables. Some questions still remain to be answered regarding some of the input parameters, particularly the viscous damping coefficients. Research to further elucidate the mechanism is continuing. 21 refs., 19 figs
Energy Technology Data Exchange (ETDEWEB)
Camera, S. [Jodrell Bank Centre for Astrophysics, The University of Manchester, Alan Turing Building, Oxford Road, Manchester M13 9PL (United Kingdom); Fornasa, M. [School of Physics and Astronomy, University of Nottingham, University Campus, Nottingham NG7 2RD (United Kingdom); Fornengo, N.; Regis, M., E-mail: stefano.camera@manchester.ac.uk, E-mail: fornasam@gmail.com, E-mail: fornengo@to.infn.it, E-mail: regis@to.infn.it [Dipartimento di Fisica, Università di Torino, Via P. Giuria 1, 10125 Torino (Italy)
2015-06-01
We recently proposed to cross-correlate the diffuse extragalactic γ-ray background with the gravitational lensing signal of cosmic shear. This represents a novel and promising strategy to search for annihilating or decaying particle dark matter (DM) candidates. In the present work, we demonstrate the potential of a tomographic-spectral approach: measuring the cross-correlation in separate bins of redshift and energy significantly improves the sensitivity to a DM signal. Indeed, the technique proposed here takes advantage of the different scaling of the astrophysical and DM components with redshift and, simultaneously of their different energy spectra and different angular extensions. The sensitivity to a particle DM signal is extremely promising even when the DM-induced emission is quite faint. We first quantify the prospects of detecting DM by cross-correlating the Fermi Large Area Telescope (LAT) diffuse γ-ray background with the cosmic shear expected from the Dark Energy Survey. Under the hypothesis of a significant subhalo boost, such a measurement can deliver a 5σ detection of DM, if the DM particle is lighter than 300 GeV and has a thermal annihilation rate. We then forecast the capability of the European Space Agency Euclid satellite (whose launch is planned for 2020), in combination with an hypothetical future γ-ray detector with slightly improved specifications compared to current telescopes. We predict that the cross-correlation of their data will allow a measurement of the DM mass with an uncertainty of a factor of 1.5–2, even for moderate subhalo boosts, for DM masses up to few hundreds of GeV and thermal annihilation rates.
International Nuclear Information System (INIS)
Kristof, Marian; Kliment, Tomas; Petruzzi, Alessandro; Lipka, Jozef
2009-01-01
Licensing calculations in a majority of countries worldwide still rely on a combined approach that uses a best-estimate computer code without evaluating the uncertainty of the code models, together with conservative assumptions on initial and boundary conditions, on the availability of systems and components, and additional conservative assumptions. However, the best estimate plus uncertainty (BEPU) approach, representing the state of the art in safety analysis, has a clear potential to replace the currently used combined approach. There are several applications of the BEPU approach to licensing calculations, but some questions remain under discussion, notably from the regulatory point of view. In order to find a proper solution to these questions and to support the BEPU approach in becoming the standard approach for licensing calculations, a broad comparison of both approaches for various transients is necessary. Results of one such comparison, for the example of a VVER-440/213 NPP pressurizer surge-line break event, are described in this paper. A Kv-scaled simulation based on the PH4-SLB experiment from the PMK-2 integral test facility, applying its volume and power scaling factors, is performed for qualitative assessment of the RELAP5 computer code calculation using the VVER-440/213 plant model. Existing hardware differences are identified and explained. The CIAU method is adopted for the uncertainty evaluation. Results using both the combined and the BEPU approach are in agreement with the experimental values from the PMK-2 facility. Only a minimal difference between the combined and BEPU approaches was observed in the evaluation of the safety margins for the peak cladding temperature. Benefits of the CIAU uncertainty method are highlighted.
Evaluation of various modelling approaches in flood routing simulation and flood area mapping
Papaioannou, George; Loukas, Athanasios; Vasiliades, Lampros; Aronica, Giuseppe
2016-04-01
An essential process of flood hazard analysis and mapping is floodplain modelling. The selection of the modelling approach, especially in complex riverine topographies such as urban and suburban areas and in ungauged watersheds, may affect the accuracy of the outcomes in terms of flood depths and flood inundation area. In this study, a sensitivity analysis was implemented using several hydraulic-hydrodynamic modelling approaches (1D, 2D, 1D/2D), and the effect of the modelling approach on flood modelling and flood mapping was investigated. The digital terrain model (DTM) used in this study was generated from Terrestrial Laser Scanning (TLS) point cloud data. The modelling approaches included 1-dimensional (1D), 2-dimensional (2D) and coupled 1D/2D hydraulic-hydrodynamic models. The 1D models used were HECRAS, MIKE11, LISFLOOD and XPSTORM. The 2D models used were MIKE21, MIKE21FM, HECRAS (2D), XPSTORM, LISFLOOD and FLO2d. The coupled 1D/2D models employed were HECRAS (1D/2D), MIKE11/MIKE21 (MIKE FLOOD platform), MIKE11/MIKE21 FM (MIKE FLOOD platform) and XPSTORM (1D/2D). Validation of the flood extent was achieved with the use of 2x2 contingency tables between the simulated and observed flooded areas for an extreme historical flash flood event. The Critical Success Index skill score was used in the validation process. The modelling approaches were also evaluated for simulation time and required computing power. The methodology was implemented in a suburban ungauged watershed of the Xerias river at Volos, Greece. The results of the analysis indicate the necessity of applying a sensitivity analysis over different hydraulic-hydrodynamic modelling approaches, especially for areas with complex terrain.
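The Critical Success Index used for validation above is computed from a 2x2 contingency table of simulated versus observed flood extent; a minimal sketch (function and array names are illustrative, not from the paper):

```python
import numpy as np

def critical_success_index(simulated, observed):
    """Critical Success Index (threat score) for binary flood-extent maps:
    CSI = hits / (hits + misses + false alarms); correct negatives ignored."""
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    hits = np.sum(sim & obs)            # flooded in both maps
    misses = np.sum(~sim & obs)         # observed flooding the model missed
    false_alarms = np.sum(sim & ~obs)   # modelled flooding not observed
    return hits / (hits + misses + false_alarms)

# Toy 4-cell maps: 2 hits, 1 miss, 1 false alarm -> CSI = 2/4 = 0.5
print(critical_success_index([1, 1, 0, 1], [1, 1, 1, 0]))  # 0.5
```

A CSI of 1 indicates perfect overlap of simulated and observed extents; the score penalizes both under- and over-prediction, which is why it is popular for flood-map validation.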
Approaches to simulating the “March of Bricks and Mortar”
Goldstein, Noah Charles; Candau, J.T.; Clarke, K.C.
2004-01-01
Re-creation of the extent of urban land use at different periods in time is valuable for examining how cities grow and how policy changes influence urban dynamics. To date, there has been little focus on the modeling of historical urban extent (other than for ancient cities). Instead, current modeling research has emphasized simulating the cities of the future. Predictive models can provide insights into urban growth processes and are valuable for land-use and urban planners, yet historical trends are largely ignored. This is unfortunate since historical data exist for urban areas and can be used to quantitatively test dynamic models and theory. We maintain that understanding the growth dynamics of a region's past allows more intelligent forecasts of its future. We compare a spatio-temporal interpolation method with an agent-based simulation approach to re-create the urban extent of Santa Barbara, California, annually from 1929 to 2001. The first method uses current yet incomplete data on the construction of homes in the region. The latter uses a Cellular Automata based model, SLEUTH, to back- or hind-cast the urban extent. The success of the two approaches at reproducing historical urban growth was quantified for comparison. The performance of each method is described, as well as the utility of each model in re-creating the history of Santa Barbara. Additionally, the models' assumptions about space are contrasted. As a consequence, we propose that both approaches are useful in historical urban simulations, yet the cellular approach is more flexible as it can be extended for spatio-temporal extrapolation.
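The cellular-automaton side of such urban-growth models can be illustrated with a toy growth step; the rule below (spontaneous urbanization plus neighbor-driven edge growth) is a crude stand-in for SLEUTH's coefficient-driven behaviour, with made-up probabilities:

```python
import numpy as np

def ca_growth_step(urban, p_spread=0.3, p_spontaneous=0.001, rng=None):
    """One toy urban-growth step on a boolean grid: a non-urban cell
    urbanizes with probability p_spread per urban 4-neighbor (edge growth),
    with a small spontaneous-growth floor. Urban cells stay urban."""
    if rng is None:
        rng = np.random.default_rng(0)
    urban = urban.astype(bool)
    # Count 4-connected urban neighbors via shifted copies of the grid.
    n = np.zeros(urban.shape, dtype=int)
    n[1:, :] += urban[:-1, :]; n[:-1, :] += urban[1:, :]
    n[:, 1:] += urban[:, :-1]; n[:, :-1] += urban[:, 1:]
    p = 1.0 - (1.0 - p_spread) ** n        # edge growth probability
    p = np.maximum(p, p_spontaneous)       # spontaneous growth floor
    return urban | ((rng.random(urban.shape) < p) & ~urban)

rng = np.random.default_rng(0)
grid = np.zeros((5, 5), dtype=bool)
grid[2, 2] = True                          # single urban seed
for _ in range(3):
    grid = ca_growth_step(grid, rng=rng)
print(grid.sum())  # urban cells after 3 steps
```

Hind-casting, as in the paper, amounts to calibrating such rules so that repeated forward steps from a historical seed reproduce known past extents.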
Riley, Donald R.; Brandon, Jay M.; Glaab, Louis J.
1994-01-01
A six-degree-of-freedom nonlinear simulation of a twin-pusher, turboprop business/commuter aircraft configuration representative of the Cessna ATPTB (Advanced Turboprop Test Bed) was developed for use in piloted studies with the Langley General Aviation Simulator. The math models developed are provided, simulation predictions are compared with Cessna flight-test data for validation purposes, and results of a handling quality study during simulated ILS (instrument landing system) approaches and missed approaches are presented. Simulated flight trajectories, task performance measures, and pilot evaluations are presented for the ILS approach and missed-approach tasks conducted with the vehicle in the presence of moderate turbulence, varying horizontal winds and engine-out conditions. Six test subjects consisting of two research pilots, a Cessna test pilot, and three general aviation pilots participated in the study. This effort was undertaken in cooperation with the Cessna Aircraft Company.
Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing
2015-11-21
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.
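For reference, the strain-energy densities behind the material models named above are commonly written as follows (standard textbook forms; the paper's exact parameterization is not given here, and the coefficient values are patient-specific):

```latex
% Neo-Hookean (compressible):
W_{\mathrm{NH}} = C_{10}\,(\bar{I}_1 - 3) + \tfrac{1}{D_1}\,(J - 1)^2
% Uncoupled Mooney-Rivlin (adds a second deviatoric invariant term):
W_{\mathrm{MR}} = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \tfrac{1}{D_1}\,(J - 1)^2
```

Here \bar{I}_1 and \bar{I}_2 are the deviatoric strain invariants, J is the volume ratio, and C_{10}, C_{01}, D_1 are the material parameters whose patient-specific values the study estimates by minimizing TCM motion simulation error.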
Miyasaka, Kiyoyuki W; Buchholz, Joseph; LaMarra, Denise; Karakousis, Giorgos C; Aggarwal, Rajesh
2015-01-01
Contemporary demands on resident education call for integration of simulation. We designed and implemented a simulation-based curriculum for Post Graduate Year 1 surgery residents to teach technical and nontechnical skills within a clinical pathway approach for a foregut surgery patient, from outpatient visit through surgery and postoperative follow-up. The 3-day curriculum for groups of 6 residents comprises a combination of standardized patient encounters, didactic sessions, and hands-on training. The curriculum is underpinned by a summative simulation "pathway" repeated on days 1 and 3. The "pathway" is a series of simulated preoperative, intraoperative, and postoperative encounters in following up a single patient through a disease process. The resident sees a standardized patient in the clinic presenting with distal gastric cancer and then enters an operating room to perform a gastrojejunostomy on a porcine tissue model. Finally, the resident engages in a simulated postoperative visit. All encounters are rated by faculty members and the residents themselves, using standardized assessment forms endorsed by the American Board of Surgery. A total of 18 first-year residents underwent this curriculum. Faculty ratings of overall operative performance significantly improved following the 3-day module. Ratings of preoperative and postoperative performance were not significantly changed in 3 days. Resident self-ratings significantly improved for all encounters assessed, as did reported confidence in meeting the defined learning objectives. Conventional surgical simulation training focuses on technical skills in isolation. Our novel "pathway" curriculum targets an important gap in training methodologies by placing both technical and nontechnical skills in their clinical context as part of managing a surgical patient. Results indicate consistent improvements in assessments of performance as well as confidence, and support its continued use to educate surgery residents.
A Companion Model Approach to Modelling and Simulation of Industrial Processes
International Nuclear Information System (INIS)
Juslin, K.
2005-09-01
Modelling and simulation offer huge possibilities if broadly taken up by engineers as a working method. However, when modelling and simulation tools are launched in an engineering design project, they must be easy to learn and use: there is no time to write equations, to consult suppliers' experts, or to manually transfer data from one tool to another. The answer seems to lie in the integration of easy-to-use and dependable simulation software with engineering tools. Accordingly, the modelling and simulation software shall accept as input the structured design information on industrial unit processes and their connections provided by, e.g., CAD software and product databases. The software technology, including the required specification and communication standards, is already available. Internet-based service repositories make it possible for equipment manufacturers to supply 'extended products', including the design data needed by engineers engaged in process and automation integration. A market niche is evolving for simulation service centres, operating in co-operation with project consultants, equipment manufacturers, process integrators, automation designers, plant operating personnel, and maintenance centres. The companion model approach for the specification and solution of process simulation models, as presented herein, is developed from the above premises. The focus is on how to tackle real-world processes, which from the modelling point of view are heterogeneous, dynamic, very stiff, very nonlinear and only piecewise continuous, without extensive manual intervention by human experts. An additional challenge, solving the arising equations fast and reliably, is dealt with as well. (orig.)
Directory of Open Access Journals (Sweden)
Song-Hun Chong
2017-10-01
Full Text Available This paper analyzes the long-term response of unlined energy storage located at shallow depth, so as to improve the distance between a wind farm and the storage. The numerical approach follows a hybrid scheme that combines a mechanical constitutive model, used to extract stresses and strains at the first cycle, with polynomial-type strain accumulation functions that track the progressive plastic deformation. In particular, the strain function includes the fundamental features required for simulating the long-term response of geomaterials: volumetric strain (terminal void ratio) and shear strain (shakedown and ratcheting), the strain accumulation rate, and stress obliquity. The model is tested with a triaxial strain boundary condition under different stress obliquities. The unlined storage subjected to cyclic internal stress is simulated with different storage geometries and stress amplitudes, which play a crucial role in estimating the long-term mechanical stability of underground storage. The simulations show the evolution of the ground surface, whose incremental rate approaches a terminal void ratio. With regular and smooth displacement fields for a large number of cycles, the inflection point is estimated with the previous surface settlement model.
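A saturating strain-accumulation law of the kind described, with plastic strain approaching a terminal value as the cycle count grows, can be sketched as follows; this hyperbolic form and its parameters are illustrative, not the paper's polynomial functions:

```python
def accumulated_strain(n_cycles, eps_terminal, n_half):
    """Toy strain-accumulation law: plastic strain grows with cycle count N
    and saturates at eps_terminal (terminal-void-ratio-like behaviour).
    n_half is the cycle count at which half the terminal strain is reached:
    fast saturation (small n_half) resembles shakedown, slow saturation
    (large n_half) resembles ratcheting. A generic hyperbolic form."""
    return eps_terminal * n_cycles / (n_half + n_cycles)

# Strain after selected cycle counts, saturating toward 5% terminal strain.
for n in (0, 100, 10_000, 1_000_000):
    print(n, accumulated_strain(n, eps_terminal=0.05, n_half=100))
```

The key qualitative feature matches the abstract: the incremental settlement rate decays with cycle count while the total strain tends to a finite terminal value.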
Directory of Open Access Journals (Sweden)
Hamidreza Reihani
2015-01-01
Full Text Available Objective: In this trial, we assess the effect of a simulation-based education approach on advanced cardiovascular life support skills among medical students. Methods: Through a convenience sampling method, 40 interns of Mashhad University of Medical Sciences in their emergency medicine rotation (from September to December 2012) participated in this study. Advanced Cardiovascular Life Support (ACLS) workshops with pretest and post-test exams were performed. Workshops and checklists for the pretest and post-test exams were designed according to the latest American Heart Association (AHA) guidelines. Results: The total score of the students increased significantly after the workshops (from 24.6 out of 100 to 78.6 out of 100), demonstrating a 53.9% improvement in skills after the simulation-based education (P < 0.001). The mean score of each station also improved significantly (P < 0.001). Conclusion: The pretests showed that interns performed poorly in practical clinical matters, while their scientific knowledge, such as ECG interpretation, was acceptable. The overall results of the study highlight that the simulation-based education approach is highly effective in improving ACLS skills among medical students.
Liu, Peter X.; Lai, Pinhua; Xu, Shaoping; Zou, Yanni
2018-01-01
At present, the majority of implemented virtual surgery simulation systems are based on either a mesh or a meshless strategy for soft tissue modelling. To take full advantage of both mesh and meshless models, a novel coupled soft tissue cutting model is proposed. Specifically, the reconstructed virtual soft tissue consists of two essential components: a surface mesh, which is convenient for surface rendering, and internal meshless point elements, which are used to calculate the force feedback during cutting. To combine the two components seamlessly, virtual points are introduced. During the simulation of cutting, a Bezier curve is used to characterize a smooth and vivid incision on the surface mesh. At the same time, the deformation of internal soft tissue caused by the cutting operation is treated as displacements of the internal point elements. Furthermore, we discuss and prove the stability and convergence of the proposed approach theoretically. Real biomechanical tests verified the validity of the introduced model, and simulation experiments show that the proposed approach offers high computational efficiency and good visual effects, enabling cutting of soft tissue with high stability. PMID:29850006
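The Bezier-curve incision mentioned above can be evaluated with de Casteljau's algorithm, which reduces curve evaluation to repeated linear interpolation of control points; the control points below are illustrative, not from the paper:

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by de Casteljau's
    algorithm: repeatedly lerp adjacent control points until one remains.
    Numerically stable and works for any curve degree."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Cubic curve for a smooth incision path on a surface patch (toy points).
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
incision = [bezier_point(ctrl, t) for t in np.linspace(0.0, 1.0, 11)]
print(incision[0], incision[-1])  # endpoints equal first/last control points
```

Sampling the curve densely, as in the list comprehension, yields the smooth incision polyline that is then imprinted on the surface mesh.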
Hybrid spectral CT reconstruction.
Directory of Open Access Journals (Sweden)
Darin P Clark
Full Text Available Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with
A Green's Function Approach to Simulate DNA Damage by the Indirect Effect
Plante, Ianik; Cicinotta, Francis A.
2013-01-01
DNA damage is of fundamental importance in understanding the effects of ionizing radiation. DNA is damaged by the direct effect of radiation (e.g. direct ionization) and by the indirect effect (e.g. damage by .OH radicals created by the radiolysis of water). Despite years of research, many questions about DNA damage by ionizing radiation remain. In recent years, the Green's functions of the diffusion equation (GFDE) have been used extensively in biochemistry [1], notably to simulate biochemical networks in time and space [2]. In our future work on DNA damage, we wish to use an approach based on the GFDE to refine existing models of the indirect effect of ionizing radiation on DNA. To do so, we will use the code RITRACKS [3] developed at the NASA Johnson Space Center to simulate the radiation track structure and calculate the positions of radiolytic species after irradiation. We have also recently developed an efficient Monte Carlo sampling algorithm for the GFDE of reversible reactions with an intermediate state [4], which can be modified and adapted to simulate DNA damage by free radicals. To do so, we will use the known reaction rate constants between radicals (OH, eaq, H,...) and the DNA bases, sugars and phosphates, and use the sampling algorithms to simulate the diffusion of free radicals and their chemical reactions with DNA. These techniques should help in understanding the contribution of the indirect effect to the formation of DNA damage and double-strand breaks.
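For free diffusion, the Green's function of the diffusion equation is a Gaussian with variance 2*D*dt per axis, so radical positions between reaction events can be sampled directly; the diffusion coefficient and time step below are illustrative values, not taken from RITRACKS:

```python
import numpy as np

def diffuse(positions, D, dt, rng):
    """Propagate particles by the free-diffusion Green's function:
    p(r, t+dt | r0, t) is Gaussian with variance 2*D*dt per axis."""
    sigma = np.sqrt(2.0 * D * dt)
    return positions + rng.normal(0.0, sigma, size=positions.shape)

rng = np.random.default_rng(42)
# 10000 radicals at the origin; D ~ 2.8e-9 m^2/s (an OH-like value used
# here only for illustration), dt = 1 ns.
r0 = np.zeros((10000, 3))
r1 = diffuse(r0, D=2.8e-9, dt=1e-9, rng=rng)
msd = float(np.mean(np.sum(r1**2, axis=1)))
print(msd, 6 * 2.8e-9 * 1e-9)  # sample MSD vs theoretical 3D value 6*D*dt
```

The reversible-reaction GFDE of the cited algorithm replaces this free-diffusion Gaussian with a propagator that also accounts for binding and unbinding, but the sampling structure is the same.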
Management of Housing and Public Services of a City: the System and Simulation Approach
Directory of Open Access Journals (Sweden)
Bril Mykhailo S.
2017-12-01
Full Text Available The article is dedicated to the development of models for management of housing and communal services of a city on the basis of the system approach and simulation modeling. A review of the existing models of urban systems is carried out, their advantages and disadvantages are shown. With the use of the methods of simulation and scenario modeling, a simulation model for management of housing and communal services of a city has been developed, which makes it possible to predict the dynamics of the main socio-economic indicators of development of a city. On the basis of the model, the forecast indicators were simulated according to various scenarios for distributing financial resources for the renovation and maintenance of housing facilities of a city. The main criterion for effectiveness of the scenarios is the level of housing provision for the population. The models built can be used for making managerial decisions by local government authorities as well as in elaborating programs for the urban social and economic development.
Simulating Controlled Radical Polymerizations with mcPolymer—A Monte Carlo Approach
Directory of Open Access Journals (Sweden)
Georg Drache
2012-07-01
Full Text Available Utilizing model calculations may lead to a better understanding of the complex kinetics of controlled radical polymerization. We developed a universal simulation tool (mcPolymer), which is based on the widely used Monte Carlo simulation technique. This article focuses on the software architecture of the program, including its data management and optimization approaches. We were able to simulate polymer chains as individual objects, allowing us to gain more detailed microstructural information on the polymeric products. For all given examples of controlled radical polymerization (nitroxide-mediated radical polymerization (NMRP) homo- and copolymerization, atom transfer radical polymerization (ATRP), reversible addition-fragmentation chain transfer polymerization (RAFT)), we present detailed performance analyses demonstrating the influence of the system size, the concentrations of reactants, and the peculiarities of the data. Different possibilities for finding an adequate balance between precision, memory consumption, and computation time of the simulation are illustrated with examples. Due to its flexible software architecture, the application of mcPolymer is not limited to controlled radical polymerization, but can be adjusted in a straightforward manner to further polymerization models.
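The stochastic-simulation core of such a Monte Carlo polymerization tool can be sketched with a bare propagation/termination scheme in which each chain is an individual object; the rate constants and the two-reaction scheme are illustrative, far simpler than mcPolymer's NMRP/ATRP/RAFT models:

```python
import random

def simulate_polymerization(n_radicals=200, n_monomer=20000,
                            kp=1.0, kt=0.02, seed=1):
    """Toy kinetic Monte Carlo: a living chain propagates (rate ~ kp*M*R)
    or two chains terminate by combination (rate ~ kt*R*(R-1)). Each chain
    is an individual object (its length in monomer units), so the full
    chain-length distribution is available, as in mcPolymer."""
    rng = random.Random(seed)
    chains = [1] * n_radicals        # living chains
    dead = []                        # terminated chains
    monomer = n_monomer
    while len(chains) >= 2 and monomer > 0:
        r_prop = kp * monomer * len(chains)
        r_term = kt * len(chains) * (len(chains) - 1)
        if rng.random() < r_prop / (r_prop + r_term):
            chains[rng.randrange(len(chains))] += 1   # propagation
            monomer -= 1
        else:                                         # termination by combination
            i, j = rng.sample(range(len(chains)), 2)
            dead.append(chains[i] + chains[j])
            chains = [c for k, c in enumerate(chains) if k not in (i, j)]
    return dead, chains

dead, living = simulate_polymerization()
lengths = dead + living
print(sum(lengths) / len(lengths))  # number-average degree of polymerization
```

Because chains are explicit objects rather than moments of a distribution, dispersity, copolymer sequences, and end-group statistics fall out of the simulation directly, at the cost of memory proportional to the number of chains.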
An Automatic Approach to the Stabilization Condition in a HIx Distillation Simulation
International Nuclear Information System (INIS)
Chang, Ji Woon; Shin, Young Joon; Lee, Ki Young; Kim, Yong Wan; Chang, Jong Hwa; Youn, Cheung
2010-01-01
In the Sulfur-Iodine (SI) thermochemical process to produce nuclear hydrogen, the H2O-HI-I2 ternary mixture solution discharged from the Bunsen reaction is first concentrated by electro-electrodialysis. The concentrated solution is distilled in the HIx distillation column to generate high-purity HI vapor. The pure HI vapor is obtained at the top of the HIx distillation column and the diluted HIx solution is discharged at the bottom of the column. In order to simulate the steady-state HIx distillation column, a vapor-liquid equilibrium (VLE) model of the H2O-HI-I2 ternary system is required; the subprogram to calculate VLE concentrations was introduced by a KAERI research group in 2006, and the steady-state simulation code for the HIx distillation process was developed in 2007. However, intrinsic features of the VLE data, such as the steep slope of the T-x-y diagram, caused instability in the simulation calculation. In this paper, a computer program to automatically find a stabilization condition in the steady-state simulation of the HIx distillation column is introduced. A graphic user interface (GUI) function to monitor the approach to the stabilization condition was added to this program.
The fuel cell model of abiogenesis: a new approach to origin-of-life simulations.
Barge, Laura M; Kee, Terence P; Doloboff, Ivria J; Hampton, Joshua M P; Ismail, Mohammed; Pourkashanian, Mohamed; Zeytounian, John; Baum, Marc M; Moss, John A; Lin, Chung-Kuang; Kidd, Richard D; Kanik, Isik
2014-03-01
In this paper, we discuss how prebiotic geo-electrochemical systems can be modeled as a fuel cell and how laboratory simulations of the origin of life in general can benefit from this systems-led approach. As a specific example, the components of what we have termed the "prebiotic fuel cell" (PFC) that operates at a putative Hadean hydrothermal vent are detailed, and electrochemical analysis techniques and proton exchange membrane (PEM) fuel cell components were used to test the properties of this PFC and other geo-electrochemical systems; the results are reported here. The modular nature of fuel cells makes them ideal for creating geo-electrochemical reactors with which to simulate hydrothermal systems on wet rocky planets and characterize the energetic properties of the seafloor/hydrothermal interface. That electrochemical techniques should be applied to simulating the origin of life follows from the recognition of the fuel cell-like properties of prebiotic chemical systems and the earliest metabolisms. Conducting this type of laboratory simulation of the emergence of bioenergetics will not only be informative in the context of the origin of life on Earth but may help in understanding whether life might emerge in similar environments on other worlds.
On the generalization of the hazard rate twisting-based simulation approach
Rached, Nadhir B.
2016-11-17
Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Naive Monte Carlo simulation is the standard technique for estimating this type of probability, but it is computationally expensive, especially when dealing with rare events. An alternative is to use variance reduction techniques, known for requiring fewer computations to achieve the same accuracy. Most of these methods have thus far been proposed for specific settings in which the RVs belong to particular classes of distributions. In this paper, we propose a generalization of the well-known hazard rate twisting Importance Sampling approach that has the advantage of being logarithmically efficient for arbitrary sums of RVs. The wide applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this feature is rarely satisfied by variance reduction algorithms, whose performance is often proven only under restrictive assumptions. The method also achieves good efficiency, illustrated by selected simulation results comparing its performance with existing techniques.
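The contrast between naive Monte Carlo and hazard-rate twisting can be sketched in the simplest tractable case, i.i.d. exponential variates, where twisting the hazard rate by a factor (1 - θ) just yields another exponential with a reduced rate and an explicit likelihood ratio. The rule for choosing θ below (matching the twisted mean to the threshold) is an illustrative assumption, not the paper's general selection rule.

```python
import math, random

# Hazard-rate twisting IS sketch for P(X1 + ... + Xn > gamma),
# Xi ~ i.i.d. Exp(lam). Twisting the hazard lam by (1 - theta) gives
# samples from Exp(lam*(1-theta)); the likelihood ratio for a sample
# with sum s is exp(-lam*theta*s) / (1-theta)^n.

def naive_mc(n, lam, gamma, samples, rng):
    hits = sum(sum(rng.expovariate(lam) for _ in range(n)) > gamma
               for _ in range(samples))
    return hits / samples

def twisted_is(n, lam, gamma, samples, rng):
    theta = max(0.0, 1.0 - n / (lam * gamma))   # twisted mean ~ gamma (assumption)
    lam_t = lam * (1.0 - theta)                 # twisted (reduced) rate
    total = 0.0
    for _ in range(samples):
        s = sum(rng.expovariate(lam_t) for _ in range(n))
        if s > gamma:
            total += math.exp(-lam * theta * s) / (1.0 - theta) ** n
    return total / samples

rng = random.Random(1)
# rare event: P(sum of 5 Exp(1) > 20) ~ 1.7e-5; naive MC sees few hits
print(naive_mc(5, 1.0, 20.0, 100000, rng))
print(twisted_is(5, 1.0, 20.0, 100000, rng))
```

Under the twisted measure the threshold crossing is no longer rare, so nearly every sample contributes, which is the source of the variance reduction.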
FENICIA: a generic plasma simulation code using a flux-independent field-aligned coordinate approach
International Nuclear Information System (INIS)
Hariri, Farah
2013-01-01
The primary thrust of this work is the development and implementation of a new approach to the problem of field-aligned coordinates in magnetized plasma turbulence simulations called the FCI approach (Flux-Coordinate Independent). The method exploits the elongated nature of micro-instability driven turbulence, which typically has perpendicular scales on the order of a few ion gyro-radii, and parallel scales on the order of the machine size. Mathematically speaking, it relies on local transformations that align a suitable coordinate to the magnetic field to allow efficient computation of the parallel derivative. However, it does not rely on flux coordinates, which permits discretizing any given field on a regular grid in the natural coordinates such as (x, y, z) in the cylindrical limit. The new method has a number of advantages over methods constructed starting from flux coordinates, allowing for more flexible coding in a variety of situations including X-point configurations. In light of these findings, a plasma simulation code FENICIA has been developed based on the FCI approach with the ability to tackle a wide class of physical models. The code has been verified on several 3D test models. The accuracy of the approach is tested in particular with respect to the question of spurious radial transport. Tests on 3D models of the drift wave propagation and of the Ion Temperature Gradient (ITG) instability in cylindrical geometry in the linear regime demonstrate again the high quality of the numerical method. Finally, the FCI approach is shown to be able to deal with an X-point configuration such as one with a magnetic island with good convergence and conservation properties. (author)
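The core FCI operation (evaluate the parallel derivative by following the field line to the neighboring planes and interpolating on the regular grid there) can be sketched in a straight-field slab. The geometry, grid sizes, and test function below are illustrative assumptions; FENICIA's actual discretization is more general.

```python
import numpy as np

# Minimal FCI sketch in a slab with a straight field B = e_z + by*e_y:
# f lives on a regular (y, z) grid, and the parallel derivative is a
# centered difference between values interpolated where the field line
# pierces the two neighboring z-planes (shift of +/- by*dz in y).
# Geometry and the field-aligned test function are assumptions.

ny, nz, by = 64, 20, 1.0
Ly, Lz = 2 * np.pi, 2 * np.pi          # by*Lz = 2*pi keeps f z-periodic
y = np.linspace(0, Ly, ny, endpoint=False)
z = np.linspace(0, Lz, nz, endpoint=False)
dz = Lz / nz
Y, Z = np.meshgrid(y, z, indexing="ij")

f = np.sin(Y - by * Z)                 # constant along field lines

def interp_y(plane, y_target):
    """Periodic linear interpolation of one z-plane at shifted y positions."""
    s = (y_target % Ly) / (Ly / ny)
    i0 = np.floor(s).astype(int) % ny
    w = s - np.floor(s)
    return (1 - w) * plane[i0] + w * plane[(i0 + 1) % ny]

dpar = np.empty_like(f)
for k in range(nz):
    fp = interp_y(f[:, (k + 1) % nz], y + by * dz)   # footpoint one plane ahead
    fm = interp_y(f[:, (k - 1) % nz], y - by * dz)   # footpoint one plane back
    dpar[:, k] = (fp - fm) / (2 * dz)                # centered difference along B

# spurious parallel derivative of a field-aligned perturbation (should be ~0)
print(np.max(np.abs(dpar)))
```

A field-aligned perturbation gives a parallel derivative near machine precision of the interpolation error, which is exactly the "spurious transport" diagnostic the abstract mentions.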
A simulation-based approach for solving assembly line balancing problem
Wu, Xiaoyu
2017-09-01
The assembly line balancing problem is directly related to production efficiency; it has been discussed since the last century, and many people are still studying the topic. In this paper, the assembly line problem is studied by establishing a mathematical model and simulation. First, a model for determining the smallest production beat (cycle time) for a given number of workstations is analyzed. Based on this model, an exponential smoothing approach is applied to improve the efficiency of the algorithm. After this groundwork, the balancing of a gas Stirling engine assembly line is discussed as a case study. Both algorithms are implemented in the Lingo programming environment, and the simulation results demonstrate the validity of the new methods.
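The dual of the model above (fit tasks into stations under a fixed cycle time) is often solved with simple greedy heuristics. The sketch below uses the largest-candidate rule on a made-up five-task precedence graph; the task data are illustrative assumptions, not the paper's Stirling engine case.

```python
# Greedy assembly line balancing sketch (largest-candidate rule):
# given task times and precedence constraints, pack tasks into the
# fewest stations whose total work fits a target cycle time.
# The five-task example is an illustrative assumption.

def balance(times, preds, cycle_time):
    assert all(t <= cycle_time for t in times.values())
    done, stations = set(), []
    while len(done) < len(times):
        load, station = 0.0, []
        while True:
            # tasks whose predecessors are done and that still fit
            ready = [t for t in times if t not in done
                     and preds.get(t, set()) <= done
                     and load + times[t] <= cycle_time]
            if not ready:
                break
            t = max(ready, key=lambda t: times[t])   # largest-candidate rule
            station.append(t)
            done.add(t)
            load += times[t]
        stations.append(station)
    return stations

times = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 4}
preds = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": {"D"}}
for i, s in enumerate(balance(times, preds, cycle_time=6), 1):
    print("station", i, s)
```

Sweeping `cycle_time` over a range and keeping the smallest value that yields the target station count recovers the "smallest beat for a given number of stations" problem the paper models.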
Rached, Nadhir B.
2016-01-06
The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. Evaluating the OC when equal gain combining (EGC) or maximum ratio combining (MRC) diversity techniques are employed boils down to computing the cumulative distribution function (CDF) of the sum of channel envelopes (amplitudes) for EGC, or of channel gains (squared envelopes/amplitudes) for MRC. Closed-form expressions for the CDF of the sum of many generalized fading variates are generally unknown and constitute open problems. We develop a unified hazard rate twisting Importance Sampling (IS) based approach to efficiently estimate the CDF of the sum of independent arbitrary variates. The proposed IS estimator is shown to achieve an asymptotic optimality criterion, which guarantees its efficiency. Selected simulation results illustrate the substantial computational gain achieved by the proposed IS scheme over crude Monte Carlo simulations.
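The quantity being estimated can be made concrete in the one fading model where a closed form exists: L-branch MRC over i.i.d. Rayleigh fading, where each channel gain is Exp(1) and the sum is Gamma(L, 1). The crude Monte Carlo baseline below is the benchmark the IS scheme is compared against; branch count and threshold are illustrative assumptions.

```python
import math, random

# Crude Monte Carlo estimate of the outage probability, i.e. the CDF of
# the sum of channel gains, for L-branch MRC over i.i.d. Rayleigh fading
# (each gain |h|^2 ~ Exp(1)). In this special case the sum is Gamma(L, 1),
# so a closed form is available to validate against.

def outage_mc(L, threshold, samples, rng):
    hits = sum(sum(rng.expovariate(1.0) for _ in range(L)) < threshold
               for _ in range(samples))
    return hits / samples

def outage_exact(L, threshold):
    # P(Gamma(L,1) < x) = 1 - exp(-x) * sum_{k<L} x^k / k!
    return 1.0 - math.exp(-threshold) * sum(threshold ** k / math.factorial(k)
                                            for k in range(L))

rng = random.Random(7)
L, x = 4, 2.0
print(outage_mc(L, x, 200000, rng), outage_exact(L, x))
```

For generalized fading (Nakagami-m, log-normal, mixtures) no such closed form exists and the left tail becomes rare, which is where the hazard-rate twisting IS estimator replaces this crude loop.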
Directory of Open Access Journals (Sweden)
José Francisco Gómez Aguilar
2012-07-01
Using the fractional calculus approach, we present the Laplace analysis of an equivalent electrical circuit for a multilayered system, which includes distributed elements of the Cole-model type. The Bode plots are obtained from the numerical simulation of the corresponding transfer functions, using arbitrary electrical parameters to illustrate the methodology. A numerical Laplace transform is used for the simulation of the fractional differential equations. From the results of the analysis, we obtain the formula for the equivalent electrical circuit of a simple spectrum, such as that generated by a real sample of blood tissue, and the corresponding Nyquist diagrams. In addition to maintaining consistency in the adjusted electrical parameters, the advantage of using fractional differential equations in the study of impedance spectra is made clear in the analysis used to determine a compact formula for th
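The frequency response of the basic building block, a single Cole-type element with fractional order alpha, can be evaluated directly. The parameter values below are illustrative assumptions, not values fitted to blood tissue.

```python
import numpy as np

# Sketch of the frequency response of one Cole-type element,
#   Z(jw) = R_inf + (R0 - R_inf) / (1 + (jw*tau)^alpha),
# the distributed fractional-order building block of the equivalent
# circuit discussed above. Parameter values are illustrative assumptions.

R0, Rinf, tau, alpha = 1000.0, 100.0, 1e-3, 0.8

w = np.logspace(0, 7, 200)                       # angular frequency, rad/s
Z = Rinf + (R0 - Rinf) / (1 + (1j * w * tau) ** alpha)

mag_db = 20 * np.log10(np.abs(Z))                # Bode magnitude
phase_deg = np.degrees(np.angle(Z))              # Bode phase

# low- and high-frequency asymptotes recover R0 and R_inf respectively
print(abs(Z[0]), abs(Z[-1]))
```

With alpha = 1 this reduces to an ordinary RC relaxation; 0 < alpha < 1 produces the depressed arc in the Nyquist plane that motivates the fractional-calculus treatment.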