Sample records for spectral simulation approach

  1. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier


    Advances in high-resolution airborne imaging sensors, including those on board Unmanned Aerial Vehicles, demand automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach that minimizes an energy function; its main contribution is the ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination as well as some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
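The combination idea can be sketched as a deterministic (mean-field) annealing iteration over per-pixel class probabilities. The energy below, with a toy 1D neighbour-smoothness term, is a hypothetical stand-in for the authors' formulation, not their code:

```python
import numpy as np

def combine_dsa(p_bayes, p_fuzzy, beta=1.0, n_iter=50, t0=4.0, cooling=0.9):
    """Toy deterministic (mean-field) annealing combination of two
    per-pixel class-probability maps (shape: n_pixels x n_classes),
    with a smoothness term coupling neighbouring pixels in 1D.
    Hypothetical energy, not the authors' formulation."""
    q = (p_bayes + p_fuzzy) / 2.0                     # initial soft labels
    data = -(np.log(p_bayes + 1e-12) + np.log(p_fuzzy + 1e-12))
    t = t0
    for _ in range(n_iter):
        neigh = np.roll(q, 1, axis=0) + np.roll(q, -1, axis=0)
        e = data - beta * neigh                       # data + smoothness energy
        e -= e.min(axis=1, keepdims=True)             # numerical stability
        q = np.exp(-e / t)                            # soft assignment at temperature t
        q /= q.sum(axis=1, keepdims=True)
        t *= cooling                                  # annealing schedule
    return q.argmax(axis=1)
```

Lowering the temperature gradually sharpens the soft assignment, which is what lets the scheme escape poor local minima that a greedy relabeling would fall into.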

  2. Effect of spectral cross-correlation on multiaxial fatigue damage: simulations using the critical plane approach

    Directory of Open Access Journals (Sweden)

    Andrea Carpinteri


    The present paper discusses a frequency-domain multiaxial fatigue criterion based on the critical plane approach, suitable for fatigue life estimation in the presence of proportional and non-proportional random loading. The criterion consists of three steps: definition of the critical plane, Power Spectral Density (PSD) evaluation of an equivalent normal stress, and estimation of fatigue damage. This frequency-domain criterion has recently been validated against experimental data available in the literature for combined proportional and non-proportional bending and torsion random loading, with quite satisfactory agreement. To further validate the criterion, numerical simulations are herein performed employing a wide set of combined bending and torsion signals. Each signal is described by an ergodic, stationary, Gaussian stochastic process with zero mean value. The spectrum of each signal is assumed to be a PSD function of rectangular shape. Different values of correlation degree, variance, and spectral content are examined.
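The damage-estimation step can be sketched from the spectral moments of a stress PSD under the classical narrow-band (Rayleigh) assumption. Function names and the S-N curve parameters below are illustrative, not taken from the paper:

```python
import numpy as np
from math import gamma, sqrt

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

def spectral_moments(freqs, psd, orders=(0, 1, 2, 4)):
    """Spectral moments m_i = integral of f^i G(f) df of a one-sided stress PSD."""
    return {i: _trapz(psd * freqs**i, freqs) for i in orders}

def narrowband_damage_rate(m, k, C):
    """Expected fatigue damage per second for an S-N curve N = C / S^k,
    narrow-band Gaussian stress (Rayleigh-distributed amplitudes)."""
    nu0 = sqrt(m[2] / m[0])               # expected up-crossing rate
    return nu0 / C * (sqrt(2 * m[0]))**k * gamma(1 + k / 2)
```

For the rectangular PSDs used in the paper's simulations, the moments have closed forms, so this numerical version is easy to cross-check.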

  3. Simulating high-frequency seismograms in complicated media: A spectral approach

    International Nuclear Information System (INIS)

    Orrey, J.L.; Archambeau, C.B.


    The main attraction of using a spectral method instead of a conventional finite difference or finite element technique for full-wavefield forward modeling in elastic media is the increased accuracy of a spectral approximation. While a finite difference method accurate to second order typically requires 8 to 10 computational grid points to resolve the smallest wavelengths on a 1-D grid, a spectral method that approximates the wavefield by trigonometric functions theoretically requires only 2 grid points per minimum wavelength and produces no numerical dispersion from the spatial discretization. The resultant savings in computer memory, which is very significant in 2 and 3 dimensions, allows for larger scale and/or higher frequency simulations.
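The resolution claim can be checked numerically: on the same coarse periodic grid, Fourier differentiation resolves a wave that a second-order centred difference badly misrepresents. This is a sketch, not the authors' code:

```python
import numpy as np

def fd2_derivative(u, dx):
    """Second-order centred finite difference on a periodic grid."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def spectral_derivative(u, dx):
    """Fourier (trigonometric) differentiation on the same periodic grid."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

n, L = 16, 2 * np.pi
x = np.arange(n) * L / n
u = np.sin(3 * x)                  # about 5 grid points per wavelength
exact = 3 * np.cos(3 * x)
err_fd = np.max(np.abs(fd2_derivative(u, L / n) - exact))
err_sp = np.max(np.abs(spectral_derivative(u, L / n) - exact))
```

On this grid the finite-difference error is of order one, while the spectral derivative is accurate to machine precision, which is the memory argument made above in miniature.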

  4. Spectral element simulation of ultrafiltration

    DEFF Research Database (Denmark)

    Hansen, M.; Barker, Vincent A.; Hassager, Ole


    A spectral element method for simulating stationary 2-D ultrafiltration is presented. The mathematical model comprises the Navier-Stokes equations for the velocity field of the fluid and a transport equation for the concentration of the solute. In addition to the presence of the velocity vector in the transport equation, the system is coupled by the dependency of the fluid viscosity on the solute concentration and by a concentration-dependent boundary condition for the Navier-Stokes equations at the membrane surface. The spectral element discretization yields a nonlinear algebraic system. The performance of the spectral element code when applied to several ultrafiltration problems is reported. (C) 1998 Elsevier Science Ltd. All rights reserved.

  5. Spectral methods in numerical plasma simulation

    International Nuclear Information System (INIS)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.


    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
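A minimal FFT-based Poisson solve on the doubly periodic region mentioned above can be written in a few lines (the paper's annulus algorithm is more involved; this sketch covers only the periodic Fourier case):

```python
import numpy as np

def poisson_periodic(rho, L=2 * np.pi):
    """Solve laplacian(phi) = rho on a doubly periodic square via FFT.
    The k = 0 mode (mean of phi) is set to zero, which requires rho
    to have zero mean."""
    n = rho.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid division by zero
    phi_hat = -np.fft.fft2(rho) / k2     # divide each Fourier mode by -|k|^2
    phi_hat[0, 0] = 0.0                  # fix the arbitrary constant
    return np.real(np.fft.ifft2(phi_hat))
```

For smooth periodic data this solver is spectrally accurate and costs only two FFTs.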

  6. Spectral Methods in Numerical Plasma Simulation

    DEFF Research Database (Denmark)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.


    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed.

  7. Assessment of the accuracy of snow surface direct beam spectral albedo under a variety of overcast skies derived by a reciprocal approach through radiative transfer simulation. (United States)

    Li, Shusun; Zhou, Xiaobing


    Radiative transfer simulations suggest that stable estimates of the highly anisotropic direct beam spectral albedo of a snow surface can be derived reciprocally under a variety of overcast skies. An accuracy of +/- 0.008 is achieved over a range of solar zenith angles theta0 for snow surface albedo in the polar regions, where direct measurement of clear-sky surface albedo is limited to large theta0's only. The enhancement will assist in the validation of snow surface albedo models and improve the representation of polar surface albedo in global circulation models.

  8. Novel Approaches to the Spectral and Colorimetric Color Reproduction (United States)

    Maali Amiri, Morteza

    All the different approaches to spectral data acquisition can be narrowed down to two main methods. The first uses a spectrophotometer, spectroradiometer, or hyper- and multi-spectral camera, through which the spectra can be obtained directly and with a high level of accuracy; however, the price at which the spectra are acquired is very high. In the second approach, the spectra are estimated from colorimetric information. This second approach, though very cost-efficient, offers limited accuracy, which could be due to the methods used or to the dissimilarity of the learning and testing samples. In this work, by looking at spectral estimation in a different way, we attempt to enhance the accuracy of spectral estimation procedures by associating the spectral recovery process with the spectral sensitivity variability present both in different human observers and in RGB cameras. The work is split into two main sections: theory and practice. In the first section, theory, the main idea of the thesis is examined through simulation, using different observers' color matching functions (CMFs) obtained from Asano's vision model and different cameras' spectral sensitivities obtained from an open database. The second part of the work puts the major idea of the thesis into use and comprises three subsections. In the first subsection, real cameras and cellphones are used. In the second subsection, using weighted regression, the idea presented in this work is extended to a series of studies in which spectra are estimated from their corresponding CIEXYZ tristimulus values. In the last subsection, observers' colorimetric responses are simulated using color matching. Finally, it is shown that the methods presented in this work have great potential to rival even multi-spectral cameras, whose equipment could be as expensive as a

  9. Constellation modulation - an approach to increase spectral efficiency. (United States)

    Dash, Soumya Sunder; Pythoud, Frederic; Hillerkuss, David; Baeuerle, Benedikt; Josten, Arne; Leuchtmann, Pascal; Leuthold, Juerg


    Constellation modulation (CM) is introduced as a new degree of freedom to increase the spectral efficiency and to further approach the Shannon limit. Constellation modulation is the art of encoding information not only in the symbols within a constellation but also by selecting a constellation from a set of constellations that are switched from time to time. The set of constellations is not limited to sets of partitions of a given constellation but can, e.g., be obtained from an existing constellation by applying geometrical transformations such as rotations, translations, scaling, or even more abstract transformations. The architecture of the transmitter and the receiver allows constellation modulation to be used on top of existing modulations with little penalty on the bit-error ratio (BER) or on the required signal-to-noise ratio (SNR). The spectral bandwidth used by this modulation scheme is identical to that of the original modulation. Simulations demonstrate a particular advantage of the scheme in low-SNR situations. For instance, it is demonstrated by simulation that spectral efficiency increases of up to 33% and 20% can be obtained at BERs of 10⁻³ and 2×10⁻², respectively, for a regular BPSK modulation format. Applying constellation modulation, we derive a most power-efficient 4D-CM-BPSK modulation format that provides a spectral efficiency of 0.7 bit/s/Hz at an SNR of 0.2 dB and a BER of 2×10⁻².
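The core idea (an extra bit per block carried by the choice of constellation) can be sketched for BPSK with a 90-degree rotated alternative. This toy encoder/decoder is illustrative only and is not the authors' scheme:

```python
import numpy as np

def cm_transmit(bits, sel_bit):
    """Toy constellation modulation: one extra bit per block selects
    between BPSK on the real axis and BPSK rotated by 90 degrees."""
    sym = 2 * np.asarray(bits) - 1.0            # BPSK mapping 0/1 -> -1/+1
    return sym * (1j if sel_bit else 1.0)       # rotate whole block if sel_bit

def cm_receive(rx):
    """Recover the selection bit from the dominant axis, then the data."""
    sel = int(np.sum(np.abs(rx.imag)) > np.sum(np.abs(rx.real)))
    sym = rx.imag if sel else rx.real
    return (sym > 0).astype(int).tolist(), sel
```

Note the bandwidth is unchanged: the block still contains one BPSK symbol per signalling interval; the extra information rides on which constellation the block uses.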

  10. Spectral Synthesis via Mean Field approach to Independent Component Analysis

    International Nuclear Information System (INIS)

    Hu, Ning; Su, Shan-Shan; Kong, Xu


    We apply a new statistical analysis technique, the Mean Field approach to Independent Component Analysis (MF-ICA) in a Bayesian framework, to galaxy spectral analysis. This algorithm can compress a stellar spectral library into a few Independent Components (ICs), and the galaxy spectrum can be reconstructed by these ICs. Compared to other algorithms which decompose a galaxy spectrum into a combination of several simple stellar populations, the MF-ICA approach offers a large improvement in efficiency. To check the reliability of this spectral analysis method, three different methods are used: (1) parameter recovery for simulated galaxies, (2) comparison with parameters estimated by other methods, and (3) consistency test of parameters derived with galaxies from the Sloan Digital Sky Survey. We find that our MF-ICA method can not only fit the observed galaxy spectra efficiently, but can also accurately recover the physical parameters of galaxies. We also apply our spectral analysis method to the DEEP2 spectroscopic data, and find it can provide excellent fitting results for low signal-to-noise spectra. (paper)

  11. Fourier spectral simulations for wake fields in conducting cavities

    International Nuclear Information System (INIS)

    Min, M.; Chin, Y.-H.; Fischer, P.F.; Chae, Y.-Chul; Kim, K.-J.


    We investigate Fourier spectral time-domain simulations applied to wake field calculations in two-dimensional cylindrical structures. The scheme involves second-order explicit leap-frogging in time and Fourier spectral approximation in space, obtained by simply replacing the spatial differentiation operator of the Yee scheme by the Fourier differentiation operator on nonstaggered grids. This is a first step toward investigating high-order computational techniques with the Fourier spectral method, which is relatively simple to implement.
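The scheme (leap-frog in time, Fourier differentiation in space on a non-staggered grid) can be sketched on the 1D wave equation. The function name and test problem are hypothetical, chosen only to show the time-stepping structure:

```python
import numpy as np

def wave_leapfrog_spectral(u0, c, dx, dt, steps):
    """1D wave equation u_tt = c^2 u_xx on a periodic domain:
    second-order leap-frog in time, Fourier differentiation in space.
    Initial velocity is taken as zero."""
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

    def uxx(u):
        # spectral second derivative: multiply each mode by -k^2
        return np.real(np.fft.ifft(-(k**2) * np.fft.fft(u)))

    # Taylor start-up step consistent with u_t(0) = 0
    u_prev, u = u0.copy(), u0 + 0.5 * (c * dt)**2 * uxx(u0)
    for _ in range(steps - 1):
        u_prev, u = u, 2 * u - u_prev + (c * dt)**2 * uxx(u)
    return u
```

A standing wave sin(x) should evolve to cos(c t) sin(x), which gives a convenient accuracy check for the leap-frog phase error.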

  12. Simulating performance of solar cells with spectral downshifting layers

    NARCIS (Netherlands)

    van Sark, W.G.J.H.M.


    In order to estimate the performance of solar cells with downshifters under realistic irradiation conditions we used spectral distributions as they may be found outdoors. The spectral distributions were generated on a minutely basis by means of the spectrum simulation model SEDES2, using minutely

  13. Spectral element filtering techniques for large eddy simulation with dynamic estimation

    CERN Document Server

    Blackburn, H M


    Spectral element methods have previously been successfully applied to direct numerical simulation of turbulent flows with moderate geometrical complexity and low to moderate Reynolds numbers. A natural extension of application is to large eddy simulation of turbulent flows, although there has been little published work in this area. One of the obstacles to such application is the ability to deal successfully with turbulence modelling in the presence of solid walls in arbitrary locations. An appropriate tool with which to tackle the problem is dynamic estimation of turbulence model parameters, but while this has been successfully applied to simulation of turbulent wall-bounded flows, typically in the context of spectral and finite volume methods, there have been no published applications with spectral element methods. Here, we describe approaches based on element-level spectral filtering, couple these with the dynamic procedure, and apply the techniques to large eddy simulation of a prototype wall-bounded turb...

  14. Order and correlations in genomic DNA sequences. The spectral approach

    International Nuclear Information System (INIS)

    Lobzin, Vasilii V; Chechetkin, Vladimir R


    The structural analysis of genomic DNA sequences is discussed in the framework of the spectral approach, which is sufficiently universal due to the reciprocal correspondence and mutual complementarity of Fourier transform length scales. The spectral characteristics of random sequences of the same nucleotide composition possess the property of self-averaging for relatively short sequences of length M≥100-300. Comparison with the characteristics of random sequences determines the statistical significance of the structural features observed. Apart from traditional applications to the search for hidden periodicities, spectral methods are also efficient in studying mutual correlations in DNA sequences. By combining spectra for structure factors and correlation functions, not only can integral correlations be estimated but their origin can also be identified. Using the structural spectral entropy approach, the regularity of a sequence can be quantitatively assessed. A brief introduction to the problem is also presented and other major methods of DNA sequence analysis are described. (reviews of topical problems)
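In its simplest form, the structure-factor idea reduces to the power spectrum of a nucleotide indicator sequence; a hidden period-3 pattern (typical of coding regions) appears as a single dominant peak. The toy sequence below is illustrative:

```python
import numpy as np

def structure_factor(seq, letter):
    """Power spectrum of the 0/1 indicator sequence of one nucleotide;
    the mean is removed so the k = 0 composition peak does not dominate."""
    u = np.array([1.0 if s == letter else 0.0 for s in seq])
    u -= u.mean()
    return np.abs(np.fft.rfft(u))**2 / len(u)
```

For a sequence of length n with exact period 3, the 'A' indicator spectrum peaks at index n/3 (frequency 1/3 per base); statistical significance would then be judged against spectra of shuffled sequences of the same composition, as described above.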

  15. Sensitive detection of aerosol effect on simulated IASI spectral radiance

    International Nuclear Information System (INIS)

    Quan, X.; Huang, H.-L.; Zhang, L.; Weisz, E.; Cao, X.


    Guided by radiative transfer modeling of the effects of dust aerosol on the thermal infrared radiance measured by many different imaging radiometers, in this article we present the aerosol-induced changes in the satellite radiative signal at the top of the atmosphere (TOA). TOA radiances for the Infrared Atmospheric Sounding Interferometer (IASI) are simulated using the RTTOV fast radiative transfer model, with representative geographical atmospheric models and default aerosol climatological models under clear-sky conditions. The radiative differences (in units of equivalent blackbody brightness temperature differences, BTDs) between radiances simulated without aerosol (aerosol-free) and with various aerosol models (aerosol-modified) are calculated for the whole IASI spectrum between 3.62 and 15.5 μm. The BTD comparisons cover 11 aerosol models in 5 classified atmospheric models. The results show that the Desert aerosol model has a more significant impact on simulated IASI spectral radiances than the other aerosol models (Continental, Urban, Maritime types and so on) in Mid-latitude Summer, owing to the mineral aerosol components it contains; BTDs can reach up to 1 K at peak points. The largest aerosol-induced radiance differences are concentrated in the atmospheric window region between 900 and 1100 cm⁻¹ (9.09–11.11 μm), and BTDs in the IASI spectral region between 645 and 1200 cm⁻¹ show the largest oscillation and occupy the major part of the whole spectrum. The IASI window peak-point channels (such as 9.4 and 10.2 μm) are identified as the most sensitive to the simulated IASI radiance.

  16. Spectral similarity approach for mapping turbidity of an inland waterbody (United States)

    Garg, Vaibhav; Senthil Kumar, A.; Aggarwal, S. P.; Kumar, Vinay; Dhote, Pankaj R.; Thakur, Praveen K.; Nikam, Bhaskar R.; Sambare, Rohit S.; Siddiqui, Asfa; Muduli, Pradipta R.; Rastogi, Gurdeep


    Turbidity is an important quality parameter of water from an optical point of view. It varies spatio-temporally over large waterbodies, and well-distributed field measurement of it is tedious and time consuming. Generally, the normalized difference turbidity index (NDTI), band ratios, or regression analysis between turbidity concentration and band reflectance have been adopted to retrieve turbidity from multispectral remote sensing data. These techniques usually provide qualitative rather than quantitative estimates of turbidity. In the present study, however, spectral similarity analysis between spaceborne hyperspectral remote sensing data and a spectral library generated in the field was carried out to quantify turbidity in part of Chilika Lake, Odisha, India. The spectral angle mapper (SAM) technique, a spatial-spectral contextual image analysis method, was evaluated for this purpose. SAM spectral matching has been widely used in geological applications (mineral mapping); however, its application in water quality studies has been limited by the non-availability of reference spectral libraries. A spectral library was generated in the field for different concentrations of turbidity using well-calibrated instruments such as a field spectroradiometer, a turbidity meter, and a hand-held global positioning system. The field spectra were classified into seven turbidity concentration classes for analysis. The analysis reveals that at each location in the lake under consideration, the field spectra matched the image spectra with a SAM score of 0.8 or more. The observed turbidity at each location also fell within the estimated turbidity class range. It was observed that the spectral similarity approach provides a more quantitative estimate of turbidity than NDTI.
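The SAM matching step itself is a small computation: the angle between a pixel spectrum and each library spectrum, with the smallest angle winning. The class names and spectra below are made up for illustration:

```python
import numpy as np

def spectral_angle(r, t):
    """Spectral angle (radians) between an image spectrum r and a
    reference (field library) spectrum t; smaller = better match."""
    cos = np.dot(r, t) / (np.linalg.norm(r) * np.linalg.norm(t))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def best_match(pixel, library):
    """Return the library class whose spectrum makes the smallest angle."""
    return min(library, key=lambda cls: spectral_angle(pixel, library[cls]))
```

Because the angle ignores overall magnitude, SAM is insensitive to illumination scaling, which is one reason it transfers well from field spectra to image spectra.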

  17. Pulse Analysis Spectroradiometer System for Measuring the Spectral Distribution of Flash Solar Simulators: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Andreas, A. M.; Myers, D. R.


    Flashing artificial light sources are used extensively in photovoltaic module performance testing and plant production lines. There are several means of attempting to measure the spectral distribution of a flash of light; however, many of these approaches capture only the energy integrated over the entire pulse. We report here on the design and performance of a system that captures the waveform of the flash at individual wavelengths. Any period within the flash duration can be selected over which to integrate the flux at each wavelength. The resulting spectral distribution is compared with the reference spectrum, yielding a solar simulator classification.

  18. Directional and Spectral Irradiance in Ocean Models: Effects on Simulated Global Phytoplankton, Nutrients, and Primary Production (United States)

    Gregg, Watson W.; Rousseaux, Cecile S.


    The importance of including directional and spectral light in simulations of ocean radiative transfer was investigated using a coupled biogeochemical-circulation-radiative model of the global oceans. The effort focused on phytoplankton abundances, nutrient concentrations and vertically-integrated net primary production. The effect was assessed by sequentially removing directional (i.e., direct vs. diffuse) and spectral irradiance and comparing results for the above variables to a fully directionally and spectrally resolved model. In each case the total irradiance was kept constant; only the pathways and spectral nature were changed. Assuming all irradiance was diffuse had negligible effect on global ocean primary production. Global nitrate and total chlorophyll concentrations declined by about 20% each. The largest changes occurred in the tropics and sub-tropics rather than the high latitudes, where most of the irradiance is already diffuse. Disregarding spectral irradiance had effects that depended upon the choice of attenuation wavelength. The wavelength closest to the spectrally resolved model, 500 nm, produced lower nitrate (19%) and chlorophyll (8%) and higher primary production (2%) than the spectral model. Phytoplankton relative abundances were very sensitive to the choice of non-spectral wavelength transmittance. The combined effects of neglecting both directional and spectral irradiance exacerbated the differences, despite using attenuation at 500 nm. Global nitrate decreased 33% and chlorophyll decreased 24%. Changes in phytoplankton community structure were considerable, representing a change from chlorophytes to cyanobacteria and coccolithophores. This suggested a shift in community function, from light-limitation to nutrient limitation: lower demands for nutrients from cyanobacteria and coccolithophores favored them over the more nutrient-demanding chlorophytes. Although diatoms have the highest nutrient demands in the model, their

  19. A spectral Poisson solver for kinetic plasma simulation (United States)

    Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf


    Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized, geometrically simplified version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development that is capable of simulating the dynamics of the plasma surrounding the MRP in the electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need to introduce a spatial discretization.

  20. Approach to simulation effectiveness

    CSIR Research Space (South Africa)

    Goncalves, DPD


    Engineering is about making decisions. In all the problems listed above, the quality of decisions made based on models and simulations is compromised in some way. Undesirable outcomes follow poor decisions. The concept of decision quality has been considered..., resolution, or uncertainty, depending on the problem at hand. In the case of error, a typical metric might be mean square error. The concepts presented are illustrated using a digital elevation map (DEM) as an example. The resolution of the model...

  1. Multiple Spectral-Spatial Classification Approach for Hyperspectral Data (United States)

    Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.


    A new multiple classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies compared to previously proposed classification techniques.

  2. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches. (United States)

    Lowet, Eric; Roberts, Mark J; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter


    Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information
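A minimal PLV computation via the FFT-based analytic signal is shown below (without the SSD pre-processing step used in the paper, so only a sketch of the final stage):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction:
    zero the negative frequencies, double the positive ones."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value from instantaneous phase differences:
    1 for perfect locking, near 0 for drifting phases."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))
```

Unlike coherence, this quantity depends only on the phase difference distribution, not on amplitude fluctuations, which is the property the abstract argues matters for intermittently synchronized oscillators.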

  3. Digital simulation of staining in histopathology multispectral images: enhancement and linear transformation of spectral transmittance. (United States)

    Bautista, Pinky A; Yagi, Yukako


    Hematoxylin and eosin (H&E) stain is currently the most popular stain for routine histopathology. Special and/or immunohistochemical (IHC) staining is often requested to further corroborate the initial diagnosis made on H&E-stained tissue sections. Digital simulation of staining (digital staining) can be a very valuable tool to produce the desired stained images from H&E-stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images that combines spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of a multispectral pixel by the weighted difference between the pixel's original and estimated spectrum, the latter estimated from M components. The enhanced spectrum is then transformed to the spectral configuration associated with its reaction to a specific stain by an N × N transformation matrix, derived by applying the least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E-stained multispectral image to its Masson's trichrome-stained equivalent show the viability of the method.
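The transformation-matrix step can be sketched as an ordinary least-squares fit between source and target transmittance samples. This is only a stand-in for the paper's procedure (which fits enhanced spectra with least mean squares); names and sample data are hypothetical:

```python
import numpy as np

def staining_transform(src, dst):
    """Least-squares N x N matrix T mapping source transmittance
    spectra (rows of src) to target-stain spectra (rows of dst):
    minimises || src @ T.T - dst ||_F."""
    T, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return T.T

def apply_stain(T, spectrum):
    """Digitally 're-stain' one pixel spectrum."""
    return T @ spectrum
```

Once T is fitted on a handful of annotated tissue-component samples, it is applied pixel-by-pixel to the whole multispectral image.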

  4. Blood velocity estimation using ultrasound and spectral iterative adaptive approaches

    DEFF Research Database (Denmark)

    Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt


    This paper proposes two novel iterative data-adaptive spectral estimation techniques for blood velocity estimation using medical ultrasound scanners. The techniques make no assumption on the sampling pattern of the emissions or the depth samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the techniques are shown, using both simplified and more realistic Field II simulations as well as in vivo data, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30% of the transmissions, thereby allowing for the examination of two separate vessel regions while retaining an adequate updating rate of the B-mode images. In addition, the proposed methods also allow for more flexible transmission patterns, and exhibit fewer spectral artifacts compared to earlier techniques.

  5. Planar Multipol-Resonance-Probe: A Spectral Kinetic Approach (United States)

    Friedrichs, Michael; Gong, Junbo; Brinkmann, Ralf Peter; Oberrath, Jens; Wilczek, Sebastian


    Measuring plasma parameters, e.g. electron density and electron temperature, is an important procedure for verifying the stability and behavior of a plasma process. For this purpose the multipole resonance probe (MRP) represents a satisfying solution for measuring the electron density. However, the influence of the probe on the plasma through its physical presence makes it unattractive for some industrial processes. The planar model of the MRP (pMRP) combines the benefits of the spherical MRP with the ability to integrate the probe into the plasma reactor. Introducing the spectral kinetic formalism leads to a reduced simulation cycle compared to particle-in-cell simulations. The model of the pMRP is implemented and first simulation results are presented.

  6. Numerical Methods for Stochastic Computations: A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin


    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  7. A Spectrum Detection Approach for Bearing Fault Signal Based on Spectral Kurtosis

    Directory of Open Access Journals (Sweden)

    Yunfeng Li


    Full Text Available According to the similarity between the Morlet wavelet and the fault signal, and the sensitivity of spectral kurtosis to impact signals, a new wavelet spectrum detection approach based on spectral kurtosis for bearing fault signals is proposed. This method narrows the band-pass filter range and reduces the wavelet window width significantly. As a consequence, the bearing fault signal is detected adaptively, and the time-frequency characteristics of the fault signal can be extracted accurately. The validity of this method was verified by the identification of a simulated shock signal and a test bearing fault signal. The method provides a new understanding of wavelet spectrum detection based on spectral kurtosis for rolling element bearing fault signals.
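The spectral kurtosis statistic at the heart of this record can be sketched in a few lines: for each frequency bin of a windowed short-time Fourier transform, take the normalized fourth moment of the bin magnitudes across frames. The sketch below uses synthetic data and a plain STFT rather than the paper's wavelet implementation; all names and parameter values are illustrative.

```python
import numpy as np

def spectral_kurtosis(x, frame_len=256, hop=128):
    """Spectral kurtosis of signal x: for each frequency bin of a windowed
    STFT, the normalized fourth moment of the bin magnitudes across frames.
    Stationary Gaussian noise gives SK ~ 0; impulsive content gives SK >> 0."""
    frames = np.array([x[i:i + frame_len] * np.hanning(frame_len)
                       for i in range(0, len(x) - frame_len + 1, hop)])
    X = np.fft.rfft(frames, axis=1)                 # shape (n_frames, n_bins)
    p2 = np.mean(np.abs(X) ** 2, axis=0)
    p4 = np.mean(np.abs(X) ** 4, axis=0)
    return p4 / p2 ** 2 - 2.0                       # SK per frequency bin

rng = np.random.default_rng(0)
noise = rng.standard_normal(8192)                   # healthy-bearing stand-in
faulty = noise.copy()
faulty[::1024] += 50.0                              # periodic wide-band impacts
sk_noise = spectral_kurtosis(noise)
sk_fault = spectral_kurtosis(faulty)                # elevated across the bins
```

In a filtering application, the bins (or wavelet bands) with the largest SK values would then define the band-pass region used for envelope analysis.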

  8. Spectral Subtraction Approach for Interference Reduction of MIMO Channel Wireless Systems

    Directory of Open Access Journals (Sweden)

    Tomohiro Ono


    In this paper, a generalized spectral subtraction approach for reducing additive impulsive noise, narrowband signals, white Gaussian noise and DS-CDMA interference in MIMO channel DS-CDMA wireless communication systems is investigated. Interference and noise reduction is an essential problem in wireless mobile communication systems for improving the quality of communication. The spectral subtraction scheme is applied to the interference reduction problem for noisy MIMO channel systems. Interference in the space and time domain signals can be suppressed effectively by selecting threshold values, and the computational load of the FFT is not large. Further, the fading effects of the channel are compensated by spectral modification within the spectral subtraction process. In the simulations, the effectiveness of the proposed methods for MIMO channel DS-CDMA is demonstrated by comparison with conventional MIMO channel DS-CDMA.
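As a rough single-channel illustration of the underlying idea (not the paper's MIMO DS-CDMA implementation), one spectral subtraction step is: estimate a noise magnitude spectrum, subtract it from the observed magnitude spectrum, floor the result (the thresholding step), and resynthesize with the observed phase. The signal and noise below are synthetic.

```python
import numpy as np

def spectral_subtract(noisy, noise_mag, floor=0.01):
    """Plain spectral subtraction: remove an estimated noise magnitude
    spectrum from the noisy spectrum, floor the result to avoid negative
    magnitudes, and resynthesize using the noisy phase."""
    X = np.fft.rfft(noisy)
    mag = np.maximum(np.abs(X) - noise_mag, floor * np.abs(X))
    return np.fft.irfft(mag * np.exp(1j * np.angle(X)), n=len(noisy))

rng = np.random.default_rng(1)
n = 1024
clean = np.sin(2 * np.pi * 50 * np.arange(n) / n)   # narrowband signal of interest
noisy = clean + 0.5 * rng.standard_normal(n)
# Noise magnitude estimated here from a signal-free reference segment:
noise_mag = np.abs(np.fft.rfft(0.5 * rng.standard_normal(n)))
denoised = spectral_subtract(noisy, noise_mag)

err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)        # reduced after subtraction
```

The broadband noise floor is attenuated in every bin while the strong narrowband component is barely touched, which is why the mean-squared error drops.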

  9. Spectral optimization simulation of white light based on the photopic eye-sensitivity curve

    International Nuclear Information System (INIS)

    Dai, Qi; Hao, Luoxi; Lin, Yi; Cui, Zhe


    Spectral optimization simulation of white light is studied to boost the maximum attainable luminous efficacy of radiation at high color-rendering index (CRI) and various color temperatures. The photopic eye-sensitivity curve V(λ) is utilized as the dominant portion of the white light spectrum. Emission spectra of a blue InGaN light-emitting diode (LED) and a red AlInGaP LED are added to the spectrum of V(λ) to match white color coordinates. It is demonstrated that for color temperatures from 2500 K to 6500 K and CRI above 90, such white sources can achieve a spectral efficacy of 330–390 lm/W, which is higher than previously reported theoretical maximum values. We show that this eye-sensitivity-based approach also has advantages in component energy conversion efficiency compared with previously reported optimization solutions.

  10. Spectral optimization simulation of white light based on the photopic eye-sensitivity curve

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Qi, E-mail: [College of Architecture and Urban Planning, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Institute for Advanced Study, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Key Laboratory of Ecology and Energy-saving Study of Dense Habitat (Tongji University), Ministry of Education, 1239 Siping Road, Shanghai 200092 (China); Hao, Luoxi; Lin, Yi; Cui, Zhe [College of Architecture and Urban Planning, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Key Laboratory of Ecology and Energy-saving Study of Dense Habitat (Tongji University), Ministry of Education, 1239 Siping Road, Shanghai 200092 (China)


    Spectral optimization simulation of white light is studied to boost the maximum attainable luminous efficacy of radiation at high color-rendering index (CRI) and various color temperatures. The photopic eye-sensitivity curve V(λ) is utilized as the dominant portion of the white light spectrum. Emission spectra of a blue InGaN light-emitting diode (LED) and a red AlInGaP LED are added to the spectrum of V(λ) to match white color coordinates. It is demonstrated that for color temperatures from 2500 K to 6500 K and CRI above 90, such white sources can achieve a spectral efficacy of 330–390 lm/W, which is higher than previously reported theoretical maximum values. We show that this eye-sensitivity-based approach also has advantages in component energy conversion efficiency compared with previously reported optimization solutions.
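The luminous-efficacy figure of merit used in these two records is easy to reproduce. The sketch below evaluates only the efficacy integral, LER = 683 · ∫V(λ)S(λ)dλ / ∫S(λ)dλ, using a Gaussian approximation of V(λ) rather than the CIE table, and ignores the color-coordinate matching: a spectrum shaped like V(λ) itself lands near the theoretical ceiling, which illustrates why adding the blue and red LED components needed for white chromaticity and high CRI pulls the attainable value down toward the 330–390 lm/W range reported above.

```python
import numpy as np

wl = np.arange(380.0, 781.0, 1.0)                 # wavelength grid, 1 nm steps
# Gaussian stand-in for the photopic sensitivity V(lambda): peak 555 nm,
# sigma ~42 nm. A rough fit for illustration, not the CIE 1931 table.
V = np.exp(-0.5 * ((wl - 555.0) / 42.0) ** 2)

def ler(spectrum):
    """Luminous efficacy of radiation (lm/W): 683 * int(V*S) / int(S)."""
    return 683.0 * np.sum(V * spectrum) / np.sum(spectrum)

ler_v = ler(V)                                    # V-shaped spectrum: ~483 lm/W
ler_flat = ler(np.ones_like(wl))                  # equal-energy white: ~180 lm/W
```

For a Gaussian V, the V-shaped spectrum gives exactly 683/√2 ≈ 483 lm/W, since ∫V²/∫V = 1/√2 for a Gaussian.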

  11. Investigating the feasibility of classifying breast microcalcifications using photon-counting spectral mammography: A simulation study. (United States)

    Ghammraoui, Bahaa; Glick, Stephen J


    A dual-energy material decomposition method using photon-counting spectral mammography was investigated as a non-invasive diagnostic approach to differentiate between Type I calcifications, consisting of calcium oxalate dihydrate or weddellite compounds that are more often associated with benign lesions, and Type II calcifications containing hydroxyapatite that are predominantly associated with malignant tumors. The study was carried out by numerical simulation to assess the feasibility of the proposed approach. A pencil-beam geometry was modeled, and the total number of x-rays transported through a breast embedded with microcalcifications of different types and sizes was simulated for a one-pixel detector. Material decomposition using two energy bins was then applied to characterize the simulated calcifications as hydroxyapatite or weddellite using maximum-likelihood estimation, taking into account the polychromatic source and the energy-dependent attenuation. Simulation tests were carried out for different dose levels, energy windows and calcification sizes with multiple noise realizations. The results were analyzed using receiver operating characteristic (ROC) analysis. Classification between Type I and Type II calcifications achieved by analyzing a single microcalcification showed moderate accuracy. However, simultaneously analyzing several calcifications within the cluster provided an area under the ROC curve greater than 99% for radiation doses above a 4.8 mGy mean glandular dose. Simulation results indicated that photon-counting spectral mammography with dual-energy material decomposition has the potential to be used as a non-invasive method for discrimination between Type I and Type II microcalcifications that can potentially improve early breast cancer diagnosis and reduce the number of negative breast biopsies. Additional studies using breast specimens and clinical data should be performed to further explore the feasibility of this approach.
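The core of two-bin material decomposition can be illustrated with a noiseless toy model: the log-attenuations measured in two energy bins form a 2×2 linear system in the basis-material thicknesses. The attenuation coefficients below are made-up illustrative numbers, not measured values, and the record's full method additionally uses maximum-likelihood estimation with a polychromatic source and photon noise.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) for the two
# calcification types in a low and a high energy bin -- illustrative only.
MU = np.array([[12.0, 3.0],    # low-energy bin:  [hydroxyapatite, weddellite]
               [ 5.0, 1.5]])   # high-energy bin: [hydroxyapatite, weddellite]

def decompose(counts, counts0):
    """Two-bin decomposition: from the log-attenuation in each energy bin,
    solve the linear system mu_1*t_1 + mu_2*t_2 = log(I0/I) for thicknesses."""
    logs = np.log(counts0 / counts)
    return np.linalg.solve(MU, logs)

# Forward-simulate a pure hydroxyapatite (Type II) speck, 0.02 cm thick:
t_true = np.array([0.02, 0.0])
counts0 = np.array([1e5, 1e5])             # open-beam counts per bin
counts = counts0 * np.exp(-MU @ t_true)    # Beer-Lambert attenuation
t_est = decompose(counts, counts0)         # recovers t_true exactly (no noise)
```

Classification then reduces to asking which basis-material thickness dominates the estimate; with photon noise, this becomes the maximum-likelihood problem described in the abstract.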

  12. Chebyshev matrix product state approach for spectral functions (United States)

    Holzner, Andreas; Weichselbaum, Andreas; McCulloch, Ian P.; Schollwöck, Ulrich; von Delft, Jan


    We show that recursively generated Chebyshev expansions offer numerically efficient representations for calculating zero-temperature spectral functions of one-dimensional lattice models using matrix product state (MPS) methods. The main features of this Chebyshev matrix product state (CheMPS) approach are as follows: (i) it achieves uniform resolution over the spectral function’s entire spectral width; (ii) it can exploit the fact that the latter can be much smaller than the model’s many-body bandwidth; (iii) it offers a well-controlled broadening scheme that allows finite-size effects to be either resolved or smeared out, as desired; (iv) it is based on using MPS tools to recursively calculate a succession of Chebyshev vectors |tn>, (v) the entanglement entropies of which were found to remain bounded with increasing recursion order n for all cases analyzed here; and (vi) it distributes the total entanglement entropy that accumulates with increasing n over the set of Chebyshev vectors |tn>, which need not be combined into a single vector. In this way, the growth in entanglement entropy that usually limits density matrix renormalization group (DMRG) approaches is packaged into conveniently manageable units. We present zero-temperature CheMPS results for the structure factor of spin-1/2 antiferromagnetic Heisenberg chains and perform a detailed finite-size analysis. Making comparisons to three benchmark methods, we find that CheMPS (a) yields results comparable in quality to those of correction-vector DMRG, at dramatically reduced numerical cost; (b) agrees well with Bethe ansatz results for an infinite system, within the limitations expected for numerics on finite systems; and (c) can also be applied in the time domain, where it has potential to serve as a viable alternative to time-dependent DMRG (in particular, at finite temperatures). Finally, we present a detailed error analysis of CheMPS for the case of the noninteracting resonant level model.
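The Chebyshev machinery itself — the recursion for the vectors |t_n>, the moments, and kernel damping — can be demonstrated on a toy single-particle problem where the "MPS" is just an ordinary vector. The sketch below computes a local spectral function of a tight-binding chain with the standard kernel polynomial method; the rescaling factor and Jackson damping are textbook choices, not taken from the paper.

```python
import numpy as np

# Tight-binding chain, open boundaries; eigenvalues lie in (-2, 2).
L, N = 64, 400                                  # chain length, number of moments
H = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
Ht = H / 2.2                                    # rescale spectrum into (-1, 1)

v0 = np.zeros(L); v0[L // 2] = 1.0              # local state at the chain center
# Chebyshev recursion |t_n> = 2 Ht |t_{n-1}> - |t_{n-2}>, moments mu_n = <v0|t_n>.
t_prev, t_cur = v0, Ht @ v0
mu = np.empty(N)
mu[0], mu[1] = v0 @ t_prev, v0 @ t_cur
for k in range(2, N):
    t_prev, t_cur = t_cur, 2.0 * (Ht @ t_cur) - t_prev
    mu[k] = v0 @ t_cur

# Jackson damping suppresses the Gibbs oscillations of the truncated series.
n = np.arange(N)
g = ((N - n + 1) * np.cos(np.pi * n / (N + 1))
     + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

def ldos(x):
    """KPM estimate of the local spectral function at rescaled energy x."""
    T = np.cos(n * np.arccos(x))                # Chebyshev polynomials T_n(x)
    return ((g[0] * mu[0] + 2.0 * np.dot(g[1:] * mu[1:], T[1:]))
            / (np.pi * np.sqrt(1.0 - x * x)))

xs = np.linspace(-0.99, 0.99, 2001)
vals = np.array([ldos(x) for x in xs])
weight = float(np.sum(vals) * (xs[1] - xs[0]))  # total spectral weight, ~1
```

In CheMPS the vectors |t_n> become matrix product states and each `Ht @ t_cur` an MPS-MPO application, but the recursion, moments, and broadening scheme are exactly this structure.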

  13. Local and Global Gestalt Laws: A Neurally Based Spectral Approach. (United States)

    Favali, Marta; Citti, Giovanna; Sarti, Alessandro


    This letter presents a mathematical model of figure-ground articulation that takes into account both local and global gestalt laws and is compatible with the functional architecture of the primary visual cortex (V1). The local gestalt law of good continuation is described by means of suitable connectivity kernels that are derived from Lie group theory and quantitatively compared with long-range connectivity in V1. Global gestalt constraints are then introduced in terms of spectral analysis of a connectivity matrix derived from these kernels. This analysis performs grouping of local features and individuates perceptual units with the highest salience. Numerical simulations are performed, and results are obtained by applying the technique to a number of stimuli.

  14. Spectral mismatch and solar simulator quality factor in advanced LED solar simulators (United States)

    Scherff, Maximilian L. D.; Nutter, Jason; Fuss-Kailuweit, Peter; Suthues, Jörn; Brammer, Torsten


    Solar cell simulators based on light-emitting diodes (LEDs) have the potential to achieve a large market share in the coming years. Among their advantages, they can provide a spectrum that is stable over both short and long time scales and fits very well to the global AM1.5g reference spectrum. This guarantees correct measurements during the flashes and throughout the light engines’ life span, respectively. Furthermore, a calibration with a solar cell type of different spectral response (SR), as well as the production of solar cells with varying SR between two calibrations, does not affect the correctness of the measurement result. A high-quality 21-channel LED solar simulator spectrum is compared to a former study comprising a standard modified xenon light source. It is shown that the spectrum of the 21-channel LED light source performs best for all examined cases.

  15. Calculation of isotope selective excitation of uranium isotopes using spectral simulation method

    International Nuclear Information System (INIS)

    Al-Hassanieh, O.


    Isotope ratio enhancement factor and isotope selectivity of ²³⁵U in five excitation schemes (I: 0→10069 cm⁻¹→IP; II: 0→10081 cm⁻¹→IP; III: 0→25349 cm⁻¹→IP; IV: 0→28650 cm⁻¹→IP; V: 0→16900 cm⁻¹→34659 cm⁻¹→IP) were computed by a spectral simulation approach. The effect of laser bandwidth and Doppler width on the isotope ratio enhancement factor and isotope selectivity of ²³⁵U has been studied. Photoionization scheme V gives the highest isotope ratio enhancement factor. The main factors that affect the separation possibility are the isotope shift and the relative intensity of the transitions between hyperfine levels. The isotope ratio enhancement factor decreases exponentially with increasing Doppler width and laser bandwidth, where the effect of the Doppler width is much greater than that of the laser bandwidth. (author)

  16. Simulation of Mixed-Phase Convective Clouds: A Comparison of Spectral and Parameterized Microphysics (United States)

    Seifert, A.; Khain, A.; Pokrovsky, A.


    The simulation of clouds and precipitation is one of the most complex problems in atmospheric modeling. The microphysics of clouds has to deal with a large variety of hydrometeor types and a multitude of complicated physical processes like nucleation, condensation, freezing, melting, collection and breakup of particles. Due to the lack of reliable in-situ observations, many of these processes are still not well understood. Nevertheless, a cloud resolving model (CRM) has to include these processes in some way. All CRMs can be separated into two groups according to the microphysical representation used. Cloud models of the first kind utilize the so-called bulk parameterization of cloud microphysics. This concept was introduced by Kessler (1969) and has been improved and extended in the field of mesoscale modeling. State-of-the-art bulk schemes include several particle types like cloud droplets, raindrops, ice crystals, snow and graupel, which are represented by mass contents and, for some of them, also by number concentrations. Within a bulk microphysical model all relevant processes have to be parameterized in terms of these model variables. CRMs of the second kind are based on the spectral formulation of cloud microphysics. For each particle type taken into account, the size distribution function is represented by a number of discrete size bins with corresponding budget equations. To achieve satisfactory numerical results, at least 30 bins are necessary for each particle type. This approach has the clear advantage of being a more general representation of the relevant physical processes and the different physical properties of particles of different sizes. A spectral model is able to include detailed descriptions of collisional and condensational growth and activation/nucleation of particles. But this approach suffers from the large computational effort necessary, especially in three-dimensional models. We present a comparison between a cloud model with

  17. Validation and application of a high-order spectral difference method for flow-induced noise simulation (United States)

    KAUST Repository

    Parsani, Matteo


    The main goal of this paper is to develop an efficient numerical algorithm to compute the radiated far-field noise provided by an unsteady flow field from bodies in arbitrary motion. The method computes a turbulent flow field in the near field using a high-order spectral difference method coupled with a large-eddy simulation approach. The unsteady equations are solved by advancing in time using a second-order backward difference formula scheme. The nonlinear algebraic system arising from the time discretization is solved with the nonlinear lower-upper symmetric Gauss-Seidel algorithm. In the second step, the method calculates the far-field sound pressure based on the acoustic source information provided by the first-step simulation. The method is based on the Ffowcs Williams-Hawkings approach, which provides noise contributions for monopole, dipole and quadrupole acoustic sources. This paper focuses on the validation and assessment of this hybrid approach using different test cases. The test cases used are: a laminar flow over a two-dimensional (2D) open cavity at Re = 1.5 × 10³ and M = 0.15, and a laminar flow past a 2D square cylinder at Re = 200 and M = 0.5. In order to show the application of the numerical method in industrial cases and to assess its capability for sound field simulation, a three-dimensional turbulent flow in a muffler at Re = 4.665 × 10⁴ and M = 0.05 has been chosen as a third test case. The flow results show good agreement with numerical and experimental reference solutions. Comparison of the computed noise results with those of reference solutions also shows that the numerical approach predicts noise accurately. © 2011 IMACS.

  18. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    International Nuclear Information System (INIS)

    Tang, Hong; Lin, Jian-Zhong


    An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and the ADA theory, and this method can provide a more general expression for calculating the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction with varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectral extinction with more significant features can be selected as the input data, and that with fewer features is removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve the spheroid particle size distributions from spectral extinction data. -- Highlights: ► Improved ADA is presented for calculating the extinction efficiency of spheroids. ► Selection principle about spectral extinction data is developed based on PCA. ► Improved Tikhonov iteration method is proposed to retrieve the spheroid PSD.
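The PCA-based selection step can be sketched on synthetic data: differentiate the spectra, run an (uncentered) SVD, read the contribution rates off the singular values, and keep the spectra with the largest leading-component scores. Everything below is illustrative; the actual paper applies this to simulated spheroid extinction spectra and follows the selection with a Tikhonov inversion.

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(0.4, 0.8, 100)                  # wavelength grid (um)
# Synthetic extinction spectra: 20 carry a clear spectral feature,
# 20 are nearly featureless -- illustrative data, not real measurements.
shape = np.exp(-((wl - 0.55) / 0.05) ** 2)
featured = 1.0 + rng.uniform(0.5, 1.5, (20, 1)) * shape[None, :]
flat = 1.0 + 0.001 * rng.standard_normal((20, 100))
spectra = np.vstack([featured, flat])

d1 = np.gradient(spectra, wl, axis=1)            # first-derivative spectra
# Uncentered PCA via SVD: squared singular values give each component's
# contribution rate; leading-component scores rank the spectra.
U, S, Vt = np.linalg.svd(d1, full_matrices=False)
contrib = S ** 2 / np.sum(S ** 2)
scores = np.abs(U[:, 0]) * S[0]
selected = np.argsort(scores)[::-1][:20]         # the most informative spectra
```

With this construction the leading component carries essentially all the variance, and the top-ranked spectra are exactly the featured ones; the featureless spectra would be dropped from the inversion data.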

  19. Spectral Element Method for the Simulation of Unsteady Compressible Flows (United States)

    Diosady, Laslo Tibor; Murman, Scott M.


    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of 2 eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.

  20. Numerical Simulations of Kinetic Alfvén Waves to Study Spectral ...

    Indian Academy of Sciences (India)

    Numerical Simulations of Kinetic Alfvén Waves to Study Spectral Index in Solar Wind Turbulence and Particle Heating. R. P. Sharma & H. D. Singh, Center for Energy Studies, Indian Institute of Technology, Delhi 110 016, India. Abstract: We present numerical simulations of the ...

  1. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent


    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  2. Modeling and Halftoning for Multichannel Printers: A Spectral Approach


    Slavuj, Radovan


    Printing has been the major communication medium for many centuries. In the last twenty years, multichannel printing has brought new opportunities and challenges. Besides the extended colour gamut of the multichannel printer, the opportunity was presented to use a multichannel printer for ‘spectral printing’. The aim of spectral printing is typically the same as for colour printing; that is, to match the input signal with printer-specific ink combinations. In order to control printers so ...

  3. Spectral unmixing of urban land cover using a generic library approach (United States)

    Degerickx, Jeroen; Iordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben


    Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take the high spatial heterogeneity into account. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-)automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
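A minimal forward-selection sketch, loosely in the spirit of Iterative Endmember Selection, picks library spectra one at a time so as to reduce the least-squares reconstruction error of the image pixels. The library and abundances below are random synthetic data (not physical reflectances), and the real algorithms add constraints (non-negativity, sum-to-one) omitted here.

```python
import numpy as np

def greedy_em_selection(library, pixels, k=3):
    """Greedy library pruning sketch: at each step, add the library column
    that most reduces the unconstrained least-squares reconstruction error
    of the pixel matrix."""
    chosen, remaining = [], list(range(library.shape[1]))
    for _ in range(k):
        best, best_err = None, np.inf
        for j in remaining:
            E = library[:, chosen + [j]]
            A, *_ = np.linalg.lstsq(E, pixels, rcond=None)  # abundances
            err = np.linalg.norm(pixels - E @ A)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(4)
n_bands, n_lib = 50, 10
library = rng.standard_normal((n_bands, n_lib))    # synthetic spectral library
true = [0, 4, 7]                                   # endmembers actually in the scene
abund = rng.dirichlet(np.ones(3), size=200).T      # fractional abundances, (3, 200)
pixels = library[:, true] @ abund + 0.001 * rng.standard_normal((n_bands, 200))
chosen = greedy_em_selection(library, pixels, k=3) # recovers the true support
```

The pruned library (the `chosen` columns) would then feed a constrained unmixing step to produce the subpixel land cover fractions.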

  4. Collisionless spectral-kinetic Simulation of the Multipole Resonance Probe (United States)

    Dobrygin, Wladislaw; Szeremley, Daniel; Schilling, Christian; Oberrath, Jens; Eremin, Denis; Mussenbrock, Thomas; Brinkmann, Ralf Peter


    Plasma resonance spectroscopy is a well established plasma diagnostic method realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized, geometrically simplified version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in the electrostatic approximation. In the simulation the potential is separated into an inner and a vacuum potential. The inner potential is driven by the charged particles and is calculated by a specialized Poisson solver. The vacuum potential fulfills Laplace's equation and carries the applied voltage of the probe as a boundary condition. Both potentials are expanded in spherical harmonics. For a practical particle pusher implementation, the expansion must be appropriately truncated. Compared to a PIC simulation, no grid is needed to calculate the force on the particles. The purpose of this work is a collisionless kinetic simulation that can be used to investigate kinetic effects on the resonance behavior of the MRP. [1] M. Lapke et al., Appl. Phys. Lett. 93, 051502 (2008).

  5. Accurate, practical simulation of satellite infrared radiometer spectral data

    International Nuclear Information System (INIS)

    Sullivan, T.J.


    This study's purpose is to determine whether a relatively simple random band model formulation of atmospheric radiation transfer in the infrared region can provide valid simulations of narrow-interval satellite-borne infrared sounder system data. Detailed ozonesondes provide the pertinent atmospheric information, and sets of calibrated satellite measurements provide the validation. High-resolution line-by-line model calculations are included to complete the evaluation.

  6. New approach to magnetohydrodynamics spectral theory of stationary plasma flows

    NARCIS (Netherlands)

    Goedbloed, J. P.


    While the basic equations of MHD spectral theory date back to 1958 for static plasmas (Bernstein et al 1958 Proc. R. Soc. A 244 17) and to 1960 for stationary plasma flows (Frieman and Rotenberg 1960 Rev. Mod. Phys. 32 898), progress on the latter subject has been slow since it suffers from lack of

  7. A brute-force spectral approach for wave estimation using measured vessel motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.


    The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct, brute-force approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing to applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave...

  8. Solution of electromagnetic scattering and radiation problems using a spectral domain approach - A review (United States)

    Mittra, R.; Ko, W. L.; Rahmat-Samii, Y.


    This paper presents a brief review of some recent developments in the use of the spectral-domain approach for deriving high-frequency solutions to electromagnetic scattering and radiation problems. The spectral approach is not only useful for interpreting the well-known Keller formulas based on the geometrical theory of diffraction (GTD); it can also be employed to verify the accuracy of GTD and other asymptotic solutions and to systematically improve the results when such improvements are needed. The problem of plane wave diffraction by a finite screen or a strip is presented as an example of the application of the spectral-domain approach.

  9. Color film spectral properties test experiment for target simulation (United States)

    Liu, Xinyue; Ming, Xing; Fan, Da; Guo, Wenji


    In hardware-in-the-loop testing of an aviation spectral camera, the liquid crystal light valve and digital micro-mirror device cannot simulate the spectral characteristics of a landmark. A test system frame based on color film is provided for testing the spectral camera, and the spectral characteristics of the color film are tested in this paper. The results of the experiment show that differences exist between the landmark and film spectral curves. However, the peak of the spectral curve changes according to the color, and the curve is similar to the standard color traps. So, if the error between the landmark and the film is calibrated and compensated, the film can be utilized in hardware-in-the-loop tests of the aviation spectral camera.

  10. A high-order 3D spectral difference solver for simulating flows about rotating geometries (United States)

    Zhang, Bin; Liang, Chunlei


    Fluid flows around rotating geometries are ubiquitous. For example, a spinning ping pong ball can quickly change its trajectory in an air flow; a marine propeller can provide an enormous amount of thrust to a ship. It has been a long-standing challenge to simulate these flows accurately. In this work, we present a high-order and efficient 3D flow solver based on the unstructured spectral difference (SD) method and a novel sliding-mesh method. In the SD method, solutions and fluxes are reconstructed using tensor products of 1D polynomials and the equations are solved in differential form, which leads to high-order accuracy and high efficiency. In the sliding-mesh method, a computational domain is decomposed into non-overlapping subdomains. Each subdomain can enclose a geometry and can rotate relative to its neighbor, resulting in nonconforming sliding interfaces. A curved dynamic mortar approach is designed for communication on these interfaces. In this approach, solutions and fluxes are projected from cell faces to mortars to compute common values, which are then projected back to ensure continuity and conservation. Through theoretical analysis and numerical tests, it is shown that this solver is conservative, free-stream preservative, and high-order accurate in both space and time.

  11. Effective approach to spectroscopy and spectral analysis techniques using Matlab (United States)

    Li, Xiang; Lv, Yong


    With the development of electronic information, computers and networks, modern education technology has entered a new era, which has had a great impact on the teaching process. Spectroscopy and spectral analysis is an elective course for Optoelectronic Information Science and Engineering. The teaching objective of this course is to master the basic concepts and principles of spectroscopy and the basic technical means of spectral analysis and testing, and to let students learn to use the principles and technology of spectroscopy to study the structure and state of materials, along with the development of the technology. MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language developed by MathWorks; MATLAB allows matrix manipulations and the plotting of functions and data. Based on teaching practice, this paper summarizes the application of Matlab to the teaching of spectroscopy, which is suitable for most current multimedia-assisted teaching.

  12. Regional Spectral Model simulations of the summertime regional climate over Taiwan and adjacent areas (United States)

    Ching-Teng Lee; Ming-Chin Wu; Shyh-Chin Chen


    The National Centers for Environmental Prediction (NCEP) regional spectral model (RSM) version 97 was used to investigate the regional summertime climate over Taiwan and adjacent areas for June-July-August of 1990 through 2000. The simulated sea-level-pressure and wind fields of RSM1 with 50-km grid spacing are similar to the reanalysis, but the strength of the...

  13. Impacts of spectral nudging on the simulation of present-day rainfall patterns over southern Africa

    CSIR Research Space (South Africa)

    Muthige, Mavhungu S


    This study examines the impact of spectral nudging on the simulation of rainfall patterns over southern Africa. We use the Conformal-Cubic Atmospheric Model (CCAM) as RCM to downscale ERA-Interim reanalysis data to a horizontal resolution of 50 km over the globe. A scale-selective filter (spectral nudging...

  14. Spectral-spatial classification of hyperspectral data with mutual information based segmented stacked autoencoder approach (United States)

    Paul, Subir; Nagesh Kumar, D.


    Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which enables feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of a linear, parametric dependency measure, so that both linear and nonlinear inter-band dependencies are taken into account when segmenting the HS bands. Morphological profiles are then created from the segmented spectral features to assimilate the spatial information in the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with a Gaussian kernel and Random Forest (RF), are used for classification of the three most widely used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
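The MI-driven band grouping underlying such segmentation can be sketched with a simple histogram estimator of mutual information between adjacent bands. The merge rule below (start a new segment when MI with the previous band drops under a threshold) is a simplification of the paper's method, and the bin count and threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(X;Y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X (column vector)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y (row vector)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def segment_bands(cube, threshold=0.3):
    """Group adjacent bands: open a new segment whenever the MI between a
    band and its predecessor falls below the threshold (hypothetical rule)."""
    n_pixels, n_bands = cube.shape
    boundaries = [0]
    for b in range(1, n_bands):
        if mutual_information(cube[:, b - 1], cube[:, b]) < threshold:
            boundaries.append(b)
    return boundaries

# Synthetic "cube": two strongly dependent bands followed by an independent one.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
cube = np.column_stack([base,
                        base + 0.1 * rng.normal(size=1000),
                        rng.normal(size=1000)])
print(segment_bands(cube))   # a segment boundary should appear before band 2
```

Nonlinear dependencies (e.g. y = x**2) are also captured by this estimator, which is the stated motivation for using MI rather than linear correlation.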

  15. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction (United States)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang


    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is applied after wavelet filtering. Instead of resorting to a reference signal, as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting fault-related impulses.
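The narrow-band demodulation step can be illustrated with a minimal envelope-spectrum sketch. Note the assumptions: an ideal frequency-domain band-pass stands in for the paper's adaptive Morlet wavelet filter, no spectral subtraction is performed, and the signal, fault rate (64 Hz), and resonance (1 kHz) are synthetic:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the same construction scipy.signal.hilbert uses)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def envelope_spectrum(x, fs, f_lo, f_hi):
    """Ideal band-pass around the resonance, then the spectrum of the
    Hilbert envelope, whose peaks reveal the fault characteristic frequency."""
    n = len(x)
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    X = np.fft.fft(x)
    X[(np.abs(freqs) < f_lo) | (np.abs(freqs) > f_hi)] = 0.0
    band = np.fft.ifft(X).real
    env = np.abs(analytic_signal(band))
    env -= env.mean()
    return np.fft.rfftfreq(n, 1.0 / fs), np.abs(np.fft.rfft(env))

# Synthetic bearing signal: 64 Hz fault impulses exciting a 1 kHz resonance.
fs, n = 8192, 32768
t = np.arange(n) / fs
impulses = np.zeros(n)
impulses[::128] = 1.0                                   # one impact every 1/64 s
ring = np.exp(-400.0 * t[:256]) * np.sin(2 * np.pi * 1000.0 * t[:256])
x = np.convolve(impulses, ring)[:n]
x += 0.1 * np.random.default_rng(1).normal(size=n)      # broadband noise

f, spec = envelope_spectrum(x, fs, 800.0, 1200.0)
peak_freq = f[np.argmax(spec[1:]) + 1]                  # expected near 64 Hz
print(peak_freq)
```

The fault frequency is invisible in the raw spectrum (it sits in the sidebands of the resonance) but dominates the envelope spectrum, which is the point of the demodulation approach.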

  16. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    International Nuclear Information System (INIS)

    Zhang, Yan; Tang, Baoping; Chen, Rengxiang; Liu, Ziran


    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is applied after wavelet filtering. Instead of resorting to a reference signal, as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting fault-related impulses.

  17. Construction of Spectral Discoloration Model for Red Lead Pigment by Aging Test and Simulating Degradation Experiment

    Directory of Open Access Journals (Sweden)

    Jinxing Liang


    Full Text Available The construction of a spectral discoloration model, based on an aging test and a simulated degradation experiment, is proposed to assess the aging degree of red lead pigment in ancient murals and to reproduce spectral data supporting digital restoration of the murals. The degradation process of red lead pigment under the aging-test conditions was characterized by X-ray diffraction, scanning electron microscopy, and spectrophotometry. The simulated degradation experiment was carried out by proportionally mixing red lead and lead dioxide, with reference to the results of the aging test. The experimental results indicated that pure red lead gradually turned into black lead dioxide, and that the amount of tiny particles in the aged samples increased as aging progressed. Both the chroma and lightness of the red lead pigment decreased with discoloration, while its hue remained essentially unchanged. In addition, the spectral reflectance curves of the aged samples started rising at about 550 nm, with the inflection point moving slightly from about 570 nm to 550 nm. The spectral reflectance of the samples in the long- and short-wavelength regions was well fitted by logarithmic and linear functions, respectively. The spectral discoloration model was established, and measurements of genuinely aged red lead pigment in the Dunhuang murals verified the effectiveness of the model.
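The piecewise fit described above (linear below the ~550 nm inflection, logarithmic above) can be sketched with ordinary least squares. The reflectance values, the 550 nm split, and the log offset of 540 nm below are illustrative assumptions, not the paper's measured data or fitted coefficients:

```python
import numpy as np

# Hypothetical reflectance curve for an aged red lead sample:
# linear below 550 nm, logarithmic above (shapes per the abstract).
wl = np.arange(400.0, 701.0, 10.0)
short = wl < 550.0
refl = np.empty_like(wl)
refl[short] = 0.08 + 1e-4 * (wl[short] - 400.0)
refl[~short] = 0.08 + 0.1 * np.log((wl[~short] - 540.0) / 10.0)

# Linear fit R = a*wl + b in the short-wavelength region
a, b = np.polyfit(wl[short], refl[short], 1)

# Logarithmic fit R = c*ln(wl - 540) + d in the long-wavelength region
X = np.column_stack([np.log(wl[~short] - 540.0),
                     np.ones(np.count_nonzero(~short))])
(c, d), *_ = np.linalg.lstsq(X, refl[~short], rcond=None)

pred = np.empty_like(wl)
pred[short] = a * wl[short] + b
pred[~short] = c * np.log(wl[~short] - 540.0) + d
rmse = float(np.sqrt(np.mean((pred - refl) ** 2)))
print(rmse)   # both segments are recovered essentially exactly here
```

With real measured spectra the two fitted segments would be stitched at the inflection wavelength to form the discoloration model.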

  18. Spectrally-balanced chromatic approach-lighting system (United States)

    Chase, W. D.


    Approach lighting system employing combinations of red and blue lights reduces the problem of color-based optical illusions. System exploits the inherent chromatic aberration of the eye to create a three-dimensional effect, giving the pilot visual cues of position.

  19. A domain decomposition method for pseudo-spectral electromagnetic simulations of plasmas

    International Nuclear Information System (INIS)

    Vay, Jean-Luc; Haber, Irving; Godfrey, Brendan B.


    Pseudo-spectral electromagnetic solvers (i.e. solvers representing the fields in Fourier space) offer extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates the solution analytically over a finite time step, under the usual assumption that the source is constant over that time step. Yet pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization, owing to the global communications associated with FFTs over the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses and on Particle-In-Cell simulations of wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell's equations, and the finite speed of light to limit the communication of data to guard regions between neighboring computational subdomains. Although this introduces a small approximation, test results show that no significant error is made on the test cases presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
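The "analytic integration over a time step" idea behind Haber-type solvers is easiest to see on the 1-D scalar wave equation rather than full Maxwell PIC: each Fourier mode is a harmonic oscillator that can be rotated exactly through the step. The sketch below is that reduced illustration, not the paper's solver:

```python
import numpy as np

def spectral_wave_step(u, v, dt, c, L):
    """Advance u_tt = c^2 u_xx exactly over dt in Fourier space.
    Each mode with frequency w = c|k| is a harmonic oscillator:
        u^ -> u^ cos(w dt) + (v^/w) sin(w dt)
        v^ -> -w u^ sin(w dt) + v^ cos(w dt)
    (the k = 0 mode advances ballistically: u^ += v^ dt)."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    w = c * np.abs(k)
    U, V = np.fft.fft(u), np.fft.fft(v)
    cos, sin = np.cos(w * dt), np.sin(w * dt)
    safe_w = np.where(w > 0, w, 1.0)
    sinc = np.where(w > 0, sin / safe_w, dt)          # sin(w dt)/w, dt at w=0
    Un = U * cos + V * sinc
    Vn = -w * U * sin + V * cos
    return np.fft.ifft(Un).real, np.fft.ifft(Vn).real

# Right-moving pulse u(x,t) = f(x - c t): after one full period it must
# return to the initial condition, independent of the step size.
L, n, c = 2 * np.pi, 256, 1.0
x = np.linspace(0, L, n, endpoint=False)
u0 = np.exp(np.cos(x))                       # smooth periodic pulse
v0 = c * np.sin(x) * np.exp(np.cos(x))       # -c * du0/dx  (right-moving)
u, v = u0.copy(), v0.copy()
for _ in range(8):
    u, v = spectral_wave_step(u, v, L / (8 * c), c, L)
err = float(np.max(np.abs(u - u0)))
print(err)   # machine-precision error: the integration is exact per mode
```

In a finite-difference scheme the same test would show dispersive phase errors; here the only error is floating-point round-off, which is the precision advantage the abstract refers to.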

  20. A sparse-mode spectral method for the simulation of turbulent flows

    International Nuclear Information System (INIS)

    Meneguzzi, M.; Politano, H.; Pouquet, A.; Zolver, M.


    We propose a new algorithm belonging to the family of sparse-mode spectral methods for simulating turbulent flows. In this method the number of retained Fourier modes increases with the wavenumber k more slowly than k^(D-1) in dimension D, while retaining the advantage of the fast Fourier transform. Examples of applications of the algorithm are given for the one-dimensional Burgers equation and two-dimensional incompressible MHD flows.
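For reference, a dense (all-modes) Fourier pseudo-spectral solver for the 1-D viscous Burgers equation, the first test problem mentioned, looks as follows; a sparse-mode variant would additionally restrict which wavenumbers are retained at each octave, which this sketch does not do. Parameters are illustrative:

```python
import numpy as np

def burgers_rhs(u, k, nu):
    """du/dt = -u u_x + nu u_xx, evaluated pseudo-spectrally:
    derivatives in Fourier space, the nonlinear product on the grid."""
    U = np.fft.fft(u)
    ux = np.fft.ifft(1j * k * U).real
    uxx = np.fft.ifft(-(k ** 2) * U).real
    return -u * ux + nu * uxx

n, L, nu, dt = 128, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
u = np.sin(x)                       # initial condition

for _ in range(1000):               # classical RK4 to t = 1
    k1 = burgers_rhs(u, k, nu)
    k2 = burgers_rhs(u + 0.5 * dt * k1, k, nu)
    k3 = burgers_rhs(u + 0.5 * dt * k2, k, nu)
    k4 = burgers_rhs(u + dt * k3, k, nu)
    u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print(float(np.max(np.abs(u))))     # amplitude decays below the initial 1.0
```

The sine wave steepens toward a viscous shock while its amplitude decays, the standard qualitative behaviour against which spectral Burgers solvers are checked.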

  1. Suppressing sampling noise in linear and two-dimensional spectral simulations (United States)

    Kruiger, Johannes F.; van der Vegte, Cornelis P.; Jansen, Thomas L. C.


    We examine the problem of sampling noise encountered in time-domain simulations of linear and two-dimensional spectroscopies. A new adaptive apodization scheme based on physical arguments is devised for suppressing the noise, allowing the number of disorder realisations used to be reduced while introducing only minimal spectral aberrations, and thus allowing a potential speed-up of these types of simulations. First, the method is demonstrated on an artificial dimer system, where its effect on slope analysis, typically used to study spectral dynamics, is analysed. It is furthermore tested on simulated two-dimensional infrared spectra in the amide I region of the protein lysozyme. The cross-polarisation component, which is particularly sensitive to sampling noise because it relies on cancellation of the dominant diagonal spectral contributions, is investigated. In all these cases, the adaptive apodization scheme is found to give more accurate results than the commonly used lifetime apodization scheme and, in most cases, better than the Gaussian apodization scheme.
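The role of apodization here can be sketched in a few lines: late-time points of a disorder-averaged response function carry mostly sampling noise, so windowing the response before the Fourier transform suppresses that noise in the spectrum. The sketch below uses the fixed lifetime and Gaussian windows mentioned in the abstract, not the adaptive scheme, and all signal parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 512, 0.01
t = np.arange(n) * dt

# Stand-in for a disorder-averaged linear response function: a damped
# oscillation plus residual sampling noise from too few realisations.
clean = np.exp(-t / 1.0) * np.cos(2 * np.pi * 10.0 * t)
noisy = clean + 0.05 * rng.normal(size=n)

def spectrum(resp, window):
    """Apodize the time-domain response, then take the magnitude spectrum."""
    return np.abs(np.fft.rfft(resp * window))

windows = {
    "none": np.ones(n),
    "lifetime": np.exp(-t / 1.0),               # lifetime apodization
    "gaussian": np.exp(-0.5 * (t / 1.0) ** 2),  # Gaussian apodization
}

# Apodization downweights the noise-dominated tail, so the windowed noisy
# spectrum stays closer to the windowed noise-free one.
errs = {name: float(np.linalg.norm(spectrum(noisy, w) - spectrum(clean, w)))
        for name, w in windows.items()}
print(errs)
```

The trade-off the paper addresses is visible in the window choice: stronger apodization suppresses more noise but broadens the spectral lines, which is what an adaptive, data-driven window tries to balance.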

  2. Spectral Bio-indicator Simulations for Tracking Photosynthetic Activities in a Corn Field (United States)

    Cheng, Yen-Ben; Middleton, Elizabeth M.; Huemmrich, K. Fred; Zhang, Qingyuan; Corp, Lawrence; Campbell, Petya; Kustas, William


    Accurate assessment of vegetation canopy optical properties plays a critical role in monitoring natural and managed ecosystems under environmental changes. In this context, radiative transfer (RT) models simulating vegetation canopy reflectance have been demonstrated to be a powerful tool for understanding and estimating spectral bio-indicators. In this study, two narrow-band spectroradiometers were used to acquire observations over corn canopies for two summers. These in situ spectral data were then used to validate a two-layer Markov chain-based canopy reflectance model for simulating the Photochemical Reflectance Index (PRI), which has been widely used in recent vegetation photosynthetic light use efficiency (LUE) studies. The in situ PRI derived from narrow-band hyperspectral reflectance exhibited clear responses to: 1) the viewing geometry, which affects the observed light environment; and 2) seasonal variation corresponding to the growth stage. The RT model (ACRM) successfully simulated the responses to the variable viewing geometry. The best simulations were obtained when the model was run in two-layer mode with sunlit leaves as the upper layer and shaded leaves as the lower layer. Simulated PRI values correlated much better with in situ observations when the cornfield was dominated by green foliage during the early growth, vegetative, and reproductive stages (r = 0.78 to 0.86) than in the later senescent stage (r = 0.65). Further sensitivity analyses were conducted to show the important influences of leaf area index (LAI) and the sunlit/shaded ratio on PRI observations.
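The PRI itself is computed from two narrow reflectance bands, conventionally PRI = (R531 - R570) / (R531 + R570). A minimal implementation, using hypothetical reflectance values rather than the study's measurements:

```python
import numpy as np

def pri(wavelengths, reflectance):
    """Photochemical Reflectance Index, PRI = (R531 - R570) / (R531 + R570),
    using the reflectance samples nearest 531 nm and 570 nm."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    r531 = reflectance[np.argmin(np.abs(wavelengths - 531.0))]
    r570 = reflectance[np.argmin(np.abs(wavelengths - 570.0))]
    return (r531 - r570) / (r531 + r570)

# Hypothetical green-canopy reflectance samples (illustrative only):
wl = np.array([500.0, 531.0, 550.0, 570.0, 600.0])
refl = np.array([0.04, 0.05, 0.09, 0.06, 0.05])
print(round(pri(wl, refl), 4))   # (0.05 - 0.06) / (0.05 + 0.06) = -0.0909
```

Because both bands sit close together in the green region, PRI is sensitive to xanthophyll-cycle pigment changes while being relatively insensitive to broad-band brightness variations, which is why it tracks light use efficiency.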

  3. Multi-tissue partial volume quantification in multi-contrast MRI using an optimised spectral unmixing approach. (United States)

    Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme


    Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The contribution of this paper is twofold. It first proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which in the context of multi-contrast MRI data acquisition makes it possible to set the imaging sequence parameters appropriately. Second, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. The resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis, on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
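A penalised least-squares unmixing of this kind can be sketched in closed form for a 1-D line of pixels: per-pixel data fidelity against a tissue-signature matrix plus a quadratic smoothness penalty on neighbouring proportions. The signatures, noise level, and penalty weight below are illustrative assumptions, and the paper's additional constraints (e.g. non-negativity) are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_tis = 20, 2

# Hypothetical tissue signatures: response of each tissue in 3 MRI contrasts.
M = np.array([[1.0, 0.2],
              [0.4, 0.9],
              [0.7, 0.5]])

# Ground-truth proportions vary smoothly along the line of pixels.
p_true = np.column_stack([np.linspace(0.1, 0.9, n_pix),
                          np.linspace(0.9, 0.1, n_pix)])
y = p_true @ M.T + 0.02 * rng.normal(size=(n_pix, M.shape[0]))

# Penalised least squares:  min  sum_i ||M p_i - y_i||^2
#                               + lam * sum_i ||p_i - p_{i+1}||^2
lam = 0.5
D = (np.eye(n_pix, k=1) - np.eye(n_pix))[:-1]          # first differences
H = (np.kron(np.eye(n_pix), M.T @ M)
     + lam * np.kron(D.T @ D, np.eye(n_tis)))          # quadratic form
b = (y @ M).reshape(-1)                                # stacked M^T y_i
p_hat = np.linalg.solve(H, b).reshape(n_pix, n_tis)

rmse = float(np.sqrt(np.mean((p_hat - p_true) ** 2)))
print(rmse)   # small: the smooth proportions are recovered despite the noise
```

The spatial penalty couples neighbouring pixels, so the whole line is solved as one (sparse, here dense for brevity) linear system rather than pixel by pixel.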

  4. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Shirley, Rachel Elizabeth [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States)


    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  5. A spatial-spectral approach for deriving high signal quality eigenvectors for remote sensing image transformations

    DEFF Research Database (Denmark)

    Rogge, Derek; Bachmann, Martin; Rivard, Benoit


    -line surveys, or temporal data sets as computational burden becomes significant. In this paper we present a spatial-spectral approach to deriving high signal quality eigenvectors for image transformations which possess an inherent ability to reduce the effects of noise. The approach applies a spatial...... and spectral subsampling to the data, which is accomplished by deriving a limited set of eigenvectors for spatially contiguous subsets. These subset eigenvectors are compiled together to form a new noise-reduced data set, which is subsequently used to derive a set of global orthogonal eigenvectors. Data from...

  6. Simulating charge transport to understand the spectral response of Swept Charge Devices (United States)

    Athiray, P. S.; Sreekumar, P.; Narendranath, S.; Gow, J. P. D.


    Context. Swept Charge Devices (SCDs) are novel X-ray detectors optimized for improved spectral performance without any demand for active cooling. The Chandrayaan-1 X-ray Spectrometer (C1XS) experiment onboard the Chandrayaan-1 spacecraft used an array of SCDs to map the global surface elemental abundances on the Moon using the X-ray fluorescence (XRF) technique. The successful demonstration of SCDs in C1XS spurred an enhanced version of the spectrometer on Chandrayaan-2 using next-generation SCD sensors. Aims: The objective of this paper is to demonstrate validation of a physical model developed to simulate X-ray photon interaction and charge transport in an SCD. The model helps to understand and identify the origin of the individual components that collectively contribute to the energy-dependent spectral response of the SCD. Furthermore, the model supports various calibration tasks, such as generating spectral matrices (RMFs - redistribution matrix files), estimating efficiency, optimizing event selection logic, and maximizing event recovery to improve photon-collection efficiency in SCDs. Methods: Charge generation and transport in the SCD at different layers related to channel stops, field zones, and field-free zones due to photon interaction were computed using standard drift and diffusion equations. Charge collected in the buried channel due to photon interaction in different volumes of the detector was computed by assuming a Gaussian radial profile of the charge cloud. The collected charge was processed further to simulate both the diagonal clocking read-out, a novel design exclusive to SCDs, and the event selection logic to construct the energy spectrum. Results: We compare simulation results for the SCD CCD54 with measurements obtained during the ground calibration of C1XS and clearly demonstrate that our model reproduces all the major spectral features seen in the calibration data.
We also describe our understanding of interactions at
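The Gaussian charge-cloud assumption mentioned in the Methods can be illustrated with a one-line error-function integral: the fraction of a cloud of radius sigma collected within a channel of given half-width. This 1-D marginal sketch (all dimensions illustrative) shows why interactions in field-free zones, where the cloud diffuses to larger sigma, produce partial-charge events that shape the low-energy spectral response:

```python
import math

def collected_fraction(x0, sigma, half_width):
    """Fraction of a Gaussian charge cloud centred at x0 that lands within a
    collecting channel spanning [-half_width, +half_width] (1-D marginal;
    a sketch, not the paper's full drift-diffusion transport model)."""
    a = (half_width - x0) / (math.sqrt(2.0) * sigma)
    b = (-half_width - x0) / (math.sqrt(2.0) * sigma)
    return 0.5 * (math.erf(a) - math.erf(b))

# A cloud born in the field region stays compact; one diffusing through the
# field-free zone spreads, so part of its charge leaks to neighbours.
for sigma in (1.0, 5.0, 15.0):        # cloud radius (illustrative units)
    print(sigma, round(collected_fraction(0.0, sigma, 12.5), 3))
```

The collected fraction drops as sigma grows, so the same monochromatic X-ray line acquires a low-energy shoulder from split and partial events, one of the response components the simulation separates out.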

  7. Spectral optical layer properties of cirrus from collocated airborne measurements and simulations

    Directory of Open Access Journals (Sweden)

    F. Finger


    Full Text Available Spectral upward and downward solar irradiances from vertically collocated measurements above and below a cirrus layer are used to derive cirrus optical layer properties such as spectral transmissivity, absorptivity, reflectivity, and cloud top albedo. The radiation measurements are complemented by in situ cirrus crystal size distribution measurements and radiative transfer simulations based on the microphysical data. The close collocation of the radiative and microphysical measurements, above, beneath, and inside the cirrus, is accomplished by using a research aircraft (Learjet 35A) in tandem with the towed sensor platform AIRTOSS (AIRcraft TOwed Sensor Shuttle). AIRTOSS can be released from and retracted back to the research aircraft by means of a cable up to a distance of 4 km. Data were collected during two field campaigns over the North Sea and the Baltic Sea in spring and late summer 2013. One measurement flight over the North Sea proved to be exemplary, and as such the results are used to illustrate the benefits of collocated sampling. The radiative transfer simulations were applied to quantify the impact of cloud particle properties such as crystal shape, effective radius reff, and optical thickness τ on the cirrus spectral optical layer properties. Furthermore, the radiative effects of low-level liquid water (warm) clouds, as frequently observed beneath the cirrus, are evaluated. They may change the radiative forcing of the cirrus by a factor of 2. When low-level clouds below the cirrus are not taken into account, the radiative cooling effect of the cirrus (caused by reflection of solar radiation) in the solar (shortwave) spectral range is significantly overestimated.

  8. A general spectral method for the numerical simulation of one-dimensional interacting fermions (United States)

    Clason, Christian; von Winckel, Gregory


    This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach.
    New version program summary
    Program title: assembleFermiMatrix
    Catalogue identifier: AEKO_v1_1
    Program summary URL:
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence
    No. of lines in distributed program, including test data, etc.: 332
    No. of bytes in distributed program, including test data, etc.: 5418
    Distribution format: tar.gz
    Programming language: MATLAB/GNU Octave, Python
    Computer: Any architecture supported by MATLAB, GNU Octave or Python
    Operating system: Any supported by MATLAB, GNU Octave or Python
    RAM: Depends on the data
    Classification: 4.3, 2.2
    External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+
    Catalogue identifier of previous version: AEKO_v1_0
    Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405
    Does the new version supersede the previous version?: Yes
    Nature of problem: The direct numerical

  9. Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems (United States)

    Zuchowski, Loïc; Brun, Michael; De Martin, Florent


    The coupling between an implicit finite-element (FE) code and an explicit spectral-element (SE) code has been explored for solving elastic wave propagation in soil/structure interaction problems. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.

  10. An integrated approach to fingerprint indexing using spectral clustering based on minutiae points

    CSIR Research Space (South Africa)

    Mngenge, NA


    Full Text Available and Information Conference 2015, July 28-30, 2015, London, UK. An Integrated Approach to Fingerprint Indexing Using Spectral Clustering Based on Minutiae Points. Ntethelelo A. Mngenge, Linda Mthembu, Fulufhelo V. Nelwamondo and Cynthia H. Ngejane. School...

  11. Monte Carlo Spectral Integration: a Consistent Approximation for Radiative Transfer in Large Eddy Simulations

    Directory of Open Access Journals (Sweden)

    Robert Pincus


    Full Text Available Large-eddy simulation (LES) refers to a class of calculations in which the large energy-rich eddies are simulated directly and are insensitive to errors in the modeling of sub-grid scale processes. Flows represented by LES are often driven by radiative heating and therefore require the calculation of radiative transfer along with the fluid-dynamical simulation. Current methods for detailed radiation calculations, even those using simple one-dimensional radiative transfer, are far too expensive for routine use, while popular shortcuts are either of limited applicability or run the risk of introducing errors on time and space scales that might affect the overall simulation. A new approximate method is described that relies on Monte Carlo sampling of the spectral integration in the heating rate calculation and is applicable to any problem. The error introduced when using this method is substantial for individual samples (single columns at single times) but is uncorrelated in time and space and so does not bias the statistics of scales that are well resolved by the LES. The method is evaluated through simulation of two test problems; these behave as expected. A scaling analysis shows that the errors introduced by the method diminish as flow features become well resolved. Errors introduced by the approximation increase with decreasing spatial scale, but the spurious energy introduced by the approximation is less than the energy expected in the unperturbed flow, i.e. the energy associated with the spectral cascade from the large scale, even on the grid scale.
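The Monte Carlo spectral sampling idea reduces to a standard importance-sampling estimator: per column and time step, evaluate a single randomly chosen spectral band and weight its contribution by the inverse of its selection probability. The per-band heating values below are invented for illustration; a real model would obtain them from gas optics:

```python
import numpy as np

rng = np.random.default_rng(7)
n_bands = 64

# Hypothetical per-band heating-rate contributions for one model column
# (arbitrary units).
band_heating = np.exp(-np.linspace(0.0, 3.0, n_bands))
exact = float(band_heating.sum())

# Pick a single band k with probability q_k and weight by 1/q_k.
# E[band_heating[k] / q_k] = sum_k band_heating[k], so the estimator is
# unbiased; single samples are noisy, but the errors are uncorrelated
# in space and time and average out on well-resolved scales.
q = np.full(n_bands, 1.0 / n_bands)       # simplest choice: uniform sampling
ks = rng.choice(n_bands, size=2000, p=q)
samples = band_heating[ks] / q[ks]

print(exact, float(samples.mean()))       # the mean approaches the full sum
```

Choosing q closer to the actual band contributions (importance sampling) would shrink the single-sample noise further, at the cost of needing a prior estimate of the spectrum.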

  12. Multiscale simulation approach for battery production systems

    CERN Document Server

    Schönemann, Malte


    Addressing the challenge of improving battery quality while reducing high costs and environmental impacts of the production, this book presents a multiscale simulation approach for battery production systems along with a software environment and an application procedure. Battery systems are among the most important technologies of the 21st century since they are enablers for the market success of electric vehicles and stationary energy storage solutions. However, the performance of batteries so far has limited possible applications. Addressing this challenge requires an interdisciplinary understanding of dynamic cause-effect relationships between processes, equipment, materials, and environmental conditions. The approach in this book supports the integrated evaluation of improvement measures and is usable for different planning horizons. It is applied to an exemplary battery cell production and module assembly in order to demonstrate the effectiveness and potential benefits of the simulation.

  13. Direct numerical simulation of the Rayleigh-Taylor instability with the spectral element method

    International Nuclear Information System (INIS)

    Zhang Xu; Tan Duowang


    A novel method is proposed to simulate Rayleigh-Taylor instabilities using a specially-developed unsteady three-dimensional high-order spectral element method code. The numerical model used consists of Navier-Stokes equations and a transport-diffusive equation. The code is first validated with the results of linear stability perturbation theory. Then several characteristics of the Rayleigh-Taylor instabilities are studied using this three-dimensional unsteady code, including instantaneous turbulent structures and statistical turbulent mixing heights under different initial wave numbers. These results indicate that turbulent structures of Rayleigh-Taylor instabilities are strongly dependent on the initial conditions. The results also suggest that a high-order numerical method should provide the capability of simulating small scale fluctuations of Rayleigh-Taylor instabilities of turbulent flows. (authors)

  14. Direct Numerical Simulation of the Rayleigh−Taylor Instability with the Spectral Element Method

    International Nuclear Information System (INIS)

    Xu, Zhang; Duo-Wang, Tan


    A novel method is proposed to simulate Rayleigh−Taylor instabilities using a specially-developed unsteady three-dimensional high-order spectral element method code. The numerical model used consists of Navier–Stokes equations and a transport-diffusive equation. The code is first validated with the results of linear stability perturbation theory. Then several characteristics of the Rayleigh−Taylor instabilities are studied using this three-dimensional unsteady code, including instantaneous turbulent structures and statistical turbulent mixing heights under different initial wave numbers. These results indicate that turbulent structures of Rayleigh–Taylor instabilities are strongly dependent on the initial conditions. The results also suggest that a high-order numerical method should provide the capability of simulating small scale fluctuations of Rayleigh−Taylor instabilities of turbulent flows. (fundamental areas of phenomenology (including applications))

  15. Simulation of photosynthetically active radiation distribution in algal photobioreactors using a multidimensional spectral radiation model. (United States)

    Kong, Bo; Vigil, R Dennis


    A numerical method for simulating the spectral light distribution in algal photobioreactors is developed by adapting the discrete ordinate method for solving the radiative transport equation. The technique, developed for two and three spatial dimensions, provides a detailed accounting of light absorption and scattering by algae in the culture medium. In particular, the optical properties of the algal cells and the radiative properties of the turbid culture medium were calculated using a method based on Mie theory that makes use of information concerning algal pigmentation, shape, and size distribution. The model was validated using a small cylindrical bioreactor, and subsequently simulations were carried out for an annular photobioreactor configuration. It is shown that even in this relatively simple geometry, nontrivial photon flux distributions arise that cannot be predicted by one-dimensional models. Copyright © 2014 Elsevier Ltd. All rights reserved.
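The one-dimensional baseline that such multidimensional models improve upon is a Beer-Lambert attenuation profile, which ignores angular redistribution and the scattering phase function entirely. A minimal sketch with illustrative (not measured) extinction and biomass values:

```python
import numpy as np

def beer_lambert_profile(I0, depth, ext_coeff, biomass):
    """1-D Beer-Lambert light profile I(z) = I0 * exp(-ext * X * z):
    the simple baseline that multidimensional radiative-transfer models
    refine (no angular redistribution, no scattering phase function)."""
    return I0 * np.exp(-ext_coeff * biomass * depth)

z = np.linspace(0.0, 0.05, 6)      # depth into the culture [m]
# Illustrative values: 2000 umol/m2/s incident light, extinction
# 200 m^2/kg, biomass 0.5 kg/m^3.
I = beer_lambert_profile(2000.0, z, ext_coeff=200.0, biomass=0.5)
print(I)
```

In a scattering-dominated culture the true flux at depth can exceed this exponential estimate (photons are redirected, not lost), which is one reason the abstract's multidimensional spectral model predicts distributions a 1-D law cannot.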

  16. Detailed spectral simulations in support of PBFA-Z dynamic hohlraum Z-pinch experiments

    International Nuclear Information System (INIS)

    MacFarlane, J.J.; Wang, P.; Derzon, M.S.; Haill, A.; Nash, T.J.; Peterson, D.L.


    In PBFA-Z dynamic hohlraum Z-pinch experiments, 16-18 MA of current is delivered to a load comprising a tungsten wire array surrounding a low-density cylindrical CH foam. The magnetic field accelerates the W plasma radially inward at velocities of ∼40-60 cm/μs. The W plasma impacts the foam, generating a high-T_R radiation field which diffuses into the foam. The authors are investigating several types of spectral diagnostics that can be used to characterize the time-dependent conditions in the foam. In addition, they examine the potential ramifications of axial jetting for the interpretation of axial x-ray diagnostics. In the analysis, results from 2-D radiation-magnetohydrodynamics simulations are post-processed using a hybrid spectral analysis code in which low-Z material is treated using a detailed collisional-radiative atomic model, while high-Z material is modeled using LTE UTA (unresolved transition array) opacities. They present results from recent simulations and discuss the ramifications for x-ray diagnostics.

  17. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL (United States)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.


    Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each spatial pixel gathers the full spectral reflectance information. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by a pixel-wise classifier, using a K-Nearest Neighbors (KNN) filtering process in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time: mapping them onto different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
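
The three-stage chain (PCA one-band image → pixel-wise SVM → spatial-spectral KNN filtering) can be sketched with scikit-learn on a synthetic cube; the data, the 3-class labels, and the coordinate scaling are invented for illustration, and this plain-Python version says nothing about the RVC-CAL dataflow mapping itself:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical hyperspectral cube: 60 x 60 pixels, 120 spectral bands.
rng = np.random.default_rng(0)
cube = rng.random((60, 60, 120))
labels = rng.integers(0, 3, size=(60, 60))  # per-pixel training labels

pixels = cube.reshape(-1, cube.shape[2])

# Stage 1: one-band representation via the first principal component.
one_band = PCA(n_components=1).fit_transform(pixels).reshape(60, 60)

# Stage 2: pixel-wise SVM classification of the spectra.
svm_map = SVC().fit(pixels, labels.ravel()).predict(pixels).reshape(60, 60)

# Stage 3: KNN-style spatial-spectral filtering -- each pixel is relabelled
# by majority vote among its k nearest neighbours in a joint feature space
# of (spatial coordinates, scaled one-band value).
ys, xs = np.mgrid[0:60, 0:60]
feats = np.column_stack([ys.ravel(), xs.ravel(), 50 * one_band.ravel()])
knn = KNeighborsClassifier(n_neighbors=9).fit(feats, svm_map.ravel())
refined = knn.predict(feats).reshape(60, 60)
```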

  18. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image (United States)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi


    In this research, we evaluate the visibility of age spots and freckles as the blood volume changes, based on simulated spectral reflectance distributions and actual facial color images, and compare the two sets of results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using a Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while varying the blood volume. We acquire the concentration distributions of the melanin, hemoglobin and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that the visibility decreased as the blood volume increased. However, the results for the facial color images show that a specific blood volume reduces the visibility of the actual pigmentations.

  19. A Spectral Finite Element Approach to Modeling Soft Solids Excited with High-Frequency Harmonic Loads. (United States)

    Brigham, John C; Aquino, Wilkins; Aguilo, Miguel A; Diamessis, Peter J


    An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number.
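
The non-uniform Gauss-Lobatto-Legendre (GLL) nodal points mentioned above are the element endpoints plus the roots of the derivative of the Legendre polynomial P_N; a minimal NumPy routine (not the authors' code) computes them:

```python
import numpy as np
from numpy.polynomial import legendre as L

def gll_nodes(order):
    """Gauss-Lobatto-Legendre nodes on [-1, 1] for a given element order:
    the endpoints plus the roots of P_N'(x), with N = order."""
    # Coefficient vector of the Legendre polynomial P_N in the Legendre basis.
    cN = np.zeros(order + 1)
    cN[-1] = 1.0
    interior = np.sort(L.legroots(L.legder(cN)))  # roots of P_N'
    return np.concatenate(([-1.0], interior, [1.0]))

nodes = gll_nodes(6)
# The nodes cluster toward the element ends, which is what suppresses
# Runge's phenomenon for high-order Lagrangian interpolation.
```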

  1. Spectral Approach to Anderson Localization in a Disordered 2D Complex Plasma Crystal (United States)

    Kostadinova, Eva; Liaw, Constanze; Matthews, Lorin; Busse, Kyle; Hyde, Truell


    In condensed matter, a crystal without impurities acts like a perfect conductor for a travelling wave-particle. As the level of impurities reaches a critical value, the resistance in the crystal increases and the travelling wave-particle experiences a transition from an extended to a localized state, which is called Anderson localization. Due to its wide applicability, the subject of Anderson localization has grown into a rich field in both physics and mathematics. Here, we introduce the mathematics behind the spectral approach to localization in infinite disordered systems and provide a physical interpretation in the context of both quantum mechanics and classical physics. We argue that the spectral analysis is an important contribution to localization theory since it avoids issues related to the use of boundary conditions, scaling, and perturbation. To test its accuracy and applicability, we apply the spectral approach to the case of a 2D hexagonal complex plasma crystal used as a macroscopic analog for a graphene-like medium. Complex plasma crystals exhibit characteristic distance and time scales, which are easily observable by video microscopy. As such, these strongly coupled many-particle systems are ideal for the study of localization phenomena. The goal of this research is both to expand the spectral method into the classical regime and to show the potential of complex plasma as a macroscopic tool for localization experiments. NSF / DOE funding is gratefully acknowledged - PHY1414523 & PHY1262031.

  2. Leaf nitrogen spectral reflectance model of winter wheat (Triticum aestivum) based on PROSPECT: simulation and inversion (United States)

    Yang, Guijun; Zhao, Chunjiang; Pu, Ruiliang; Feng, Haikuan; Li, Zhenhai; Li, Heli; Sun, Chenhong


    Through its association with proteins and plant pigments, leaf nitrogen (N) plays an important regulatory role in photosynthesis, leaf respiration, and net primary production. However, the traditional methods of measuring leaf N are rooted in sample-based laboratory spectroscopy; deriving leaf N from nondestructive, field-measured leaf spectra remains a major challenge. In this study, the original PROSPECT model was extended by replacing its chlorophyll absorption coefficient with an equivalent N absorption coefficient, yielding a nitrogen-based PROSPECT model (N-PROSPECT). N-PROSPECT was evaluated by comparing the model-simulated reflectance values with measured leaf reflectance values. The validation results show a correlation coefficient (R) of 0.98 over the 400 to 2500 nm wavelength range. Finally, N-PROSPECT was used to simulate leaf reflectance for different combinations of input parameters, and partial least squares regression (PLSR) was used to establish the relationship between the N-PROSPECT simulated reflectance and the corresponding leaf nitrogen density (LND). The inverse of the PLSR-based N-PROSPECT model was used to retrieve LND from the measured reflectance with relatively high accuracy (R2=0.77, RMSE=22.15 μg cm-2). This result demonstrates that the N-PROSPECT model established in this study can accurately simulate nitrogen spectral contributions and retrieve LND.

  3. A variational approach to nucleation simulation. (United States)

    Piaggi, Pablo M; Valsson, Omar; Parrinello, Michele


    We study by computer simulation the nucleation of a supersaturated Lennard-Jones vapor into the liquid phase. The large free energy barriers to the transition make the time scale of this process impossible to study by ordinary molecular dynamics simulations. Therefore we use a recently developed enhanced sampling method [Valsson and Parrinello, Phys. Rev. Lett. 113, 090601 (2014)] based on the variational determination of a bias potential. We differ from previous applications of this method in that the bias is constructed on the basis of the physical model provided by the classical theory of nucleation. We examine the technical problems associated with this approach. Our results are very satisfactory and will pave the way for calculating nucleation rates in many systems.

  4. Cloud phase identification of Arctic boundary-layer clouds from airborne spectral reflection measurements: test of three approaches

    Directory of Open Access Journals (Sweden)

    A. Ehrlich


    Arctic boundary-layer clouds were investigated with remote sensing and in situ instruments during the Arctic Study of Tropospheric Aerosol, Clouds and Radiation (ASTAR) campaign in March and April 2007. The clouds formed in a cold air outbreak over the open Greenland Sea. Besides the predominant mixed-phase clouds, pure liquid water and ice clouds were observed. Utilizing measurements of solar radiation reflected by the clouds, three methods to retrieve the thermodynamic phase of the cloud are introduced and compared. Two ice indices IS and IP were obtained by analyzing the spectral pattern of the cloud top reflectance in the near-infrared (1500–1800 nm) spectral range, which is characterized by ice and water absorption. While IS analyzes the spectral slope of the reflectance in this wavelength range, IP utilizes a principal component analysis (PCA) of the spectral reflectance. A third ice index IA is based on the different side scattering of spherical liquid water particles and nonspherical ice crystals, which was recorded in simultaneous measurements of spectral cloud albedo and reflectance.

    Radiative transfer simulations show that IS, IP and IA range between 5 and 80, 0 and 8, and 1 and 1.25, respectively, with the lowest values indicating pure liquid water clouds and the highest pure ice clouds. The spectral slope ice index IS and the PCA ice index IP are found to be strongly sensitive to the effective diameter of the ice crystals present in the cloud. Therefore, the identification of mixed-phase clouds requires a priori knowledge of the ice crystal dimension. The reflectance-albedo ice index IA is mainly dominated by the uppermost cloud layer (τ<1.5). Therefore, typical boundary-layer mixed-phase clouds with a liquid cloud top layer will
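
A toy version of a spectral-slope ice index of the IS kind — the linear slope of reflectance over 1500–1800 nm, sign-flipped so that the stronger ice absorption gives the larger value — can be written in a few lines; the scaling and the synthetic spectra are invented and do not reproduce the index ranges quoted above:

```python
import numpy as np

def slope_ice_index(wavelengths_nm, reflectance):
    """Toy spectral-slope ice index: the negated, scaled linear slope of
    cloud-top reflectance over 1500-1800 nm, where ice absorbs more
    strongly than liquid water.  Illustrative only."""
    mask = (wavelengths_nm >= 1500) & (wavelengths_nm <= 1800)
    slope, _ = np.polyfit(wavelengths_nm[mask], reflectance[mask], 1)
    return -slope * 1e4  # steeper negative slope -> larger index -> more ice

wl = np.arange(1400, 1900, 10.0)
liquid = 0.5 - 1e-4 * (wl - 1500)   # gentle slope: liquid-like spectrum
ice = 0.5 - 5e-4 * (wl - 1500)      # steep slope: ice-like spectrum
```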

  5. SOA approach to battle command: simulation interoperability (United States)

    Mayott, Gregory; Self, Mid; Miller, Gordon J.; McDonnell, Joseph S.


    NVESD is developing a Sensor Data and Management Services (SDMS) Service Oriented Architecture (SOA) that provides an innovative approach to achieve seamless application functionality across simulation and battle command systems. In 2010, CERDEC will conduct an SDMS Battle Command demonstration that will highlight the SDMS SOA capability to couple simulation applications to existing Battle Command systems. The demonstration will leverage RDECOM MATREX simulation tools and TRADOC Maneuver Support Battle Laboratory Virtual Base Defense Operations Center facilities. The battle command systems are those specific to the operation of a base defense operations center in support of force protection missions. The SDMS SOA consists of four components that will be discussed. An Asset Management Service (AMS) will automatically discover the existence, state, and interface definition required to interact with a named asset (a sensor or a sensor platform, a process such as level-1 fusion, or an interface to a sensor or other network endpoint). A Streaming Video Service (SVS) will automatically discover the existence, state, and interfaces required to interact with a named video stream, and abstract the consumers of the video stream from the originating device. A Task Manager Service (TMS) will be used to automatically discover the existence of a named mission task, and will interpret, translate and transmit a mission command for the blue force unit(s) described in a mission order. JC3IEDM data objects and a software development kit (SDK) will be utilized as the basic data object definition for implemented web services.

  6. Simulation for spectral response of solar-blind AlGaN based p-i-n photodiodes (United States)

    Xue, Shiwei; Xu, Jintong; Li, Xiangyang


    In this article, we introduce how to build a physical model with reference to the device structure and parameters. Simulations of the spectral characteristics of solar-blind AlGaN-based p-i-n photodiodes were conducted using Silvaco TCAD, comprehensively accounting for device structure and parameters. In the simulations, the effects of polarization, Urbach tail, mobility, saturated velocities and lifetime in the AlGaN device were considered. In particular, we focused on how the concentration-dependent Shockley-Read-Hall (SRH) recombination model affects simulation results. Through simulation, we analyzed the effects on the spectral response caused by TAUN0 and TAUP0, and obtained the values of TAUN0 and TAUP0 that bring the simulated response into agreement with test results. After that, we changed their values and made the simulation results, especially the part below 255 nm, perform better. In conclusion, the spectral response between 200 nm and 320 nm of solar-blind AlGaN-based p-i-n photodiodes was simulated and compared with test results. We also found that TAUN0 and TAUP0 have a large impact on the spectral response of the AlGaN material.
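
The sensitivity to TAUN0/TAUP0 follows directly from the standard SRH expression, in which the lifetimes sit in the denominator; a minimal sketch with generic carrier densities (not AlGaN-specific values):

```python
import numpy as np

def srh_rate(n, p, ni, tau_n, tau_p, n1=None, p1=None):
    """Standard Shockley-Read-Hall recombination rate (trap at midgap when
    n1 = p1 = ni):  R = (n p - ni^2) / (tau_p (n + n1) + tau_n (p + p1)).
    TAUN0/TAUP0 in Silvaco TCAD play the role of tau_n/tau_p here."""
    n1 = ni if n1 is None else n1
    p1 = ni if p1 is None else p1
    return (n * p - ni**2) / (tau_p * (n + n1) + tau_n * (p + p1))

# Shorter lifetimes give faster recombination -> lower photocurrent,
# which is why the fitted TAUN0/TAUP0 values shape the spectral response.
r_fast = srh_rate(1e16, 1e10, 1e10, tau_n=1e-9, tau_p=1e-9)
r_slow = srh_rate(1e16, 1e10, 1e10, tau_n=1e-6, tau_p=1e-6)
```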

  7. A numerical spectral approach to solve the dislocation density transport equation

    International Nuclear Information System (INIS)

    Djaka, K S; Taupin, V; Berbenni, S; Fressengeas, C


    A numerical spectral approach is developed to solve, in a fast, stable and accurate fashion, the quasi-linear hyperbolic transport equation governing the spatio-temporal evolution of the dislocation density tensor in the mechanics of dislocation fields. The approach relies on the Fast Fourier Transform algorithm. Low-pass spectral filters are employed to control both the high-frequency Gibbs oscillations inherent to the Fourier method and the fast-growing numerical instabilities resulting from the hyperbolic nature of the transport equation. The numerical scheme is validated by comparison with an exact solution in the 1D case corresponding to dislocation dipole annihilation. The expansion and annihilation of dislocation loops in 2D and 3D settings are also produced and compared with finite element approximations. The spectral solutions are shown to be stable, more accurate for low Courant numbers, and much less time-consuming than the finite element technique based on an explicit Galerkin-least squares scheme. (paper)
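
The ingredients of the scheme — an FFT evaluation of the spatial derivative plus a low-pass filter against Gibbs oscillations and hyperbolic instabilities — can be sketched on the simplest hyperbolic model problem, 1D linear advection (the actual method treats the full dislocation density transport equation, and its time integration differs from the crude forward-Euler step used here):

```python
import numpy as np

N, c, dt = 256, 1.0, 1e-3
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)   # integer wavenumbers
u = np.exp(-40 * (x - np.pi) ** 2)                   # smooth initial pulse
# High-order exponential low-pass filter: ~1 at low k, ~0 near the grid cutoff.
filt = np.exp(-36 * (np.abs(k) / np.abs(k).max()) ** 16)

for _ in range(1000):                                    # integrate to t = 1
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))    # spectral derivative
    u = u - dt * c * ux                                  # forward-Euler step
    u = np.real(np.fft.ifft(filt * np.fft.fft(u)))       # filter each step

# The pulse should now sit near x = pi + c*t, essentially undistorted.
```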

  8. Validation of the SCEC broadband platform V14.3 simulation methods using pseudo spectral acceleration data (United States)

    Dreger, Douglas S.; Beroza, Gregory C.; Day, Steven M.; Goulet, Christine A.; Jordan, Thomas H.; Spudich, Paul A.; Stewart, Jonathan P.


    This paper summarizes the evaluation of ground motion simulation methods implemented on the SCEC Broadband Platform (BBP), version 14.3 (as of March 2014). A seven-member panel, the authorship of this article, was formed to evaluate those methods for the prediction of pseudo-spectral accelerations (PSAs) of ground motion. The panel’s mandate was to evaluate the methods using tools developed through the validation exercise (Goulet et al., 2014), and to define validation metrics for the assessment of the methods’ performance. This paper summarizes the evaluation process and conclusions from the panel. The five broadband, finite-source simulation methods on the BBP include two deterministic approaches herein referred to as CSM (Anderson, 2014) and UCSB (Crempien and Archuleta, 2014); a band-limited stochastic white noise method called EXSIM (Atkinson and Assatourians, 2014); and two hybrid approaches, referred to as G&P (Graves and Pitarka, 2014) and SDSU (Olsen and Takedatsu, 2014), which utilize a deterministic Green’s function approach for periods longer than 1 second and stochastic methods for periods shorter than 1 second. Two acceptance tests were defined to validate the broadband finite-source ground motion methods (Goulet et al., 2014). Part A compared observed and simulated PSAs for periods from 0.01 to 10 seconds for 12 moderate to large earthquakes located in California, Japan, and the eastern US. Part B compared the median simulated PSAs to published NGA-West1 (Abrahamson and Silva, 2008; Boore and Atkinson, 2008; Campbell and Bozorgnia, 2008; and Chiou and Youngs, 2008) ground motion prediction equations (GMPEs) for specific magnitude and distance cases using pass-fail criteria based on a defined acceptable range around the spectral shape of the GMPEs. For the initial Part A and Part B validation exercises during the summer of 2013, the software for the five methods was locked in at version 13.6 (see Maechling et al., 2014). In the

  9. Coherent Structures and Spectral Energy Transfer in Turbulent Plasma: A Space-Filter Approach (United States)

    Camporeale, E.; Sorriso-Valvo, L.; Califano, F.; Retinò, A.


    Plasma turbulence at scales of the order of the ion inertial length is mediated by several mechanisms, including linear wave damping, magnetic reconnection, the formation and dissipation of thin current sheets, and stochastic heating. It is now understood that the presence of localized coherent structures enhances the dissipation channels and the kinetic features of the plasma. However, no formal way of quantifying the relationship between scale-to-scale energy transfer and the presence of spatial structures has been presented so far. In this Letter we quantify such a relationship by analyzing the results of a two-dimensional high-resolution Hall magnetohydrodynamic simulation. In particular, we employ the technique of space filtering to derive a spectral energy flux term which defines, in any point of the computational domain, the signed flux of spectral energy across a given wave number. The characterization of coherent structures is performed by means of a traditional two-dimensional wavelet transformation. By studying the correlation between the spectral energy flux and the wavelet amplitude, we demonstrate the strong relationship between scale-to-scale transfer and coherent structures. Furthermore, by conditioning one quantity with respect to the other, we are able for the first time to quantify the inhomogeneity of the turbulence cascade induced by topological structures in the magnetic field. Taking into account the low space-filling factor of coherent structures (i.e., they cover a small portion of space), it emerges that 80% of the spectral energy transfer (both in the direct and inverse cascade directions) is localized in about 50% of space, and 50% of the energy transfer is localized in only 25% of space.

  10. Hyper-Spectral Networking Concept of Operations and Future Air Traffic Management Simulations (United States)

    Davis, Paul; Boisvert, Benjamin


    The NASA-sponsored Hyper-Spectral Communications and Networking for Air Traffic Management (ATM) (HSCNA) project is conducting research to improve the operational efficiency of the future National Airspace System (NAS) through diverse and secure multi-band, multi-mode, and millimeter-wave (mmWave) wireless links. Worldwide growth of air transportation and the coming of unmanned aircraft systems (UAS) will increase air traffic density and complexity. Safe coordination of aircraft will require more capable technologies for communications, navigation, and surveillance (CNS). The HSCNA project will provide a foundation for technology and operational concepts to accommodate a significantly greater number of networked aircraft. This paper describes two of the HSCNA project's technical challenges. The first technical challenge is to develop a multi-band networking concept of operations (ConOps) for use in multiple phases of flight and all communication link types. This ConOps will integrate the advanced technologies explored by the HSCNA project and future operational concepts into a harmonized vision of future NAS communications and networking. The second technical challenge discussed is to conduct simulations of future ATM operations using multi-band/multi-mode networking and technologies. Large-scale simulations will assess the impact, compared to today's system, of the new and integrated networks and technologies under future air traffic demand.

  11. Quantum molecular dynamics and spectral simulation of a boron impurity in solid para-hydrogen (United States)

    Krumrine, Jennifer R.; Jang, Soonmin; Alexander, Millard H.; Voth, Gregory A.


    Using path-integral molecular dynamics, we investigate the equilibrium properties of a boron impurity trapped in solid para-hydrogen. Because of its singly filled 2p orbital, the B atom interacts anisotropically with the pH2 molecules in the matrix. To assess the effect of this electronic anisotropy, we compare with similar simulations in which an orientation-averaged B-H2 potential is used. We investigate three matrices: (a) a single B atom site substituted for a pH2 molecule, (b) a similar site-substituted matrix with a nearest-neighbor vacancy, and (c) a B atom site substituted not in the bulk but near the pH2 surface. It is found that small distortions of the lattice occur to permit an energetically favorable orientation of the 2p orbital, even in the absence of a vacancy. When the B impurity is located near the surface, the spherically-averaged potential provides a noticeably different description from the case of the anisotropic potential. The 3s←2p absorption spectrum of the B chromophore is also predicted by means of a semiclassical Franck-Condon technique using path integrals to sample the quantum lattice configurations. These spectral simulations provide additional insight into the interpretation of experimental observations of trapped B in a solid pH2 matrix.

  12. Statistical learning method in regression analysis of simulated positron spectral data

    International Nuclear Information System (INIS)

    Avdic, S. Dz.


    Positron lifetime spectroscopy is a non-destructive tool for the detection of radiation-induced defects in nuclear reactor materials. This work concerns the applicability of the support vector machines (SVM) method for input data compression in the neural network analysis of positron lifetime spectra. It has been demonstrated that the SVM technique can be successfully applied to regression analysis of positron spectra. A substantial data compression, of about 50% and 8% of the whole training set with two and three spectral components respectively, has been achieved together with a high accuracy of the spectra approximation. However, some parameters in the SVM approach, such as the insensitivity zone ε and the penalty parameter C, have to be chosen carefully to obtain good performance. (author)
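
The role of the insensitivity zone ε is easy to demonstrate with scikit-learn's SVR on synthetic 1D data (a stand-in for the positron spectra): training points that fall inside the ε-tube contribute no support vectors, so widening ε compresses the retained training set.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.random((300, 1))
y = np.sin(4 * X.ravel()) + 0.05 * rng.standard_normal(300)

# Same penalty parameter C, two widths of the epsilon-insensitive zone.
tight = SVR(C=10.0, epsilon=0.01).fit(X, y)
wide = SVR(C=10.0, epsilon=0.2).fit(X, y)

compression = 1 - len(wide.support_) / len(X)
# The wide-epsilon model keeps far fewer support vectors -- the "data
# compression" the abstract refers to -- at some cost in accuracy.
```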

  13. A New High-Resolution Spectral Approach to Noninvasively Evaluate Wall Deformations in Arteries

    Directory of Open Access Journals (Sweden)

    Ivonne Bazan


    By locally measuring changes in arterial wall thickness as a function of pressure, the related Young's modulus can be evaluated. This physical magnitude has been shown to be an important predictive factor for cardiovascular diseases. For evaluating those changes, imaging segmentation or time correlations of ultrasonic echoes coming from wall interfaces are usually employed. In this paper, an alternative low-cost technique is proposed to locally evaluate variations in arterial walls, which are dynamically measured with an improved high-resolution calculation of power spectral densities in echo-traces of the wall interfaces, using parametric autoregressive processing. Certain wall deformations are finely detected by evaluating the echo overtone peaks with power spectral estimations that implement the Burg and Yule-Walker algorithms. Results of this spectral approach are compared with a classical cross-correlation operator, in a tube phantom and “in vitro” carotid tissue. A circulating loop, mimicking heart periods and blood pressure changes, is employed to dynamically inspect each sample with a broadband ultrasonic probe, acquiring multiple A-scans which are windowed to isolate echo-trace packets coming from distinct walls. Then the new technique and the cross-correlation operator are applied to evaluate changing parietal deformations from the displacements registered on the wall faces under a periodic regime.
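
A bare-bones Yule-Walker autoregressive spectral estimator — one of the two parametric estimators named above — fits in a few lines of NumPy; Burg's method would estimate the AR coefficients differently but yields a PSD of the same form. The test signal is synthetic, not an echo trace:

```python
import numpy as np

def yule_walker_psd(x, order, nfft=1024):
    """Parametric (autoregressive) PSD estimate via the Yule-Walker
    equations, a high-resolution alternative to FFT periodograms."""
    x = np.asarray(x, float) - np.mean(x)
    # Biased autocorrelation up to lag `order`.
    r = np.array([np.dot(x[:len(x)-k], x[k:]) / len(x)
                  for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])       # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:])    # driving-noise variance
    w = np.fft.rfftfreq(nfft)           # normalized frequency in [0, 0.5]
    denom = np.abs(1 - np.exp(-2j * np.pi
                   * np.outer(w, np.arange(1, order + 1))) @ a) ** 2
    return w, sigma2 / denom

# A sinusoid in noise shows up as a sharp spectral peak.
t = np.arange(512)
sig = (np.sin(2 * np.pi * 0.12 * t)
       + 0.1 * np.random.default_rng(3).standard_normal(512))
f, psd = yule_walker_psd(sig, order=8)
```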

  15. Effect of method and parameters of spectral analysis on selected indices of simulated Doppler spectra. (United States)

    Kaluzynski, K; Palko, T


    The sensitivity of Doppler spectral indices (mean frequency, maximum frequency, spectral broadening index and turbulence intensity) to the conditions of spectral analysis (estimation method, data window, smoothing window or model order) increases with decreasing signal bandwidth and growing index complexity. The bias of the spectral estimate has a more important effect on these indices than its variance. Too low a model order, in the case of the autoregressive modeling and minimum variance methods, and excessive smoothing, in the case of the FFT method, result in increased errors in the Doppler spectral indices. There is a trade-off between the errors resulting from a short data window and those due to insufficient temporal resolution.
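
For concreteness, the first three indices can be computed from a power spectrum as below; the -20 dB maximum-frequency threshold and the SBI definition (fmax - fmean)/fmax are common choices, not necessarily the ones used in the paper:

```python
import numpy as np

def doppler_indices(freqs, psd):
    """Mean frequency, maximum frequency (-20 dB-below-peak edge), and a
    spectral broadening index from a Doppler power spectrum."""
    psd = np.asarray(psd, float)
    fmean = np.sum(freqs * psd) / np.sum(psd)
    above = freqs[psd >= psd.max() * 1e-2]   # -20 dB threshold (one common choice)
    fmax = above.max()
    sbi = (fmax - fmean) / fmax
    return fmean, fmax, sbi

freqs = np.linspace(0, 5000.0, 256)                # Hz
psd = np.exp(-0.5 * ((freqs - 2000) / 300) ** 2)   # Gaussian toy spectrum
fmean, fmax, sbi = doppler_indices(freqs, psd)
```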

  16. [Study of approaches to spectral reflectance reconstruction based on digital camera]. (United States)

    Yang, Ping; Liao, Ning-fang; Song, Hong


    It is still challenging to reconstruct the spectral reflectance of a surface using digital cameras under given illumination and observation conditions. A new approach to solving this problem, based on a neural network and basis vectors, is proposed. First, the spectral reflectance of the sample surface is measured by a spectrometer and the response of a digital camera is recorded. Then the reflectance is represented as a linear combination of several basis vectors by singular value decomposition (SVD). After that, a neural network is trained so that it is able to accurately approximate the relationship between the camera responses and the coefficients of the basis vectors. In the end, the spectral reflectance can be reconstructed from the neural network and basis vectors. Compared with traditional methods, the neural network expands the space of the unknown function F(S) from linear functions to more general nonlinear functions, which gives a more accurate estimation of the coefficients αk and better reflectance reconstruction. Results show that the reflectance of the standard Munsell color patches (matte) can be reconstructed successfully with a mean RMS error of 0.0234. Compared with the linear approximation method, reconstruction of the standard Munsell color patches (matte) using this approach reduces the reconstruction error by 67%. Since the neural network can be implemented with the MATLAB neural network toolbox, this method can be easily adopted in many other cases. We therefore conclude that this approach has the advantages of higher accuracy and easy implementation and adaptation, and thus can be used in many applications.
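
The basis-vector half of the pipeline can be sketched with plain NumPy; a linear least-squares map stands in for the trained neural network, and the spectra, sensitivities, and dimensions are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical training set: 100 reflectance spectra (31 bands, 400-700 nm)
# and the corresponding 3-channel camera responses.
reflectance = np.clip(rng.random((100, 31)).cumsum(axis=1) / 31, 0, 1)
sensitivities = rng.random((31, 3))     # invented camera sensitivities
responses = reflectance @ sensitivities

# Basis vectors from SVD of the training reflectances.
U, s, Vt = np.linalg.svd(reflectance, full_matrices=False)
basis = Vt[:6]                  # first 6 basis vectors
coeffs = reflectance @ basis.T  # training-set coefficients

# The paper trains a neural network from responses to coefficients; as a
# bare-bones stand-in we fit a linear map (least squares) instead.
M, *_ = np.linalg.lstsq(responses, coeffs, rcond=None)

# Reconstruction: camera response -> coefficients -> spectrum.
recon = (responses @ M) @ basis
rms = np.sqrt(np.mean((recon - reflectance) ** 2))
```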

  17. A spectral approach for the quantitative description of cardiac collagen network from nonlinear optical imaging. (United States)

    Masè, Michela; Cristoforetti, Alessandro; Avogaro, Laura; Tessarolo, Francesco; Piccoli, Federico; Caola, Iole; Pederzolli, Carlo; Graffigna, Angelo; Ravelli, Flavia


    The assessment of collagen structure in cardiac pathology, such as atrial fibrillation (AF), is essential for a complete understanding of the disease. This paper introduces a novel methodology for the quantitative description of collagen network properties, based on the combination of nonlinear optical microscopy with a spectral approach to image processing and analysis. Second-harmonic generation (SHG) microscopy was applied to atrial tissue samples from cardiac surgery patients, providing label-free, selective visualization of the collagen structure. The spectral analysis framework, based on 2D-FFT, was applied to the SHG images, yielding a multiparametric description of collagen fiber orientation (angle and anisotropy indexes) and texture scale (dominant wavelength and peak dispersion indexes). The proof-of-concept application of the methodology showed the capability of our approach to detect and quantify differences in the structural properties of the collagen network in AF versus sinus rhythm patients. These results suggest the potential of our approach in the assessment of collagen properties in cardiac pathologies related to a fibrotic structural component.
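The 2D-FFT orientation analysis can be illustrated on a synthetic striped image: the peak of the DC-suppressed power spectrum gives the dominant texture direction and wavelength. This is a toy version of the angle and wavelength indices, not the authors' full multiparametric pipeline:

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n]
theta, period = np.deg2rad(30.0), 8.0                     # fiber angle, spacing (px)
k = 2 * np.pi / period
img = np.sin(k * (np.cos(theta) * x + np.sin(theta) * y))  # synthetic "fibers"

F = np.fft.fftshift(np.abs(np.fft.fft2(img)) ** 2)         # 2D power spectrum
F[n // 2, n // 2] = 0.0                                    # suppress the DC term
iy, ix = np.unravel_index(np.argmax(F), F.shape)
ky, kx = iy - n // 2, ix - n // 2
angle = np.rad2deg(np.arctan2(ky, kx)) % 180.0             # dominant direction (deg)
wavelength = n / np.hypot(kx, ky)                          # dominant period (px)
```

An anisotropy index in the spirit of the paper could then be formed from how concentrated the spectral energy is around this peak direction.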

  18. The assimilation of spectral sensing and the WOFOST model for the dynamic simulation of cadmium accumulation in rice tissues (United States)

    Wu, Ling; Liu, Xiangnan; Wang, Ping; Zhou, Botian; Liu, Meiling; Li, Xuqing


    The accurate detection of heavy metal-induced stress on crop growth is important for food security and for agricultural, ecological and environmental protection. Spectral sensing offers an efficient, non-destructive observation tool for monitoring soil and vegetation contamination. This study proposes a methodology for dynamically estimating the total cadmium (Cd) accumulation in rice tissues by assimilating spectral information into the WOFOST (World Food Study) model. Based on the differences among ground hyperspectral data of rice from three experimental fields with different Cd concentration levels, the spectral indices MCARI1, NREP and RH were selected to reflect the stress condition and dry matter production of rice. By assimilating these sensitive spectral indices into the WOFOST + PROSPECT + SAIL model to optimize the Cd pollution stress factor fwi, the simulated dry matter production processes of rice were adjusted. Based on the relation between dry matter production and Cd accumulation, we then dynamically simulated the Cd accumulation in rice tissues. The results showed that the method performed well in dynamically estimating the total Cd accumulation in rice tissues, with R2 above 85%. This study suggests that integrating spectral information with a crop growth model can successfully simulate Cd accumulation in rice tissues dynamically.

  19. Design and simulations of a spectral efficient optical code division multiple access scheme using alternated energy differentiation and single-user soft-decision demodulation (United States)

    A. Garba, Aminata


    This paper presents a new approach to optical Code Division Multiple Access (CDMA) network transmission using alternated amplitude sequences and energy differentiation at the transmitters to allow concurrent and secure transmission of several signals. The proposed system uses error control encoding and soft-decision demodulation to reduce multi-user interference at the receivers. The designs of the proposed alternated amplitude sequences, the OCDMA energy modulators and the soft-decision, single-user demodulators are also presented. Simulation results show that the proposed scheme achieves spectral efficiencies higher than several reported results for optical CDMA and much higher than the Gaussian CDMA capacity limit.

  20. A spectral k-means approach to bright-field cell image segmentation. (United States)

    Bradbury, Laura; Wan, Justin W L


    Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to accomplish due to the complex nature of cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing appropriate eigenvectors of the matrix graph and applying the k-means algorithm. We illustrate the effectiveness of the method with segmentation results for C2C12 (muscle) cells in bright-field images.

  1. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David; Thompson, Sandra E.


    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  2. The experimental vibrational infrared spectrum of lemon peel and simulation of spectral properties of the plant cell wall (United States)

    Berezin, K. V.; Shagautdinova, I. T.; Chernavina, M. L.; Novoselova, A. V.; Dvoretskii, K. N.; Likhter, A. M.


    The experimental vibrational IR spectra of the outer part of lemon peel are recorded in the range of 3800-650 cm-1. The effect of artificial and natural dehydration of the peel on its vibrational spectrum is studied. It is shown that the colored outer layer of lemon peel does not have a noticeable effect on the vibrational spectrum. Upon 28-day storage of a lemon under natural laboratory conditions, only sequential dehydration processes are reflected in the vibrational spectrum of the peel. Within the framework of the theoretical DFT/B3LYP/6-31G(d) method, a model of a plant cell wall is developed, consisting of a number of polymeric dietary-fiber molecules (cellulose, hemicellulose, pectin, lignin), some polyphenolic compounds (the flavonoid hesperetin glycoside), and a free water cluster. Using a supermolecular approach, the spectral properties of the lemon peel cell wall were simulated, and a detailed theoretical interpretation of the recorded vibrational spectrum is given.

  3. Gas turbine system simulation: An object-oriented approach (United States)

    Drummond, Colin K.; Follen, Gregory J.; Putt, Charles W.


    A prototype gas turbine engine simulation has been developed that offers a generalized framework for the simulation of engines subject to steady-state and transient operating conditions. The prototype is in preliminary form, but it successfully demonstrates the viability of an object-oriented approach for generalized simulation applications. Although object-oriented programming languages are, relative to FORTRAN, somewhat austere, it is proposed that gas turbine simulations of an interdisciplinary nature will benefit significantly in terms of code reliability, maintainability, and manageability. This report elucidates specific gas turbine simulation obstacles that an object-oriented framework can overcome and describes the opportunity for interdisciplinary simulation that the approach offers.

  4. Quasistatic field simulations based on finite elements and spectral methods applied to superconducting magnets

    International Nuclear Information System (INIS)

    Koch, Stephan


    This thesis is concerned with the numerical simulation of electromagnetic fields in the quasi-static approximation, which is applicable in many practical cases. The main emphasis is put on higher-order finite element methods. Quasi-static applications can be found, e.g., in accelerator physics in terms of the design of magnets required for beam guidance, in power engineering, as well as in high-voltage engineering. Especially during the first design and optimization phase of such devices, numerical models offer a cheap alternative to the often costly assembly of prototypes. However, large differences in the magnitude of the material parameters and the geometric dimensions, as well as in the time scales of the electromagnetic phenomena involved, lead to an unacceptably long simulation time or an inadequately large memory requirement. Under certain circumstances, the simulation itself and, in turn, the desired design improvement become impossible. In the context of this thesis, two strategies aiming at extending the range of application of numerical simulations based on the finite element method are pursued. The first strategy consists in parallelizing existing methods such that the computation can be distributed over several computers or cores of a processor. As a consequence, it becomes feasible to simulate a larger range of devices featuring more degrees of freedom in the numerical model than before. This is illustrated for the calculation of the electromagnetic fields, in particular of the eddy-current losses, inside a superconducting dipole magnet developed at the GSI Helmholtzzentrum fuer Schwerionenforschung as a part of the FAIR project. As the second strategy to improve the efficiency of numerical simulations, a hybrid discretization scheme exploiting certain geometrical symmetries is established. Using this method, a significant reduction of the numerical effort in terms of required degrees of freedom for a given accuracy is achieved. The

  5. Simulative Investigation on Spectral Efficiency of Unipolar Codes based OCDMA System using Importance Sampling Technique (United States)

    Farhat, A.; Menif, M.; Rezig, H.


    This paper analyses the spectral efficiency of an Optical Code Division Multiple Access (OCDMA) system using the Importance Sampling (IS) technique. We consider three configurations of the OCDMA system, namely Direct Sequence (DS), Spectral Amplitude Coding (SAC) and Fast Frequency Hopping (FFH), that exploit Fiber Bragg Grating (FBG) based encoders/decoders. We evaluate the spectral efficiency of the considered system by taking into account the effect of different families of unipolar codes for both coherent and incoherent sources. The results show that the spectral efficiency of the OCDMA system with a coherent source is higher than in the incoherent case. We also demonstrate that DS-OCDMA outperforms the other two configurations in terms of spectral efficiency under all conditions.
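The importance-sampling idea, estimating a rare-event probability by drawing from a shifted proposal and reweighting by the likelihood ratio, can be sketched on a Gaussian toy problem. The paper applies IS to OCDMA error rates; this example shows only the generic mechanism:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
t = 4.0
exact = 0.5 * erfc(t / sqrt(2.0))                  # P(X > 4) for X ~ N(0, 1)

# Draw from the shifted proposal N(t, 1): the rare event becomes common,
# and each sample is reweighted by the density ratio p(x)/q(x).
n = 20_000
x = rng.normal(t, 1.0, n)
lr = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - t) ** 2)   # N(0,1) / N(t,1)
est = np.mean((x > t) * lr)
```

A naive Monte Carlo estimate of this probability (about 3.2e-5) would need millions of samples for comparable accuracy; the shifted proposal achieves percent-level relative error with 20,000.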

  6. Spectral Unmixing of Forest Crown Components at Close Range, Airborne and Simulated Sentinel-2 and EnMAP Spectral Imaging Scale

    Directory of Open Access Journals (Sweden)

    Anne Clasen


    Forest biochemical and biophysical variables and their spatial and temporal distribution are essential inputs to process-orientated ecosystem models. To provide this information, imaging spectroscopy appears to be a promising tool. In this context, the present study investigates the potential of spectral unmixing to derive sub-pixel crown component fractions in a temperate deciduous forest ecosystem. However, the high proportion of foliage in this complex vegetation structure leads to the problem of saturation effects, when applying broadband vegetation indices. This study illustrates that multiple endmember spectral mixture analysis (MESMA can contribute to overcoming this challenge. Reference fractional abundances, as well as spectral measurements of the canopy components, could be precisely determined from a crane measurement platform situated in a deciduous forest in North-East Germany. In contrast to most other studies, which only use leaf and soil endmembers, this experimental setup allowed for the inclusion of a bark endmember for the unmixing of components within the canopy. This study demonstrates that the inclusion of additional endmembers markedly improves the accuracy. A mean absolute error of 7.9% could be achieved for the fractional occurrence of the leaf endmember and 5.9% for the bark endmember. In order to evaluate the results of this field-based study for airborne and satellite-based remote sensing applications, a transfer to Airborne Imaging Spectrometer for Applications (AISA and simulated Environmental Mapping and Analysis Program (EnMAP and Sentinel-2 imagery was carried out. All sensors were capable of unmixing crown components with a mean absolute error ranging between 3% and 21%.
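Spectral mixture analysis of the simplest kind (single endmember set, sum-to-one constrained) can be sketched with synthetic endmembers; MESMA additionally iterates over multiple candidate endmember sets per pixel, which is omitted here. All spectra below are smooth stand-ins, not library spectra:

```python
import numpy as np

wl = np.linspace(400, 2400, 50)                      # wavelength grid (nm)
# Hypothetical endmember spectra (leaf, bark, soil).
leaf = 0.5 * np.exp(-0.5 * ((wl - 800.0) / 300.0) ** 2)
bark = 0.20 + 1e-4 * (wl - 400.0)
soil = 0.15 + 5e-5 * (wl - 400.0)
E = np.stack([leaf, bark, soil], axis=1)             # (bands, endmembers)

f_true = np.array([0.6, 0.3, 0.1])                   # true crown fractions
rng = np.random.default_rng(4)
pixel = E @ f_true + 0.0005 * rng.standard_normal(len(wl))

# Sum-to-one constrained least squares via a heavily weighted extra row.
A = np.vstack([E, 100.0 * np.ones((1, 3))])
b = np.concatenate([pixel, [100.0]])
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)        # estimated fractions
```

Real applications would also enforce nonnegativity of the fractions and, as in MESMA, select the best-fitting endmember combination per pixel.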

  7. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method (United States)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.


    A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.

  8. A New Spectral Shape-Based Record Selection Approach Using Np and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Edén Bojórquez


    With the aim of improving code-based real record selection criteria, an approach based on a proxy parameter for spectral shape, named Np, is analyzed. The procedure pursues several objectives aimed at minimizing the record-to-record variability of the ground motions selected for seismic structural assessment. To select the best set of ground motion records to be used as input for nonlinear dynamic analysis, an optimization approach using genetic algorithms is applied, focused on finding the set of records most compatible with a target spectrum and target Np values. The results of the new Np-based approach suggest that the real accelerograms obtained with this procedure reduce the scatter of the response spectra compared with the traditional approach; furthermore, the mean spectrum of the selected set is very similar to the target seismic design spectrum over the period range of interest, and at the same time similar Np values are obtained for the selected records and the target spectrum.
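The selection idea can be sketched as a stripped-down, elitist, mutation-only evolutionary search over record subsets. A full GA would add crossover and an Np-matching term; the objective below only matches a toy target spectrum:

```python
import numpy as np

rng = np.random.default_rng(5)
periods = np.linspace(0.1, 3.0, 30)
target = 0.8 * np.exp(-((periods - 0.5) ** 2) / 0.5)    # toy design spectrum

# Candidate "records": scaled, noisy variants of the target shape.
cands = np.abs(target[None, :] * rng.uniform(0.3, 1.8, (50, 1))
               + 0.1 * rng.standard_normal((50, 30)))

def misfit(idx):
    """Distance of the selected set's mean spectrum from the target."""
    return float(np.mean((cands[np.asarray(idx)].mean(axis=0) - target) ** 2))

pop = [rng.choice(50, 7, replace=False) for _ in range(20)]   # 7-record subsets
init_misfit = min(misfit(m) for m in pop)

for _ in range(100):
    pop.sort(key=misfit)
    elite = pop[:5]                                     # elitism: keep the best
    children = []
    while len(children) < 15:
        child = elite[rng.integers(5)].copy()
        child[rng.integers(7)] = rng.integers(50)       # mutate one record slot
        if len(set(child.tolist())) == 7:               # keep subsets duplicate-free
            children.append(child)
    pop = elite + children

best_misfit = misfit(min(pop, key=misfit))
```

Because the elite survive unchanged, the best misfit is nonincreasing across generations, which is the property the test checks.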

  9. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama


    In this paper, we consider the resource allocation problem for the energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses EE as the objective function and imposes constraints either on SE or on the achievable rate, we propose a multiobjective optimization approach that can flexibly switch between the EE and SE objectives or change the priority level of each objective using a trade-off parameter. Our dynamic approach is more tractable than conventional approaches and better suited to realistic communication applications and scenarios. We prove that the multiobjective optimization of EE and SE is equivalent to a simpler problem that maximizes the achievable rate/SE and minimizes the total power consumption. We then apply the generalized resource allocation framework for the EE-SE trade-off to optimally allocate subcarrier power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and to study the effects of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
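The role of the trade-off parameter can be illustrated by scalarizing the two objectives for a single link: maximizing alpha*SE - (1 - alpha)*power over a power grid traces the EE-SE frontier. The gain, circuit power, and budget values below are arbitrary, and this scalar toy omits the paper's OFDM subcarrier allocation and channel estimation error:

```python
import numpy as np

g, p_c, p_max = 4.0, 0.5, 10.0                 # channel gain, circuit power, budget
p = np.linspace(1e-3, p_max, 2000)             # transmit power grid
se = np.log2(1.0 + g * p)                      # spectral efficiency (bit/s/Hz)
ee = se / (p + p_c)                            # energy efficiency (per unit power)

# Scalarised objective: alpha weights SE against total transmit power.
opt_p, opt_se, opt_ee = [], [], []
for alpha in np.linspace(0.3, 0.95, 8):
    i = int(np.argmax(alpha * se - (1.0 - alpha) * p))
    opt_p.append(p[i]); opt_se.append(se[i]); opt_ee.append(ee[i])
```

Raising alpha shifts the optimum toward higher power and higher SE at the cost of EE, which is exactly the "priority level" behavior the abstract describes.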

  10. Total ozone retrieval from GOME UV spectral data using the weighting function DOAS approach

    Directory of Open Access Journals (Sweden)

    M. Coldewey-Egbers


    A new algorithm called Weighting Function Differential Optical Absorption Spectroscopy (WFDOAS) is presented, which has been developed to retrieve total ozone columns from nadir observations of the Global Ozone Monitoring Experiment (GOME). By fitting the vertically integrated ozone weighting function rather than the ozone cross-section to the sun-normalized radiances, a direct retrieval of vertical column amounts is possible. The new WFDOAS approach takes into account the slant path wavelength modulation that is usually neglected in the standard DOAS approach using single airmass factors. This paper focuses on the algorithm description and error analysis, while a companion paper by Weber et al. (2004) presents a detailed validation with ground-based measurements. For the first time, several auxiliary quantities directly derived from the GOME spectral range, such as cloud-top height and cloud fraction (O2-A band) and effective albedo using the Lambertian Equivalent Reflectivity (LER) near 377 nm, are used in combination as input to the ozone retrieval. In addition, the varying ozone-dependent contribution to the Raman correction in scattered light, known as the Ring effect, has been included. The molecular ozone filling-in that is accounted for in the new algorithm makes the largest contribution to the improvement of WFDOAS total ozone results over the operational product. The precision of the total ozone retrieval is estimated to be better than 3% for solar zenith angles below 80°.

  11. Spectral methods for uncertainty propagation in numerical simulation; Methodes spectrales robustes pour la propagation d'incertitudes en simulation numerique

    Energy Technology Data Exchange (ETDEWEB)

    Crestaux, Th. [CEA Saclay, Dept. Modelisation de Systemes et Structures (DEN/DANS/DM2S/SFME), 91 - Gif sur Yvette (France)


    The context of this thesis is the development of numerical simulation for industrial processes. It aims to study and develop methods for reducing the numerical cost of computing Polynomial Chaos expansions. The implementation concerns problems of high stochastic dimension, and more particularly the transport model of radionuclides in radioactive waste disposal. (A.L.B.)
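A minimal Polynomial Chaos sketch in one stochastic dimension: expand f(xi) = exp(xi), xi ~ N(0,1), in probabilists' Hermite polynomials via Gauss-Hermite quadrature, and read the mean and variance off the coefficients. The thesis targets high-dimensional problems, where reducing the cost of exactly this kind of computation is the point:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Quadrature rule for the standard normal measure.
nodes, weights = He.hermegauss(40)
weights = weights / weights.sum()                 # normalise so E[1] = 1
f = np.exp(nodes)

# PCE coefficients c_k = E[f(xi) He_k(xi)] / k!  (He_k orthogonal, E[He_k^2] = k!).
K = 10
coeff = np.array([np.sum(weights * f * He.hermeval(nodes, np.eye(K)[k]))
                  / math.factorial(k) for k in range(K)])

mean_pce = coeff[0]                               # exact mean is exp(1/2)
var_pce = sum(coeff[k] ** 2 * math.factorial(k)   # exact variance is e^2 - e
              for k in range(1, K))
```

Ten terms already reproduce the exact moments to high accuracy here; in many stochastic dimensions the number of coefficients explodes combinatorially, motivating the cost-reduction methods the thesis studies.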

  12. A phased approach to enable hybrid simulation of complex structures (United States)

    Spencer, Billie F.; Chang, Chia-Ming; Frankie, Thomas M.; Kuchma, Daniel A.; Silva, Pedro F.; Abdelnaby, Adel E.


    Hybrid simulation has been shown to be a cost-effective approach for assessing the seismic performance of structures. In hybrid simulation, critical parts of a structure are physically tested, while the remaining portions of the system are concurrently simulated computationally, typically using a finite element model. This combination is realized through a numerical time-integration scheme, which allows investigation of full system-level responses of a structure in a cost-effective manner. However, conducting hybrid simulation of complex structures within large-scale testing facilities presents significant challenges. For example, the chosen modeling scheme may create numerical inaccuracies or even result in unstable simulations; the displacement and force capacity of the experimental system can be exceeded; and a hybrid test may be terminated due to poor communication between modules (e.g., loading controllers, data acquisition systems, simulation coordinator). These problems can cause the simulation to stop suddenly, and in some cases can even result in damage to the experimental specimens; the end result can be failure of the entire experiment. This study proposes a phased approach to hybrid simulation that can validate all of the hybrid simulation components and ensure the integrity of large-scale hybrid simulations. In this approach, a series of hybrid simulations employing numerical components and small-scale experimental components are examined to establish readiness for the large-scale experiment. This validation program is incorporated into an existing, mature hybrid simulation framework, which is currently utilized in the Multi-Axial Full-Scale Sub-Structuring Testing and Simulation (MUST-SIM) facility of the George E. Brown Network for Earthquake Engineering Simulation (NEES) equipment site at the University of Illinois at Urbana-Champaign. A hybrid simulation of a four-span curved bridge is presented as an example, in which three piers are

  13. Combining the ensemble and Franck-Condon approaches for calculating spectral shapes of molecules in solution (United States)

    Zuehlsdorff, T. J.; Isborn, C. M.


    The correct treatment of vibronic effects is vital for the modeling of absorption spectra of many solvated dyes. Vibronic spectra for small dyes in solution can be easily computed within the Franck-Condon approximation using an implicit solvent model. However, implicit solvent models neglect specific solute-solvent interactions on the electronic excited state. On the other hand, a straightforward way to account for solute-solvent interactions and temperature-dependent broadening is by computing vertical excitation energies obtained from an ensemble of solute-solvent conformations. Ensemble approaches usually do not account for vibronic transitions and thus often produce spectral shapes in poor agreement with experiment. We address these shortcomings by combining zero-temperature vibronic fine structure with vertical excitations computed for a room-temperature ensemble of solute-solvent configurations. In this combined approach, all temperature-dependent broadening is treated classically through the sampling of configurations and quantum mechanical vibronic contributions are included as a zero-temperature correction to each vertical transition. In our calculation of the vertical excitations, significant regions of the solvent environment are treated fully quantum mechanically to account for solute-solvent polarization and charge-transfer. For the Franck-Condon calculations, a small amount of frozen explicit solvent is considered in order to capture solvent effects on the vibronic shape function. We test the proposed method by comparing calculated and experimental absorption spectra of Nile red and the green fluorescent protein chromophore in polar and non-polar solvents. For systems with strong solute-solvent interactions, the combined approach yields significant improvements over the ensemble approach. For systems with weak to moderate solute-solvent interactions, both the high-energy vibronic tail and the width of the spectra are in excellent agreement with

  14. Spectral Induced Polarization approaches to characterize reactive transport parameters and processes (United States)

    Schmutz, M.; Franceschi, M.; Revil, A.; Peruzzo, L.; Maury, T.; Vaudelet, P.; Ghorbani, A.; Hubbard, S. S.


    For almost a decade, geophysical methods have explored the potential for characterization of reactive transport parameters and processes relevant to hydrogeology, contaminant remediation, and oil and gas applications. Spectral Induced Polarization (SIP) methods show particular promise in this endeavour, given the sensitivity of the SIP signature to the electrical double layer properties of geological material and the critical role of the electrical double layer in reactive transport processes, such as adsorption. In this presentation, we discuss results from several recent studies performed to quantify the value of SIP parameters for characterizing reactive transport parameters. The advances have been realized by performing experimental studies and interpreting their responses using theoretical and numerical approaches. We describe a series of controlled experimental studies performed to quantify the SIP responses to variations in grain size and specific surface area, pore fluid geochemistry, and other factors. We also model chemical reactions at the fluid/matrix interface linked to part of our experimental data set. For some examples, both geochemical modelling and measurements are integrated into a physico-chemically based SIP model. Our studies indicate both the potential of and the opportunity for using SIP to estimate reactive transport parameters. For samples with well-sorted granulometry, we find that the grain size (as well as the permeability, in some specific examples) can be estimated using SIP. We show that SIP is sensitive to physico-chemical conditions at the fluid/mineral interface, including different dissolved pore fluid ions (Na+, Cu2+, Zn2+, Pb2+), owing to their different adsorption behavior. We also show the relevance of our approach for characterizing the fluid/matrix interaction for various organic contents (wetting and non-wetting oils). We also discuss early efforts to jointly

  15. A modular approach to simulator architecture

    International Nuclear Information System (INIS)

    Ray, R.N.


    A modular design of hardware and software for power plant training simulators is discussed. The hardware consists of a multicomputer configuration using TDC-316 minicomputers with shared memory. The model software, which represents the major share of the software development effort, is developed using Model Statement Language (MSL). Salient features of MSL are also discussed. (auth.)

  16. Common modelling approaches for training simulators for nuclear power plants

    International Nuclear Information System (INIS)


    Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: ''To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements''. Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, and the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of models and hence the type of computer complex that was required, and the criteria for fidelity and verification, and that the training requirements were therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to consider only a few aspects of training simulators. This report reflects these limitations, and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs


    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook


    The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
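The Fisher Matrix shortcut can be sketched on a toy two-parameter model: build the Jacobian of the model at fiducial parameters, form F = J^T J / sigma^2 for independent Gaussian noise, and read 1-sigma uncertainties from the diagonal of the inverse. An exponential decay stands in for an SED here, and the parameter values are arbitrary:

```python
import numpy as np

def fisher_uncertainties(t, sigma, A=2.0, tau=1.5):
    """1-sigma errors for y = A*exp(-t/tau) from the Fisher matrix,
    assuming independent Gaussian noise of standard deviation sigma."""
    dA = np.exp(-t / tau)                         # dy/dA
    dtau = A * t / tau**2 * np.exp(-t / tau)      # dy/dtau
    J = np.stack([dA, dtau], axis=1)              # Jacobian at the fiducial point
    F = J.T @ J / sigma**2                        # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(F)))     # Cramer-Rao 1-sigma bounds

t20 = np.linspace(0.1, 5.0, 20)                   # sparse sampling
t80 = np.linspace(0.1, 5.0, 80)                   # denser sampling
err20 = fisher_uncertainties(t20, 0.1)
err80 = fisher_uncertainties(t80, 0.1)
```

The survey-design use case in the abstract corresponds to comparing such forecast uncertainties across candidate observing setups, here, denser sampling shrinks both parameter errors, without running a full PDF-fitting analysis for each setup.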

  18. Analyzing the Chemical and Spectral Effects of Pulsed Laser Irradiation to Simulate Space Weathering of a Carbonaceous Chondrite (United States)

    Thompson, M. S.; Keller, L. P.; Christoffersen, R.; Loeffler, M. J.; Morris, R. V.; Graff, T. G.; Rahman, Z.


    Space weathering processes alter the chemical composition, microstructure, and spectral characteristics of material on the surfaces of airless bodies. The mechanisms driving space weathering include solar wind irradiation and the melting, vaporization and recondensation effects associated with micrometeorite impacts, e.g., [1]. While much work has been done to understand space weathering of lunar and ordinary chondritic materials, the effects of these processes on hydrated carbonaceous chondrites are poorly understood. Analysis of space weathering of carbonaceous materials will be critical for understanding the nature of samples returned by upcoming missions targeting primitive, organic-rich bodies (e.g., OSIRIS-REx and Hayabusa 2). Recent experiments have shown that the spectral properties of carbonaceous materials and associated minerals are altered by simulated weathering events, e.g., [2-5]. However, the resulting type of alteration, i.e., reddening vs. bluing of the reflectance spectrum, is not consistent across all experiments [2-5]. In addition, the microstructural and crystal chemical effects of many of these experiments have not been well characterized, making it difficult to attribute spectral changes to specific mineralogical or chemical changes in the samples. Here we report results of a pulsed laser irradiation experiment on a chip of the Murchison CM2 carbonaceous chondrite to simulate micrometeorite impact processing.

  19. A framework for efficient irregular wave simulations using Higher Order Spectral method coupled with viscous two phase model

    Directory of Open Access Journals (Sweden)

    Inno Gatin


    Full Text Available In this paper a framework for efficient irregular wave simulations using the Higher Order Spectral method coupled with a fully nonlinear, viscous, two-phase Computational Fluid Dynamics (CFD) model is presented. The CFD model is based on solution decomposition via the Spectral Wave Explicit Navier–Stokes Equation method, allowing efficient coupling with arbitrary potential flow solutions. The Higher Order Spectral method is a pseudo-spectral, potential flow method for solving nonlinear free surface boundary conditions up to an arbitrary order of nonlinearity. It is capable of efficient long-time nonlinear propagation of arbitrary input wave spectra, which can be used to obtain realistic extreme waves. To facilitate the coupling strategy, the Higher Order Spectral method is implemented in foam-extend alongside the CFD model. Validation of the Higher Order Spectral method is performed on three test cases including monochromatic and irregular wave fields. Additionally, the coupling between the Higher Order Spectral method and CFD is validated on a three-hour irregular wave propagation. Finally, a simulation of a 3D extreme wave encountering a full-scale container ship is shown.

  20. Parallel spectral methods and applications to simulations of compressible mixing layers


    Male, Jean-Michel; Fezoui, Loula


    Solving the Navier-Stokes equations with spectral methods for compressible flows can be quite demanding in computation time. We therefore study the parallelization of such an algorithm and its implementation on a massively parallel machine, the Connection Machine CM-2. The spectral method adapts well to the requirements of massive parallelism, but one of the basic tools of this method, the fast Fourier transform (when it must be applied along the two dime...

  1. Validation of the spectral mismatch correction factor using an LED-based solar simulator

    DEFF Research Database (Denmark)

    Riedel, Nicholas; Santamaria Lancia, Adrian Alejo; Thorsteinsson, Sune

    -halide light sources provide. In this work we will use an EcoSun10L LED module tester from Ecoprogetti to perform short circuit current (ISC) measurements under various class A, B and C spectra. We will apply a spectral mismatch correction to the measured ISC under each test spectrum per IEC 60904-7. In all...... scenarios, a small area mono-Si cell is used as the reference cell and a similar mono-Si cell is used as the PV device under test (DUT). Finally, we quantify the variation of the DUT’s measured and spectrally corrected ISC under the class A, B and C test spectra....
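
    As a concrete illustration, the spectral mismatch factor of IEC 60904-7 combines four integrals of spectra and spectral responses; a minimal numerical sketch of one common formulation follows. The Gaussian "spectra" and ramp-like responses are made-up stand-ins, not AM1.5G or measured device data.

```python
import numpy as np

def integ(y, x):
    # Trapezoidal integration (avoids version differences in np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) * 0.5)

# Spectral mismatch factor, one common formulation of IEC 60904-7.
# E_ref: reference spectrum (e.g. AM1.5G), E_sim: simulator spectrum,
# SR_ref / SR_dut: spectral responses of reference cell and DUT.
# All arrays are sampled on the same wavelength grid `wl` (nm).
def mismatch_factor(wl, E_ref, E_sim, SR_ref, SR_dut):
    num = integ(E_sim * SR_dut, wl) * integ(E_ref * SR_ref, wl)
    den = integ(E_sim * SR_ref, wl) * integ(E_ref * SR_dut, wl)
    return num / den

# Illustrative (made-up) spectra and responses:
wl = np.linspace(300, 1200, 901)
E_ref = np.exp(-((wl - 600) / 250.0)**2)        # stand-in for AM1.5G
E_sim = np.exp(-((wl - 650) / 180.0)**2)        # LED simulator spectrum
SR_ref = np.clip((wl - 350) / 700.0, 0, 1)      # mono-Si-like response
SR_dut = np.clip((wl - 360) / 690.0, 0, 1)

MM = mismatch_factor(wl, E_ref, E_sim, SR_ref, SR_dut)
# If reference cell and DUT had identical responses, MM would be exactly 1.
print(MM)
```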

  2. Approaching Sentient Building Performance Simulation Systems

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer; Perkov, Thomas; Heller, Alfred


    Sentient BPS systems can combine one or more high-precision BPS and provide near-instantaneous performance feedback directly in the design tool, thus providing speed and precision of building performance in the early design stages. Sentient BPS systems are essentially combining: 1) design tools, 2) parametric tools, 3) BPS tools, 4) dynamic databases, 5) interpolation techniques and 6) prediction techniques as a fast and valid simulation system, in the early design stage....


    Energy Technology Data Exchange (ETDEWEB)

    Ewall-Wice, Aaron; Hewitt, Jacqueline; Neben, Abraham R. [MIT Kavli Institute for Cosmological Physics, Cambridge, MA, 02139 (United States); Bradley, Richard; Dickenson, Roger; Doolittle, Phillip; Egan, Dennis; Hedrick, Mike; Klima, Patricia [National Radio Astronomy Observatory, Charlottesville, VA (United States); Deboer, David; Parsons, Aaron; Ali, Zaki S.; Cheng, Carina; Patra, Nipanjana; Dillon, Joshua S. [Department of Astronomy, University of California, Berkeley, CA (United States); Aguirre, James [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA (United States); Bowman, Judd; Thyagarajan, Nithyanandan [Arizona State University, School of Earth and Space Exploration, Tempe, AZ 85287 (United States); Venter, Mariet [Department of Electrical and Electronic Engineering, Stellenbosch University, Stellenbosch, SA (South Africa); Acedo, Eloy de Lera [Cavendish Laboratory, University of Cambridge, Cambridge (United Kingdom); and others


    We use time-domain electromagnetic simulations to determine the spectral characteristics of the Hydrogen Epoch of Reionization Array (HERA) antenna. These simulations are part of a multi-faceted campaign to determine the effectiveness of the dish’s design for obtaining a detection of redshifted 21 cm emission from the epoch of reionization. Our simulations show the existence of reflections between HERA’s suspended feed and its parabolic dish reflector that fall below -40 dB at 150 ns and, for reasonable impedance matches, have a negligible impact on HERA’s ability to constrain EoR parameters. It follows that despite the reflections they introduce, dishes are effective for increasing the sensitivity of EoR experiments at a relatively low cost. We find that electromagnetic resonances in the HERA feed’s cylindrical skirt, which is intended to reduce cross coupling and beam ellipticity, introduce significant power at large delays (-40 dB at 200 ns), which can lead to some loss of measurable Fourier modes and a modest reduction in sensitivity. Even in the presence of this structure, we find that the spectral response of the antenna is sufficiently smooth for delay filtering to contain foreground emission at line-of-sight wave numbers below k∥ ≲ 0.2 h Mpc⁻¹, in the region where the current PAPER experiment operates. Incorporating these results into a Fisher Matrix analysis, we find that the spectral structure observed in our simulations has only a small effect on the tight constraints HERA can achieve on parameters associated with the astrophysics of reionization.

  4. Time-dependent algorithms for the simulation of viscoelastic flows with spectral element methods: applications and stability

    International Nuclear Information System (INIS)

    Fietier, Nicolas; Deville, Michel O.


    This paper presents the development of spectral element methods to simulate unsteady flows of viscoelastic fluids using a closed-form differential constitutive equation. The generation and decay of planar Poiseuille flows are considered as benchmark problems to test the abilities of our computational method to deal with truly time-dependent flows. Satisfactory results converging toward steady-state regimes have been obtained for the flow through a four-to-one planar abrupt contraction with unsteady algorithms. Time-dependent simulations of viscoelastic flows are prone to numerical instabilities even for simple geometrical configurations. Possible methods to improve the numerical stability of the computational algorithms are discussed in view of the results obtained from numerical simulations of the flows through a straight channel and the four-to-one contraction.

  5. Simulation tools for scattering corrections in spectrally resolved X-ray Computed Tomography using McXtrace

    DEFF Research Database (Denmark)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.


    -ray and the sample is the incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem, due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate...... the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single...... PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially...

  6. Interferometric vs Spectral IASI Radiances: Effective Data-Reduction Approaches for the Satellite Sounding of Atmospheric Thermodynamical Parameters

    Directory of Open Access Journals (Sweden)

    Giuseppe Grieco


    Full Text Available Two data-reduction approaches for the Infrared Atmospheric Sounding Interferometer (IASI) satellite instrument are discussed and compared. The approaches are intended for the purpose of devising and implementing fast near-real-time retrievals of atmospheric thermodynamical parameters. One approach is based on the usual selection of sparse channels or portions of the spectrum. This approach may preserve the spectral resolution, but at the expense of the spectral coverage. The second approach considers a suitable truncation of the interferogram (the Fourier transform of the spectrum) at points below the nominal maximum optical path difference. This second approach is consistent with the Shannon-Whittaker sampling theorem and preserves the full spectral coverage, but at the expense of the spectral resolution. While the first data-reduction acts within the spectral domain, the second can be performed within the interferogram domain and without any specific need to go back to the spectral domain for the purpose of retrieval. To assess the impact of these two different data-reduction strategies on the retrieval of atmospheric parameters, we have used a statistical retrieval algorithm for skin temperature, temperature, water vapour and ozone profiles. The use of this retrieval algorithm is mostly intended for illustrative purposes and the user could choose a different inverse strategy. In fact, the interferogram-based data-reduction strategy is generic and independent of any inverse algorithm. It will also be shown that this strategy yields a subset of interferometric radiances which is less sensitive to potential interfering effects such as those possibly introduced by the day-night cycle (e.g., the solar component) and spectroscopic effects induced by sun energy and unknown trace gases variability.
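
    The interferogram-truncation idea can be sketched numerically: transforming a spectrum to the interferogram domain, zeroing samples beyond a reduced maximum optical path difference, and transforming back yields a spectrum with full coverage but degraded resolution. The grid and the synthetic line below are illustrative, not IASI data.

```python
import numpy as np

# Sketch of interferogram-domain data reduction: truncating the
# interferogram (Fourier transform of the spectrum) below the nominal
# maximum optical path difference reduces spectral resolution while
# preserving full spectral coverage. Grid and line shape are illustrative.
n = 1024
nu = np.arange(n)                       # wavenumber channels (arbitrary)
spectrum = np.ones(n) + 0.5 * np.exp(-((nu - 400) / 3.0)**2)  # a narrow line

interferogram = np.fft.irfft(spectrum)  # "interferogram" domain
keep = len(interferogram) // 8          # retain 1/8 of the OPD samples
truncated = interferogram.copy()
truncated[keep:-keep] = 0.0             # zero out high-OPD samples

reduced = np.fft.rfft(truncated).real   # back to the spectral domain

# Coverage is preserved (same channels); the line is broadened instead.
print(reduced.shape, spectrum.shape)
```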

  7. A Monte Carlo simulation of scattering reduction in spectral x-ray computed tomography

    DEFF Research Database (Denmark)

    Busi, Matteo; Olsen, Ulrik Lund; Bergbäck Knudsen, Erik


    photons, enabling spectral analysis of X-ray images. This technique is useful to efficiently extract more information on energy-dependent quantities (e.g. mass attenuation coefficients) and study matter interactions (e.g. X-ray scattering, photoelectric absorption, etc.). Having a good knowledge...

  8. Hidden Statistics Approach to Quantum Simulations (United States)

    Zak, Michail


    Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics, superposition, entanglement, and direct-product decomposability, were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large masses of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the main three properties of quantum physics, superposition, entanglement, and direct-product decomposability, while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatments of the Madelung equation). The role of the

  9. Simulation approach towards energy flexible manufacturing systems

    CERN Document Server

    Beier, Jan


    This authored monograph provides in-depth analysis and methods for aligning the electricity demand of manufacturing systems with variable renewable energy (VRE) supply. The book addresses both long-term system changes and real-time manufacturing execution and control, and the author presents a concept with different options for improved energy flexibility, including battery, compressed air and embodied energy storage. The reader will also find a detailed application procedure as well as an implementation in a simulation prototype software. The book concludes with two case studies. The target audience primarily comprises research experts in the field of green manufacturing systems.

  10. Reusable Component Model Development Approach for Parallel and Distributed Simulation (United States)

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng


    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, couple tightly, and bind closely with simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address the problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751
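
    The two-phase idea, platform-independent computational modules wrapped by a component that exposes standard service interfaces, can be sketched as follows. The six interface names and the radar detection rule are hypothetical placeholders; the paper does not specify them.

```python
# A minimal sketch of the wrapper idea: a domain expert's computational
# module is kept platform-independent, and a thin component wrapper exposes
# a fixed set of service interfaces to whatever simulation engine hosts it.
# The six interface names here are hypothetical; the paper does not list them.
class RadarModule:
    """Domain expert's computational code: no platform dependencies."""
    def detect(self, target_range_km):
        return target_range_km <= 100.0   # toy detection rule

class ComponentModel:
    """Wrapper exposing standard service interfaces to a host platform."""
    def __init__(self, module):
        self.module = module
        self.time = 0.0
    def on_init(self, config): self.config = config
    def on_start(self): self.running = True
    def on_step(self, dt):
        self.time += dt
        return self.module.detect(self.config["range_km"])
    def on_event(self, event): pass
    def on_stop(self): self.running = False
    def on_destroy(self): self.module = None

comp = ComponentModel(RadarModule())
comp.on_init({"range_km": 80.0})
comp.on_start()
print(comp.on_step(0.1))
```

    The point of the split is that `RadarModule` can be handed to a different host by rewriting only the thin wrapper, not the domain code.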

  11. Open Source Approach to Urban Growth Simulation (United States)

    Petrasova, A.; Petras, V.; Van Berkel, D.; Harmon, B. A.; Mitasova, H.; Meentemeyer, R. K.


    Spatial patterns of land use change due to urbanization and its impact on the landscape are the subject of ongoing research. Urban growth scenario simulation is a powerful tool for exploring these impacts and empowering planners to make informed decisions. We present FUTURES (FUTure Urban - Regional Environment Simulation) - a patch-based, stochastic, multi-level land change modeling framework as a case showing how what was once a closed and inaccessible model benefited from integration with open source GIS. We will describe our motivation for releasing this project as open source and the advantages of integrating it with GRASS GIS, a free, libre and open source GIS and research platform for the geospatial domain. GRASS GIS provides efficient libraries for FUTURES model development as well as standard GIS tools and graphical user interface for model users. Releasing FUTURES as a GRASS GIS add-on simplifies the distribution of FUTURES across all main operating systems and ensures the maintainability of our project in the future. We will describe FUTURES integration into GRASS GIS and demonstrate its usage on a case study in Asheville, North Carolina. The developed dataset and tutorial for this case study enable researchers to experiment with the model, explore its potential or even modify the model for their applications.



  13. Residents’ perceptions of simulation as a clinical learning approach (United States)

    Walsh, Catharine M.; Garg, Ankit; Ng, Stella L.; Goyal, Fenny; Grover, Samir C.


    Background: Simulation is increasingly being integrated into medical education; however, there is little research into trainees’ perceptions of this learning modality. We elicited trainees’ perceptions of simulation-based learning, to inform how simulation is developed and applied to support training. Methods: We conducted an instrumental qualitative case study entailing 36 semi-structured one-hour interviews with 12 residents enrolled in an introductory simulation-based course. Trainees were interviewed at three time points: pre-course, post-course, and 4–6 weeks later. Interview transcripts were analyzed using a qualitative descriptive analytic approach. Results: Residents’ perceptions of simulation included: 1) simulation serves pragmatic purposes; 2) simulation provides a safe space; 3) simulation presents perils and pitfalls; and 4) optimal design for simulation: integration and tension. Key findings included residents’ markedly narrow perception of simulation’s capacity to support non-technical skills development or its use beyond introductory learning. Conclusion: Trainees’ learning expectations of simulation were restricted. Educators should critically attend to the way they present simulation to learners as, based on theories of problem-framing, trainees’ a priori perceptions may delimit the focus of their learning experiences. If they view simulation as merely a replica of real cases for the purpose of practicing basic skills, they may fail to benefit from the full scope of learning opportunities afforded by simulation. PMID:28344719

  14. Simulation Approach to Mission Risk and Reliability Analysis, Phase I (United States)

    National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

  15. A generalized 2D and 3D white LED device simulator integrating photon recycling and luminescent spectral conversion effects (United States)

    Ng, Wei-Choon; Letay, Gergö


    We report new capabilities in our Sentaurus-Device simulator for modeling arbitrarily shaped 2D/3D white LEDs by coupling novel photon recycling, luminescent spectral conversion effects and electrical transport self-consistently. In our simulator, the spontaneous emission spectra are embedded in ray tracing and are allowed to evolve as the rays traverse regions of stimulated gain, absorption, and luminescence. In the active quantum well (QW), the spontaneous emission spectrum can be partially amplified by stimulated gain within a certain energy range and absorbed at higher energies, resulting in a modified spontaneous spectrum. The amplified and absorbed parts of the spectrum give a net recombination/generation rate that is fed back to the electrical transport via the continuity equations. This constitutes a novel photon recycling model that includes amplified spontaneous emission. The modified spontaneous spectrum can further be altered by spectral conversion in the luminescent region. In this manner, we capture the important physical effects in white LED structures in a fully coupled and self-consistent electro-opto-thermal simulation.

  16. Simulations of a spectral gamma-ray logging tool response to a surface source distribution on the borehole wall

    International Nuclear Information System (INIS)

    Wilson, R.D.; Conaway, J.G.


    We have developed Monte Carlo and discrete ordinates simulation models for the large-detector spectral gamma-ray (SGR) logging tool in use at the Nevada Test Site. Application of the simulation models produced spectra for source layers on the borehole wall, either from potassium-bearing mudcakes or from plate-out of radon daughter products. Simulations show that the shape and magnitude of gamma-ray spectra from sources distributed on the borehole wall depend on radial position within the air-filled borehole as well as on hole diameter. No such dependence is observed for sources uniformly distributed in the formation. In addition, sources on the borehole wall produce anisotropic angular fluxes at the higher scattered energies and at the source energy. These differences in borehole effects and in angular flux are important to the process of correcting SGR logs for the presence of potassium mudcakes; they also suggest a technique for distinguishing between spectral contributions from formation sources and sources on the borehole wall. These results imply the existence of a standoff effect not present for spectra measured in air-filled boreholes from formation sources. 5 refs., 11 figs

  17. Experimental study and numerical simulations of the spectral properties of XUV lasers pumped by collisional excitation

    International Nuclear Information System (INIS)

    Meng, L.


    Improving the knowledge of the spectral and temporal properties of plasma-based XUV lasers is an important issue for the ongoing development of these sources towards significantly higher peak power. The spectral properties of the XUV laser line actually control several physical quantities that are important for applications, such as the minimum duration that can be achieved (Fourier-transform limit). The shortest duration experimentally achieved to date is ∼1 picosecond. The demonstrated technique of seeding XUV laser plasmas with a coherent femtosecond pulse of high-order harmonic radiation opens new and promising prospects to reduce the duration to a few hundred femtoseconds, provided that the gain bandwidth can be kept large enough. XUV lasers pumped by collisional excitation of Ni-like and Ne-like ions have been developed worldwide in hot plasmas created either by fast electrical discharge or by various types of high-power lasers. This leads to a variety of XUV laser sources with distinct output properties, but also markedly different plasma parameters (density, temperature) in the amplification zone. Hence different spectral properties are expected. The purpose of our work was then to investigate the spectral behaviour of the different types of existing collisional excitation XUV lasers, and to evaluate their potential to support amplification of pulses with duration below 1 ps in a seeded mode. The spectral characterization of plasma-based XUV lasers is challenging because the extremely narrow bandwidth (typically Δλ/λ ∼ 10⁻⁵) lies beyond the resolution limit of existing spectrometers in this spectral range. In our work the narrow linewidth was resolved using a wavefront-division interferometer specifically designed to measure temporal coherence, from which the spectral linewidth is inferred. We have characterized three types of collisional XUV lasers, developed in three different laboratories: transient pumping in Ni-like Mo, capillary discharge pumping in Ne

  18. Toward a More Just Approach to Poverty Simulations (United States)

    Browne, Laurie P.; Roll, Susan


    Poverty simulations are a promising approach to engaging college students in learning about poverty because they provide direct experience with this critical social issue. Much of the extant scholarship on simulations describe them as experiential learning; however, it appears that educators do not examine biases, assumptions, and traditions of…

  19. A stochastic simulation approach for production scheduling and ...

    African Journals Online (AJOL)

    The present paper aims to develop a simulation tool for tile manufacturing companies. The paper shows how a simulation approach can be useful to support management decisions related to production scheduling and investment planning. Particularly, the aim is to demonstrate the importance of an information system in tile ...

  20. A HyperSpectral Imaging (HSI) approach for bio-digestate real time monitoring (United States)

    Bonifazi, Giuseppe; Fabbri, Andrea; Serranti, Silvia


    One of the key issues in developing Good Agricultural Practices (GAP) is the optimal utilisation of fertilisers and herbicides to reduce the impact of nitrates on soils and the environment. In traditional agricultural practices, these substances were provided to the soils through the use of chemical products (inorganic/organic fertilizers, soil improvers/conditioners, etc.), usually associated with several major environmental problems, such as water pollution and contamination, fertilizer dependency, soil acidification, trace mineral depletion, over-fertilization, high energy consumption, contribution to climate change, impacts on mycorrhizas, and lack of long-term sustainability. For this reason, the agricultural market is more and more interested in the utilisation of organic fertilisers and soil improvers. Among organic fertilizers, there is an emerging interest in digestate, a by-product of anaerobic digestion (AD) processes. Several studies confirm the favourable properties of digestate when used as an organic fertilizer and soil improver/conditioner. Digestate, in fact, is somewhat similar to compost: AD converts a major part of the organic nitrogen to ammonia, which is then directly available to plants as nitrogen. In this paper, new analytical tools based on HyperSpectral Imaging (HSI) sensing devices, and related detection architectures, are presented and discussed in order to define and apply simple-to-use, reliable, robust and low-cost strategies for implementing innovative smart detection engines for digestate characterization and monitoring. This approach aims to utilize this "waste product" as a valuable organic fertilizer and soil conditioner, with reduced impact and in an "ad hoc" soil fertilisation perspective. Furthermore, the possibility to simultaneously utilize the HSI approach to realize a real-time physical-chemical characterisation of agricultural soils (i.e. nitrogen, phosphorus, etc., detection) could

  1. Digital simulation of an arbitrary stationary stochastic process by spectral representation

    DEFF Research Database (Denmark)

    Yura, Harold T.; Hanson, Steen Grüner


    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set...... of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited...... to auto-regressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does...
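
    The two-step recipe can be sketched directly: colour white Gaussian noise in the Fourier domain to impose the spectral content, then push the samples through the Gaussian CDF and the target inverse CDF. The low-pass filter and the exponential target distribution below are illustrative choices, and the memoryless second step can perturb the imposed spectrum slightly.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
n = 1 << 14

# 1) White Gaussian noise, coloured in the Fourier domain to impose the
#    desired spectral content (here a simple low-pass filter; the method
#    admits an arbitrary target spectrum).
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n)
H = 1.0 / np.sqrt(1.0 + (freqs / 0.05)**2)       # illustrative filter
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                          # renormalise to unit variance

# 2) Memoryless inverse-transform step: the Gaussian CDF maps the coloured
#    Gaussian samples to (0,1); the target inverse CDF maps them onward.
u = 0.5 * (1.0 + np.vectorize(erf)(colored / np.sqrt(2.0)))
samples = -np.log1p(-u)    # inverse CDF of a unit-mean exponential

print(samples.mean(), samples.min())
```

    The resulting samples have (approximately) exponential marginals while inheriting the low-frequency correlation structure of the coloured noise.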

  2. Comparisons between direct and inverse approaches in problems of recovering the true profile of a spectral line

    CERN Document Server

    Mijovic, S


    Computer-supported techniques are introduced for the evaluation of experimental data and for obtaining the real profile of spectral lines. Both direct and inverse approaches were used. The MINUIT program from CERN's program library was used to solve the direct problems. Tikhonov's regularization method was applied to solve the same problems in an inverse manner. Model functions were introduced to check the limits of applicability of these methods and to make a comparison between them. The advantages and disadvantages of these approaches are shown. The procedures were applied to the measured profiles of He II spectral lines in a pulsed low-pressure arc. The chosen lines are He II Paschen-alpha (468.6 nm) in the visible region and Balmer-beta (121.5 nm) in the VUV spectral region. The range of experimental errors was determined where both approaches give reliable results. It was found that we can obtain the real profile of the He II 468.6 nm and He II 121.5 nm spectral lines, using the regularizati...
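
    The inverse approach can be illustrated with a standard Tikhonov-regularized deconvolution: the measured profile is modelled as the true profile convolved with a known instrument function, and the regularized normal equations are solved for the true profile. The Gaussian instrument function, grids and noise level below are illustrative, not the He II measurement conditions.

```python
import numpy as np

# Sketch of the inverse approach: the measured profile y is the true
# profile x convolved with a known instrument function, y = K x + noise.
# Tikhonov regularization stabilises the inversion:
#   x_lambda = (K^T K + lam * I)^{-1} K^T y
n = 200
grid = np.arange(n)

def gaussian(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig)**2)

x_true = gaussian(grid, 100, 4.0)                       # "true" line profile
instrument = gaussian(grid[:, None], grid[None, :], 6.0)
K = instrument / instrument.sum(axis=1, keepdims=True)  # convolution matrix

rng = np.random.default_rng(1)
y = K @ x_true + 0.001 * rng.standard_normal(n)         # measured profile

lam = 1e-3
x_rec = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# The regularized solution should be much closer to the true profile
# than the raw (instrument-broadened) measurement is.
print(np.linalg.norm(x_rec - x_true), np.linalg.norm(y - x_true))
```

    The regularization parameter `lam` trades noise amplification against residual broadening; in practice it is chosen by a criterion such as the L-curve or discrepancy principle.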

  3. LADAR Performance Simulations with a High Spectral Resolution Atmospheric Transmittance and Radiance Model-LEEDR (United States)


    American Society for Testing and Materials (ASTM) 2000 extraterrestrial solar spectra is used for the solar spectral irradiance at the top of the

  4. Colour and spectral simulation of textile samples onto paper: a feasibility study


    Slavuj, Radovan; Marijanovic, Kristina; Hardeberg, Jon Yngve


    Originally published in the Journal of the International Colour Association. This study has investigated how the growing technology of multichannel printing and the area of spectral printing in the graphic arts could help the textile industry to communicate accurate colour. In order to reduce the cost, printed samples that serve for colour judgment and decision making...

  5. Theory and Simulation of Exoplanetary Atmospheric Haze: Giant Spectral Line Broadening (United States)

    Sadeghpour, Hossein; Felfeli, Zineb; Kharchenko, Vasili; Babb, James; Vrinceanu, Daniel


    Prominent spectral features in observed transmission spectra of exoplanets are obscured. Atmospheric haze is the leading candidate for the flattening of spectral transmission in exoplanetary occultations, and plays a similar role for solar system planets, Earth, and cometary atmospheres. Such spectra, which carry information about how the planetary atmospheres become opaque to stellar light in transit, show broad absorption where strong absorption lines from sodium, potassium and water are predicted to exist. In this work, we develop a detailed atomistic theoretical model taking into account the interaction between an atomic or molecular radiator and dust and haze particulates. Our model considers a realistic structure of haze particulates, from small seed particles up to sub-micron irregularly shaped aggregates. This theory of interaction between haze and radiator particles makes it possible to consider nearly all realistic structures, sizes and chemical compositions of haze particulates. The computed shift and broadening of emission spectra include both quasi-static (mean-field) and collisional (pressure) shift and broadening. Our spectral calculations will be verified with available laboratory experimental data on spectra of alkali atoms in liquid droplets, solid ice, dust and dense gaseous environments. The simplicity, elegance and generality of the proposed model make it amenable to a broad community of users in astrophysics and chemistry. The verified models can be used for analysis of emission and absorption spectra of alkali atoms from exoplanets, solar system planets, satellites and comets.

  6. [Observations of spectral data and characteristics analysis of snow-bare soil mixed pixel generated by micro-simulation]. (United States)

    Liu, Yan; Li, Yang


    To explore the differences in the spectral mixing mechanisms of mixed pixels at the micro- and macro-scales, a micro-simulation of a snow-bare soil mixed pixel was taken as the object of study in an artificial test environment. Reflectance spectra of the mixed pixel and of the snow and bare-soil endmembers with different area ratios were collected with a full-band spectrometer at a fixed probe distance. Qualitative and quantitative analyses of the original reflectance spectra were carried out, and the reflectance spectra from 350 to 2 500 nm, as well as the noise-free portion from 350 to 1 815 nm, were normalized. At the same time, we collected EOS/MODIS and Environment and Disaster Monitoring Satellite data over the same area for the same period, analyzed the correlation of the visible, near-infrared and shortwave-infrared channels at different resolution scales, and examined the relationship between the spectra of mixed snow-soil pixels and endmember pixels in the MODIS image. The results showed that: (1) at the micro scale, a non-linear relationship exists between the mixed pixel and the endmembers over the full wavelength range, while a linear relationship exists within sub-band wavelength ranges; (2) at the macro scale, a linear relationship exists between the mixed pixel and the endmembers; (3) in the statistics of the spectral values, the correlation between the snow-soil mixture and the snow endmember is positive, while that between the mixture and the soil endmember is negative.
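    The linear relationship at the macro scale is the basis of linear spectral unmixing. As a hedged illustration with hypothetical reflectance values (not the paper's data), the snow area fraction of a linearly mixed pixel can be recovered by least squares:

```python
import numpy as np

# Hypothetical endmember reflectance spectra (snow and bare soil) on 4 bands.
snow = np.array([0.90, 0.85, 0.70, 0.30])
soil = np.array([0.10, 0.15, 0.25, 0.35])

# Linear mixing model: mixed = f * snow + (1 - f) * soil, with f = 0.6 here.
f_true = 0.6
mixed = f_true * snow + (1.0 - f_true) * soil

# Recover the snow area fraction f by least squares on (snow - soil).
A = (snow - soil).reshape(-1, 1)
f_est = float(np.linalg.lstsq(A, mixed - soil, rcond=None)[0][0])
```

    With real measurements, noise makes the relationship approximate, and the recovered fractions are usually constrained to the interval [0, 1].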

  7. A multibody approach in granular dynamics simulations (United States)

    Vinogradov, O.; Sun, Y.

    A plane model of a granular system made of interconnected disks is treated as a multibody system with variable topology and one-sided constraints between the disks. The motion of such a system is governed by a set of nonlinear algebraic and differential equations. In the paper two formalisms (Lagrangian and Newton-Euler) and two solvers (Runge-Kutta and iterative) are discussed. It is shown numerically that a combination of the Newton-Euler formalism and an iterative method maintains the accuracy of the fourth-order Runge-Kutta solver while substantially reducing the CPU time. The accuracy and efficiency are achieved by integrating error control into the iterative process. Two levels of error control are introduced: one based on satisfying the position, velocity and acceleration constraints, and another on satisfying the energy conservation requirement. An adaptive time step, based on the rate of convergence at the previous time step, is introduced, which further reduces the simulation time. The efficiency and accuracy are investigated on a physically unstable vertical stack of disks and on multibody pendulums with 50, 100, 150 and 240 masses. An application to the problem of jamming in a two-phase flow is presented.
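    The idea of adapting the time step to the convergence rate of the iterative solve can be sketched on a toy problem. This is not the authors' code: the model equation, thresholds and growth factors below are illustrative.

```python
# Toy sketch: adapt the integration time step from the iteration count of an
# implicit solve, mimicking a convergence-rate-based adaptive step.
# Model problem: dx/dt = -x, stepped with implicit Euler.

def implicit_euler_step(x, dt, tol=1e-10, max_iter=50):
    """Solve x_new = x + dt * (-x_new) by fixed-point iteration.
    Returns the converged value and the number of iterations used."""
    x_new = x
    for k in range(1, max_iter + 1):
        x_next = x + dt * (-x_new)
        if abs(x_next - x_new) < tol:
            return x_next, k
        x_new = x_next
    return x_new, max_iter

def integrate(x0, t_end, dt0=0.1):
    t, x, dt = 0.0, x0, dt0
    while t < t_end:
        x, iters = implicit_euler_step(x, dt)
        t += dt
        if iters <= 3:      # fast convergence: grow the step
            dt *= 1.5
        elif iters >= 10:   # slow convergence: shrink the step
            dt *= 0.5
    return x

x_final = integrate(1.0, 5.0)  # decays roughly like exp(-5)
```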

  8. A Simulation Approach for Performance Validation during Embedded Systems Design (United States)

    Wang, Zhonglei; Haberl, Wolfgang; Herkersdorf, Andreas; Wechs, Martin

    Due to time-to-market pressure, it is highly desirable to design the hardware and software of embedded systems in parallel. However, hardware and software are mostly developed using very different methods, so performance evaluation and validation of the whole system is not an easy task. In this paper, we propose a simulation approach to bridge the gap between model-driven software development and simulation-based hardware design by merging hardware and software models into a SystemC-based simulation environment. An automated procedure has been established to generate software simulation models from formal models, while the hardware design is modeled in SystemC from the outset. As the simulation models are annotated with timing information, performance issues are tackled in the same pass as system functionality, rather than in a dedicated approach.

  9. Laser-cooling simulation based on the semiclassical approach

    NARCIS (Netherlands)

    Smeets, B.; Herfst, R.W.; te Sligte, E.; van der Straten, P.; Beijerinck, H.C.W.; van Leeuwen, K.A.H.


    We investigate the region of validity of the semiclassical approach to simulating laser cooling. We conclude that for the commonly used πx-πy polarization-gradient configuration, the semiclassical approach is valid only for transitions with recoil parameters εr on the order of 10⁻⁴.

  10. Influence of the spectral distribution of light on the characteristics of photovoltaic panel. Comparison between simulation and experimental (United States)

    Chadel, Meriem; Bouzaki, Mohammed Moustafa; Chadel, Asma; Petit, Pierre; Sawicki, Jean-Paul; Aillerie, Michel; Benyoucef, Boumediene


    We present and analyze experimental results obtained with a laboratory setup based on hardware and smart instrumentation for a complete study of the performance of PV panels, using an artificial radiation source (halogen lamps) for illumination. Combined with an accurate analysis, this global experimental procedure allows the determination of effective performance under standard conditions, thanks to a simulation process originally developed in the Matlab software environment. The uniformity of the irradiated surface was checked by simulating the light field. We studied the response of standard commercial photovoltaic panels under illumination measured by a spectrometer for two sources with different spectra, halogen lamps and sunlight. We then pay special attention to the influence of the spectral distribution of light on the characteristics of the photovoltaic panel, which we evaluated as a function of temperature and for different illuminations, with dedicated measurements and studies of the open-circuit voltage and short-circuit current.

  11. Computer simulation of moiré waves in autostereoscopic displays basing on spectral trajectories (United States)

    Saveljev, V.; Kim, S.-K.


    The moiré effect is an optical phenomenon with a negative influence on image quality; as such, this effect should be avoided or minimized in displays, especially in autostereoscopic three-dimensional ones. The structure of multiview autostereoscopic displays typically includes two parallel layers with an integer ratio between the cell sizes. In order to minimize the moiré effect at finite distances, we developed a theory and a computer simulation tool which simulates the behavior of the visible moiré waves across a range of parameters (the displacement of the observer, the distance to the screen, and the like). Previously, we simulated sinusoidal waves only; however, this was not enough to cover all real-life situations. Recently, the theory was improved, and non-sinusoidal gratings are now included as well. Correspondingly, the simulation tool has been substantially updated. In the simulation, the parameters of the resulting moiré waves are measured semi-automatically. The advanced theory, accompanied by the renewed simulation tool, makes the minimization both reliable and convenient. The tool runs in two modes, overview and detailed, and can be controlled interactively. Computer simulation and physical experiment confirm the theory; the typical normalized RMS deviation is 3-5%.
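    For intuition, two parallel line gratings with pitches p1 and p2 produce a moiré (beat) pattern whose period follows the standard beat relation T = p1·p2/|p1 − p2|; this is a textbook formula, not the paper's full finite-distance model:

```python
def moire_period(p1, p2):
    """Spatial period of the moire beat pattern of two parallel gratings."""
    return p1 * p2 / abs(p1 - p2)

# A 10% pitch mismatch yields a moire period about 11x the base pitch.
T = moire_period(1.0, 1.1)
```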

  12. Site effects in Port-au-Prince (Haiti) from the analysis of spectral ratio and numerical simulations. (United States)

    St. Fleur, Sadrac; Bertrand, Etienne; Courboulex, Francoise; Mercier de Lépinay, Bernard; Deschamps, Anne; Hough, Susan E.; Cultrera, Giovanna; Boisson, Dominique; Prepetit, Claude


    To provide better insight into seismic ground motion in the Port‐au‐Prince metropolitan area, we investigate site effects at 12 seismological stations by analyzing 78 earthquakes with magnitude smaller than 5 that occurred between 2010 and 2013. Horizontal‐to‐vertical spectral ratio on earthquake recordings and a standard spectral ratio were applied to the seismic data. We also propose a simplified lithostratigraphic map and use available geotechnical and geophysical data to construct representative soil columns in the vicinity of each station that allow us to compute numerical transfer functions using 1D simulations. At most of the studied sites, spectral ratios are characterized by weak‐motion amplification at frequencies above 5 Hz, in good agreement with the numerical transfer functions. A mismatch between the observed amplifications and simulated response at lower frequencies shows that the considered soil columns could be missing a deeper velocity contrast. Furthermore, strong amplification between 2 and 10 Hz linked to local topographic features is found at one station located in the south of the city, and substantial amplification below 5 Hz is detected near the coastline, which we attribute to deep and soft sediments as well as the presence of surface waves. We conclude that for most investigated sites in Port‐au‐Prince, seismic amplifications due to site effects are highly variable but seem not to be important at high frequencies. At some specific locations, however, they could strongly enhance the low‐frequency content of the seismic ground shaking. Although our analysis does not consider nonlinear effects, we thus conclude that, apart from sites close to the coast, sediment‐induced amplification probably had only a minor impact on the level of strong ground motion, and was not the main reason for the high level of damage in Port‐au‐Prince.

  13. Modeling and inversion of the microtremor H/ V spectral ratio: physical basis behind the diffuse field approach (United States)

    Sánchez-Sesma, Francisco J.


    Microtremor H/V spectral ratio (MHVSR) has gained popularity for assessing the dominant frequency of soil sites. It requires measuring the ground motion due to seismic ambient noise at a site, followed by relatively simple processing. Theory asserts that the ensemble average of the autocorrelation of motion components belonging to a diffuse field at a given receiver gives the directional energy densities (DEDs), which are proportional to the imaginary parts of the Green's function components when source and receiver are the same point and the directions of force and response coincide. Therefore, the MHVSR can be modeled as the square root of 2 Im G11 / Im G33, where Im G11 and Im G33 are the imaginary parts of the Green's functions at the load point for the horizontal (sub-index 1) and vertical (sub-index 3) components, respectively. This connection has physical implications that emerge from the DED-force duality and allows us to understand the behavior of the MHVSR. For a given model, the imaginary parts of the Green's functions are integrals over a radial wavenumber. To deal with these integrals, we have used either the popular discrete wavenumber method or Cauchy's residue theorem at the poles that account for surface-wave normal modes, giving the contributions of Rayleigh and Love waves. For retrieval of the velocity structure, one can minimize the weighted differences between observations and calculated values within an inversion scheme; in this research we used simulated annealing, but other optimization techniques can be used as well. The latter approach allows the contributions of different wave types to be computed separately. An example is presented for the mouth of the Andarax River at Almería, Spain.
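    In symbols, the modeling relation stated above reads:

```latex
\operatorname{MHVSR}(\omega)
  = \sqrt{\frac{2\,\operatorname{Im} G_{11}(\mathbf{x},\mathbf{x};\omega)}
               {\operatorname{Im} G_{33}(\mathbf{x},\mathbf{x};\omega)}}
```

    with the sub-indices 1 (horizontal) and 3 (vertical) both evaluated at the load point x, where source and receiver coincide.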

  14. Probabilistic modeling and global sensitivity analysis for CO 2 storage in geological formations: a spectral approach

    KAUST Repository

    Saad, Bilal Mohammed


    This work focuses on the simulation of CO2 storage in deep underground formations under uncertainty and seeks to understand the impact of uncertainties in reservoir properties on CO2 leakage. To simulate the process, a non-isothermal two-phase two-component flow system with equilibrium phase exchange is used. Since model evaluations are computationally intensive, instead of traditional Monte Carlo methods, we rely on polynomial chaos (PC) expansions for representation of the stochastic model response. A non-intrusive approach is used to determine the PC coefficients. We establish the accuracy of the PC representations within a reasonable error threshold through systematic convergence studies. In addition to characterizing the distributions of model observables, we compute probabilities of excess CO2 leakage. Moreover, we consider the injection rate as a design parameter and compute an optimum injection rate that ensures that the risk of excess pressure buildup at the leaky well remains below acceptable levels. We also provide a comprehensive analysis of sensitivities of CO2 leakage, where we compute the contributions of the random parameters, and their interactions, to the variance by computing first, second, and total order Sobol’ indices.
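    As a hedged sketch of the variance-based sensitivity idea, the following uses a plain Monte Carlo Saltelli estimator on a toy linear model f(x1, x2) = 2·x1 + x2 with uniform inputs (analytic first-order indices 0.8 and 0.2), rather than the paper's polynomial chaos surrogate or CO2 model:

```python
import numpy as np

def model(x):
    # Toy model standing in for the observable of interest.
    return 2.0 * x[:, 0] + x[:, 1]

rng = np.random.default_rng(0)
n, d = 100_000, 2
A, B = rng.random((n, d)), rng.random((n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S = []  # first-order Sobol' indices
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace column i of A with that of B
    # Saltelli-style estimator of Var(E[f | x_i]) / Var(f).
    S.append(float(np.mean(fB * (model(ABi) - fA)) / var))
```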

  15. D1+ Simulator: A cost and risk optimized approach to nuclear power plant simulator modernization

    International Nuclear Information System (INIS)

    Wischert, W.


    The D1-Simulator has been operated by Kraftwerks-Simulator-Gesellschaft (KSG) and Gesellschaft für Simulatorschulung (GfS) at the Simulator Centre in Essen since 1977. The full-scope control-room training simulator, used for Kernkraftwerk Biblis (KWB), is based on a PDP-11 hardware platform and is mainly programmed in ASSEMBLER language. The simulator has maintained a continuously high availability throughout the years thanks to specialized hardware and software support from the KSG maintenance team. Nevertheless, the D1-Simulator reveals substantial limitations with respect to computer capacity and spares, and suffers increasingly from the non-availability of hardware replacement materials. In order to ensure long-term maintainability within the framework of the consensus on nuclear energy, a two-year refurbishing program focusing on quality and budgetary aspects has been launched by KWB. The so-called D1+ Simulator project is based on the re-use of validated data from existing simulators. Allowing for flexible project management methods, the project outlines a cost- and risk-optimized approach to Nuclear Power Plant (NPP) simulator modernization. The D1+ Simulator is being built by KSG/GfS in close collaboration with KWB and the simulator vendor THALES, re-using a modern hardware and software development environment from the D56-Simulator, used by Kernkraftwerk Obrigheim (KWO) before its decommissioning in 2005. The simulator project, launched in 2004, is expected to be completed by the end of 2006. (author)

  16. Turbulence statistics in a spectral element code: a toolbox for High-Fidelity Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Vinuesa, Ricardo [KTH Mechanics, Stockholm (Sweden); Swedish e-Science Research Center (SeRC), Stockholm (Sweden); Fick, Lambert [Argonne National Lab. (ANL), Argonne, IL (United States); Negi, Prabal [KTH Mechanics, Stockholm (Sweden); Swedish e-Science Research Center (SeRC), Stockholm (Sweden); Marin, Oana [Argonne National Lab. (ANL), Argonne, IL (United States); Merzari, Elia [Argonne National Lab. (ANL), Argonne, IL (United States); Schlatter, Phillip [KTH Mechanics, Stockholm (Sweden); Swedish e-Science Research Center (SeRC), Stockholm (Sweden)


    In the present document we describe a toolbox for the spectral-element code Nek5000, aimed at computing turbulence statistics. The toolbox is presented for a small test case, namely a square duct with Lx = 2h, Ly = 2h and Lz = 4h, where x, y and z are the horizontal, vertical and streamwise directions, respectively. The number of elements in the xy-plane is 16 x 16 = 256, and the number of elements in z is 4, leading to a total of 1,024 spectral elements. A polynomial order of N = 5 is chosen, and the mesh is generated using the Nek5000 tool genbox. The toolbox presented here computes the mean-velocity components, the Reynolds-stress tensor, and the turbulent kinetic energy (TKE) and Reynolds-stress budgets. Note that the toolbox can compute turbulence statistics both in turbulent flows with one homogeneous direction (where the statistics are based on time averaging as well as averaging in the homogeneous direction) and in fully three-dimensional flows (with no periodic directions, where only time averaging is considered).
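    The core statistics named above (mean velocities, Reynolds stresses, TKE) reduce to Reynolds decomposition plus averaging. A minimal illustration on synthetic velocity signals (hypothetical values, not Nek5000 output):

```python
import numpy as np

rng = np.random.default_rng(1)
nt = 100_000
u = 1.0 + 0.10 * rng.standard_normal(nt)   # streamwise velocity samples
v = 0.0 + 0.05 * rng.standard_normal(nt)   # vertical velocity samples

U, V = u.mean(), v.mean()                  # time-averaged mean velocities
up, vp = u - U, v - V                      # fluctuations u', v'
uu = np.mean(up * up)                      # normal Reynolds stress <u'u'>
uv = np.mean(up * vp)                      # shear Reynolds stress <u'v'>
tke = 0.5 * (np.mean(up**2) + np.mean(vp**2))  # TKE from the two components
```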

  17. Energy transfers and spectral eddy viscosity in large-eddy simulations of homogeneous isotropic turbulence : Comparison of dynamic Smagorinsky and multiscale models over a range of discretizations

    NARCIS (Netherlands)

    Hughes, T.J.R.; Wells, G.N.; Wray, A.A.


    Energy transfers within large-eddy simulation (LES) and direct numerical simulation (DNS) grids are studied. The spectral eddy viscosity for conventional dynamic Smagorinsky and variational multiscale LES methods are compared with DNS results. Both models underestimate the DNS results for a very

  18. GNSS-based Observations and Simulations of Spectral Scintillation Indices in the Arctic Ionosphere

    DEFF Research Database (Denmark)

    Durgonics, Tibor; Hoeg, Per; von Benzon, Hans-Henrik

    ... and the development of data-driven methodologies to accurately localize ionospheric irregularities and simulate GNSS scintillation signals is highly desired. Ionospheric scintillations have traditionally been quantified by amplitude (S4) and phase (σφ) scintillation indices. Our study focuses on the Arctic, where scintillations, especially phase scintillations, are prominent. We will present observations acquired from a network of Greenlandic GNSS stations, including 2D amplitude and phase scintillation index maps for representative calm and storm periods. In addition to the traditional indices described above, we ... The observations will then be compared to the properties of simulated GNSS signals computed by the Fast Scintillation Mode (FSM). The FSM was developed to simulate ionospheric scintillations under different geophysical conditions, and is used to simulate GNSS signals with known scintillation characteristics ...

  19. Adding Value in Construction Design Management by Using Simulation Approach


    Doloi, Hemanta


    Simulation modelling has been introduced as a decision-support tool for front-end planning and design analysis of projects. An integrated approach is discussed that links project scope, end-product or project-facility performance, and the strategic project objectives at the early stage of projects. The case study on a tram network demonstrates that the application of simulation helps in assessing the performance of project operations and in making appropriate investment decisions over the life cycle of ...

  20. Fleet Sizing of Automated Material Handling Using Simulation Approach (United States)

    Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny


    Automated material handling tends to be chosen over manual handling on the production floors of manufacturing companies. One critical issue in implementing automated material handling is the design phase, which must ensure that material handling becomes more efficient in terms of cost. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow-shop production to reach the optimum situation, which here means minimum flow time and maximum capacity on the production floor. A simulation approach is used because the flow shop can be modelled as a queueing network in which the inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore the solution of a multi-objective fleet sizing problem in flow-shop production, using a simulation approach with the ARENA software.

  1. Recent developments in the super transition array model for spectral simulation of LTE plasmas

    International Nuclear Information System (INIS)

    Bar-Shalom, A.; Oreg, J.; Goldstein, W.H.


    Recently developed sub-picosecond pulse lasers have been used to create hot, near-solid-density plasmas. Since these plasmas are nearly in local thermodynamic equilibrium (LTE), their emission spectra involve a huge number of populated configurations. A typical spectrum is a combination of many unresolved clusters of emission, each containing an immense number of overlapping, unresolvable bound-bound and bound-free transitions. Under LTE or near-LTE conditions, traditional detailed-configuration or detailed-term spectroscopic models are not capable of handling the vast number of transitions involved. The average atom (AA) model, on the other hand, accounts for all relevant transitions, but in an oversimplified fashion that ignores all spectral structure. The Super Transition Array (STA) model, developed in recent years, combines the simplicity and comprehensiveness of the AA model with the accuracy of detailed term accounting. The resolvable structure of spectral clusters is revealed by successively increasing the number of distinct STAs until convergence is attained. The limit of this procedure is a detailed unresolved transition array (UTA) spectrum, with a term-broadened line for each accessible configuration-to-configuration transition, weighted by the relevant Boltzmann population. In practice, this UTA spectrum is obtained using only a few thousand to tens of thousands of STAs (as opposed, typically, to billions of UTAs). The central result of STA theory is a set of formulas for the moments (total intensity, average transition energy, variance) of an STA. In calculating the moments, detailed relativistic first-order quantum transition energies and probabilities are used. The energy appearing in the Boltzmann factor associated with each level in a superconfiguration is the zero-order result corrected by a superconfiguration-averaged first-order correction. Examples and applications to recent measurements are presented.

  2. A low-power photonic quantization approach using OFDM subcarrier spectral shifts. (United States)

    Kodama, Takahiro; Morita, Koji; Cincotti, Gabriella; Kitayama, Ken-Ichi


    Photonic analog-to-digital conversion and optical quantization are demonstrated, based on the spectral shifts of orthogonal frequency-division multiplexing subcarriers and a frequency-packed arrayed waveguide grating. The system consumes very little energy, since the spectral shifts are small and are generated by cross-phase modulation using a linear-slope, high-speed, low-jitter pulse train produced by a mode-locked laser diode. The feasibility of 2-, 3- and 4-bit optical quantization schemes is demonstrated.

  3. A Stigmergy Approach for Open Source Software Developer Community Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Beaver, Justin M [ORNL; Potok, Thomas E [ORNL; Pullum, Laura L [ORNL; Treadwell, Jim N [ORNL


    The stigmergy collaboration approach provides a hypothesized explanation of how online groups work together. In this research, we present a stigmergy approach for building an agent-based simulation of collaboration in an open source software (OSS) developer community. We used groups of actors who collaborate on OSS projects as our frame of reference and investigated how the choices actors make in contributing their work to the projects determine the global status of the whole OSS project. In our simulation, the forum posts and project code serve as the digital pheromone, and a modified Pierre-Paul Grasse pheromone model is used to compute the behavior-selection probabilities of the developer agents.
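    A hedged sketch of a pheromone-weighted choice rule of the general kind the abstract attributes to the modified Grasse model; the exponent and values here are illustrative, not taken from the paper:

```python
import numpy as np

def selection_probs(pheromone, alpha=2.0):
    """Probability of an agent selecting each project/thread, weighted by
    accumulated digital pheromone (e.g. forum posts, code contributions)."""
    w = np.asarray(pheromone, dtype=float) ** alpha
    return w / w.sum()

# Three hypothetical projects with pheromone levels 1, 2 and 4.
p = selection_probs([1.0, 2.0, 4.0])
```

    The superlinear exponent makes well-trodden projects disproportionately attractive, which is the positive-feedback mechanism characteristic of stigmergy.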

  4. A conservative approach to parallelizing the Sharks World simulation (United States)

    Nicol, David M.; Riffe, Scott E.


    Parallelizing a benchmark problem for parallel simulation, the Sharks World, is described. The described solution is conservative, in the sense that no state information is saved and no 'rollbacks' occur. The approach illustrates both the principal advantage and the principal disadvantage of conservative parallel simulation. The advantage is that, by exploiting lookahead, we found an approach that dramatically improves the serial execution time and also achieves excellent speedups. The disadvantage is that if the model rules are changed in such a way that the lookahead is destroyed, it is difficult to modify the solution to accommodate the changes.

  5. An Integrated Approach for Entry Mission Design and Flight Simulations (United States)

    Lu, Ping; Rao, Prabhakara


    An integrated approach for entry trajectory design, guidance, and simulation is proposed. The key ingredients of this approach are an on-line 3-degree-of-freedom entry trajectory planning algorithm and an entry guidance algorithm that generates the guidance gains automatically. When fully developed, such a tool could enable end-to-end entry mission design and simulations in 3DOF and 6DOF modes, from the de-orbit burn to the TAEM interface and beyond, all in one keystroke. Some preliminary examples of such a capability are presented in this paper that demonstrate the potential of this type of integrated environment.

  6. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion (United States)

    Supino, M.; Festa, G.; Zollo, A.


    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and of the parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a-posteriori probability density function associated with the cost function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with random exploration of the space (the basin-hopping technique). The joint pdf is built from the misfit function using the maximum-likelihood principle, assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to
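    A hedged, toy version of the inversion idea (not the authors' implementation): fit a Brune-type spectrum S(f) = Ω0 / (1 + (f/fc)^γ) to synthetic data with an L2 misfit in log amplitude, using a minimal basin-hopping loop (random perturbation followed by local coordinate refinement):

```python
import numpy as np

def spectrum(p, f):
    """Brune-type displacement spectrum with parameters (Omega0, fc, gamma)."""
    om0, fc, gam = p
    return om0 / (1.0 + (f / fc) ** gam)

f = np.logspace(-1, 2, 200)                  # frequencies, Hz
true = np.array([1e-3, 2.0, 2.0])            # Omega0, fc, gamma
rng = np.random.default_rng(2)
data = spectrum(true, f) * np.exp(0.05 * rng.standard_normal(f.size))

def misfit(p):
    """L2 misfit between log spectra; infinite for non-physical parameters."""
    if np.any(p <= 0.0):
        return np.inf
    return float(np.sum((np.log(spectrum(p, f)) - np.log(data)) ** 2))

def local_refine(p, steps=60, h=0.1):
    """Greedy multiplicative coordinate descent with a shrinking step."""
    p = p.copy()
    for _ in range(steps):
        for i in range(p.size):
            for trial in (p[i] * (1.0 + h), p[i] * (1.0 - h)):
                q = p.copy()
                q[i] = trial
                if misfit(q) < misfit(p):
                    p = q
        h *= 0.9
    return p

best = local_refine(np.array([1e-2, 1.0, 1.5]))
for _ in range(20):                           # basin-hopping restarts
    cand = local_refine(best * np.exp(0.5 * rng.standard_normal(3)))
    if misfit(cand) < misfit(best):
        best = cand
```

    The original work explores the joint a-posteriori pdf around such a minimum; this sketch only locates the minimum itself.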

  7. Numerical Simulations of Kinetic Alfvén Waves to Study Spectral ...

    Indian Academy of Sciences (India)


    Jan 27, 2016 ... We present numerical simulations of the modified nonlinear Schrödinger equation satisfied by kinetic Alfvén waves (KAWs) leading to the formation of magnetic filaments at different times. The relevance of these filamentary structures to solar wind turbulence and particle heating has also been pointed out.

  8. 3d Approach Of Spectral Response For A Bifacial Silicon Solar Cell ...

    African Journals Online (AJOL)

    Losses in the emitter region and an external magnetic field are also taken into account in order to refine the description of the measured spectral response. New analytical expressions for the carrier density, photocurrent density and short-circuit current density are then derived for front-side and rear-side illumination. Homemade software based ...

  9. Spectral Approach to Derive the Representation Formulae for Solutions of the Wave Equation

    Directory of Open Access Journals (Sweden)

    Gusein Sh. Guseinov


    Using spectral properties of the Laplace operator and a structural formula for rapidly decreasing functions of the Laplace operator, we offer a novel method to derive explicit formulae for solutions to the Cauchy problem for the classical wave equation in arbitrary dimensions. Among them are the well-known d'Alembert, Poisson, and Kirchhoff representation formulae in low space dimensions.

  10. Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system. (United States)

    Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut


    We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
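    The one-parameter Henyey-Greenstein (HG) function mentioned above has a standard closed-form inverse-CDF sampler for the scattering-angle cosine, widely used in Monte Carlo radiative transfer. A sketch using the textbook formulae (not the paper's code):

```python
import numpy as np

def sample_hg_mu(g, n, rng):
    """Draw n scattering-angle cosines mu = cos(theta) from the
    Henyey-Greenstein distribution with asymmetry parameter g (0 < g < 1)."""
    xi = rng.random(n)
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = np.random.default_rng(3)
g = 0.9
mu = sample_hg_mu(g, 200_000, rng)
mean_cosine = mu.mean()   # for HG, E[cos(theta)] equals g exactly
```

    Forward-peaked skin scattering corresponds to g close to 1; the sample mean of the cosines converges to g, which gives a quick correctness check.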

  11. Power spectral density analysis of wind-shear turbulence for related flight simulations. M.S. Thesis (United States)

    Laituri, Tony R.


    Meteorological phenomena known as microbursts can produce abrupt changes in wind direction and/or speed over a very short distance in the atmosphere. These changes in flow characteristics have been labelled wind shear. Because of its adverse effects on aerodynamic lift, wind shear poses its most immediate threat to flight operations at low altitudes. The number of recent commercial aircraft accidents attributed to wind shear has necessitated a better understanding of how energy is transferred to an aircraft from wind-shear turbulence. Isotropic turbulence here serves as the basis of comparison for the anisotropic turbulence which exists in the low-altitude wind shear. The related question of how isotropic turbulence scales in a wind shear is addressed from the perspective of power spectral density (psd). The role of the psd in related Monte Carlo simulations is also considered.
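    The role of the psd can be illustrated with a minimal one-sided periodogram on a synthetic record (a hypothetical white-noise stand-in, not flight data), checked against Parseval's relation:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0                         # sampling frequency, Hz
n = 4096
x = rng.standard_normal(n)         # white-noise stand-in for a gust record
x -= x.mean()                      # remove the mean wind component

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # frequency axis for the PSD
psd = np.abs(X) ** 2 / (fs * n)    # two-sided density at non-negative bins
psd[1:-1] *= 2.0                   # fold negative frequencies: one-sided PSD

# Parseval check: integrating the PSD over frequency recovers the variance,
# i.e. the turbulence energy content is preserved by the transform.
power = float(np.sum(psd) * (fs / n))
variance = float(np.var(x))
```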

  12. Numerical simulation of white double-layer coating with different submicron particles on the spectral reflectance

    International Nuclear Information System (INIS)

    Chai, Jiale; Cheng, Qiang; Si, Mengting; Su, Yang; Zhou, Yifan; Song, Jinlin


    Spectrally selective coatings are becoming more and more popular for protection against solar irradiation, not only keeping coated objects cool but also preserving their appearance by reducing the glare of reflected sunlight. In this work, a numerical study is carried out to design a double-layer coating with different submicron particles that achieves better performance in both thermal and aesthetic respects. By comparison, the performance of a double-layer coating with TiO2 and ZnO particles is better than that with a single particle type. Moreover, the particle diameter, the particle volume fraction and the substrate condition are also investigated. The results show that an optimized double-layer coating should combine an appropriate particle diameter and volume fraction with a black substrate. - Highlights: • The double-layer coating has a great influence on both thermal and aesthetic aspects. • The double-layer coating performs better than a uniform coating with a single particle type. • The volume fraction, particle diameter and substrate conditions are optimized.

  13. Spectral features of lightning-induced ion cyclotron waves at low latitudes: DEMETER observations and simulation

    Czech Academy of Sciences Publication Activity Database

    Shklyar, D. R.; Storey, L. R. O.; Chum, Jaroslav; Jiříček, František; Němec, F.; Parrot, M.; Santolík, Ondřej; Titova, E. E.


    Roč. 117, A12 (2012), A12206/1-A12206/16 ISSN 0148-0227 R&D Projects: GA ČR GA205/09/1253; GA ČR GAP205/10/2279; GA MŠk ME09107 Grant - others:GA ČR(CZ) GPP209/12/P658 Program:GP Institutional support: RVO:68378289 Keywords : Plasma waves analysis * ion cyclotron waves * satellite observation and numerical simulation * geometrical optics * multi-component measurements * simulation * spectrogram * wave propagation Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 3.174, year: 2012

  14. Pedagogical Approaches to Teaching with Computer Simulations in Science Education

    NARCIS (Netherlands)

    Rutten, N.P.G.; van der Veen, Johan (CTIT); van Joolingen, Wouter; McBride, Ron; Searson, Michael


    For this study we interviewed 24 physics teachers about their opinions on teaching with computer simulations. The purpose of this study is to investigate whether it is possible to distinguish different types of teaching approaches. Our results indicate the existence of two types. The first type is

  15. Simulated annealing approach for solving economic load dispatch ...

    African Journals Online (AJOL)


    Abstract. This paper presents a Simulated Annealing (SA) algorithm for optimization inspired by the process of annealing in ... Various classical optimization techniques were used to solve the ELD problem, for example: lambda iteration approach, ......
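    A minimal sketch of how simulated annealing can be applied to an economic load dispatch (ELD) problem. The three-unit cost coefficients, limits, proposal move and cooling schedule below are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-unit system with quadratic costs c_i(P) = a + b*P + c*P^2 ($/h)
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
pmin = np.array([100.0, 100.0, 50.0])
pmax = np.array([450.0, 350.0, 225.0])
demand = 800.0                            # MW

def cost(p):
    return float(np.sum(a + b * p + c * p * p))

p = np.array([300.0, 300.0, 200.0])       # feasible start summing to the demand
best_p, best_cost = p.copy(), cost(p)
T = 100.0                                 # initial temperature
for _ in range(20000):
    T *= 0.9995                           # geometric cooling schedule
    i, j = rng.choice(3, size=2, replace=False)
    d = rng.normal(0.0, 5.0)
    q = p.copy()
    q[i] += d                             # shift load between two units, so the
    q[j] -= d                             # power balance stays satisfied exactly
    if np.any(q < pmin) or np.any(q > pmax):
        continue                          # reject infeasible proposals
    delta = cost(q) - cost(p)
    if delta < 0.0 or rng.random() < np.exp(-delta / T):
        p = q                             # Metropolis acceptance
        if cost(p) < best_cost:
            best_p, best_cost = p.copy(), cost(p)

print(best_p, round(best_cost, 1))
```

    For these invented coefficients the equal-incremental-cost optimum is P = (400, 250, 150) MW at 6682.5 $/h, so the annealed answer should land close to that point.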

  16. Pilot performance evaluation of simulated flight approach and ...

    Indian Academy of Sciences (India)

    This research work examines the application of different statistical and empirical analysis methods to quantify pilot performance. A realistic approach and landing flight scenario is executed using the reconfigurable flight simulator at National Aerospace Laboratories and both subjective and quantitative measures are applied ...

  17. An evolutionary approach to simulated football free kick optimisation (United States)

    Rhodes, Martin; Coupland, Simon

    We present a genetic algorithm-based evolutionary computing approach to the optimisation of simulated football free kick situations. A detailed physics model is implemented in order to apply evolutionary computing techniques to the creation of strategic offensive shots and defensive player locations.

  18. Numerical Simulation of Unsteady Compressible Flow in Convergent Channel: Pressure Spectral Analysis

    Czech Academy of Sciences Publication Activity Database

    Pořízková, P.; Kozel, Karel; Horáček, Jaromír


    Roč. 2012, č. 545120 (2012), s. 1-9 ISSN 1110-757X R&D Projects: GA ČR(CZ) GAP101/11/0207 Institutional research plan: CEZ:AV0Z20760514 Keywords : finite volume method * simulation of flow in vibrating glottis * biomechanics of voice Subject RIV: BI - Acoustics Impact factor: 0.834, year: 2012

  19. Global Monitoring of Terrestrial Chlorophyll Fluorescence from Moderate-spectral-resolution Near-infrared Satellite Measurements: Methodology, Simulations, and Application to GOME-2 (United States)

    Joiner, J.; Gaunter, L.; Lindstrot, R.; Voigt, M.; Vasilkov, A. P.; Middleton, E. M.; Huemmrich, K. F.; Yoshida, Y.; Frankenberg, C.


    Globally mapped terrestrial chlorophyll fluorescence retrievals are of high interest because they can provide information on the functional status of vegetation including light-use efficiency and global primary productivity that can be used for global carbon cycle modeling and agricultural applications. Previous satellite retrievals of fluorescence have relied solely upon the filling-in of solar Fraunhofer lines that are not significantly affected by atmospheric absorption. Although these measurements provide near-global coverage on a monthly basis, they suffer from relatively low precision and sparse spatial sampling. Here, we describe a new methodology to retrieve global far-red fluorescence information; we use hyperspectral data with a simplified radiative transfer model to disentangle the spectral signatures of three basic components: atmospheric absorption, surface reflectance, and fluorescence radiance. An empirically based principal component analysis approach is employed, primarily using cloudy data over ocean, to model and solve for the atmospheric absorption. Through detailed simulations, we demonstrate the feasibility of the approach and show that moderate-spectral-resolution measurements with a relatively high signal-to-noise ratio can be used to retrieve far-red fluorescence information with good precision and accuracy. The method is then applied to data from the Global Ozone Monitoring Experiment-2 (GOME-2). The GOME-2 fluorescence retrievals display spatial structure similar to that of retrievals from a simpler technique applied to the Greenhouse gases Observing SATellite (GOSAT). GOME-2 enables global mapping of far-red fluorescence with higher precision over smaller spatial and temporal scales than is possible with GOSAT. Near-global coverage is provided within a few days. We are able to show clearly for the first time physically plausible variations in fluorescence over the course of a single month at a spatial resolution of 0.5 deg × 0.5 deg

  20. Nonlinear dimensionality reduction in molecular simulation: The diffusion map approach (United States)

    Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Kevrekidis, Ioannis G.; Debenedetti, Pablo G.


    Molecular simulation is an important and ubiquitous tool in the study of microscopic phenomena in fields as diverse as materials science, protein folding and drug design. While the atomic-level resolution provides unparalleled detail, it can be non-trivial to extract the important motions underlying simulations of complex systems containing many degrees of freedom. The diffusion map is a nonlinear dimensionality reduction technique with the capacity to systematically extract the essential dynamical modes of high-dimensional simulation trajectories, furnishing a kinetically meaningful low-dimensional framework with which to develop insight and understanding of the underlying dynamics and thermodynamics. We survey the potential of this approach in the field of molecular simulation, consider its challenges, and discuss its underlying concepts and means of application. We provide examples drawn from our own work on the hydrophobic collapse mechanism of n-alkane chains, folding pathways of an antimicrobial peptide, and the dynamics of a driven interface.

  1. Computational studies of first-Born scattering cross sections. I - Spectral properties of Bethe surfaces. II - Moment-theory approach (United States)

    Margoliash, D. J.; Langhoff, P. W.


    The present investigation is concerned with the spectral properties of Bethe surfaces to establish a basis for the formulation of alternatives to the conventional computational approach. The relevant scattering cross sections and closely related Van Hove autocorrelation functions are identified as spectral (Riemann-Stieltjes) integral properties of the corresponding atomic and molecular Bethe surfaces. Evaluation of these properties for hydrogenic targets provides a basis for clarifying the ranges of validity of the static, binary-encounter, and sum-rule approximations to differential and total inelastic cross sections generally employed in place of the correct Born results. A description is provided of moment-theory methods for calculations of the high-energy electron impact-excitation and -ionization cross sections and closely related Van Hove correlation functions of atomic and molecular targets. Attention is given to aspects of the Chebyshev-Stieltjes-Markoff moment theory and the Stieltjes and Chebyshev derivatives.

  2. Simulation study of the aerosol information content in OMI spectral reflectance measurements

    Directory of Open Access Journals (Sweden)

    B. Veihelmann


    The Ozone Monitoring Instrument (OMI) is an imaging UV-VIS solar backscatter spectrometer designed and used primarily to retrieve trace gases like O3 and NO2 from the measured Earth reflectance spectrum in the UV-visible (270–500 nm). However, aerosols are also an important science target of OMI. The multi-wavelength algorithm is used to retrieve aerosol parameters from OMI spectral reflectance measurements in up to 20 wavelength bands. A Principal Component Analysis (PCA) is performed to quantify the information content of OMI reflectance measurements on aerosols and to assess the capability of the multi-wavelength algorithm to discern various aerosol types. This analysis is applied to synthetic reflectance measurements for desert dust, biomass-burning aerosols, and weakly absorbing anthropogenic aerosol with a variety of aerosol optical thicknesses, aerosol layer altitudes, refractive indices and size distributions. The range of aerosol parameters considered covers the natural variability of tropospheric aerosols. This theoretical analysis is performed for a large number of scenarios with various geometries and surface albedo spectra for ocean, soil and vegetation. When the surface albedo spectrum is accurately known and clouds are absent, OMI reflectance measurements have 2 to 4 degrees of freedom that can be attributed to aerosol parameters. This information content depends on the observation geometry and the surface albedo spectrum. An additional wavelength band, comprising the O2-O2 absorption band at a wavelength of 477 nm, is also evaluated. It is found that this wavelength band adds significantly more information than any other individual band.
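    The degrees-of-freedom idea can be illustrated with a toy PCA: if synthetic "reflectances" are generated by a low-rank linear model plus measurement noise, the number of singular values standing above the noise floor recovers the number of underlying parameters. A hedged sketch (the linear forward model, Jacobian and noise level are invented; the actual OMI analysis uses full radiative transfer simulations):

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_scenes = 20, 500

# Hypothetical linear forward model: 3 aerosol parameters map to 20 bands
jac = rng.standard_normal((n_bands, 3))          # assumed spectral Jacobian
params = rng.standard_normal((3, n_scenes))      # parameter ensemble
noise = 0.05 * rng.standard_normal((n_bands, n_scenes))
refl = jac @ params + noise                      # synthetic reflectances

# PCA via SVD of the mean-centred ensemble
centred = refl - refl.mean(axis=1, keepdims=True)
s = np.linalg.svd(centred, compute_uv=False)

# Count components well above the expected noise singular value
noise_floor = 0.05 * np.sqrt(n_scenes)
dof = int(np.sum(s > 3.0 * noise_floor))
print(dof)
```

    With three independent latent parameters driving the ensemble, three singular values dominate, mirroring the 2-4 degrees of freedom reported for real OMI spectra.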

  3. Software as a service approach to sensor simulation software deployment (United States)

    Webster, Steven; Miller, Gordon; Mayott, Gregory


    Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute it for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision and Electronic Sensors Directorate (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields durable SaaS delivery without requiring simulation expertise from users. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and give the domain community the benefit of immediate deployment of lessons learned.

  4. In Situ Raman Spectral Characteristics of Carbon Dioxide in a Deep-Sea Simulator of Extreme Environments Reaching 300 ℃ and 30 MPa. (United States)

    Li, Lianfu; Du, Zengfeng; Zhang, Xin; Xi, Shichuan; Wang, Bing; Luan, Zhendong; Lian, Chao; Yan, Jun


    Deep-sea carbon dioxide (CO2) plays a significant role in the global carbon cycle and directly affects the living environment of marine organisms. In situ Raman detection technology is an effective approach to study the behavior of deep-sea CO2. However, the Raman spectral characteristics of CO2 can be affected by the environment, thus restricting the phase identification and quantitative analysis of CO2. In order to study the Raman spectral characteristics of CO2 in extreme environments (up to 300 ℃ and 30 MPa), which cover most regions of hydrothermal vents and cold seeps around the world, a deep-sea extreme environment simulator was developed. The Raman spectra of CO2 in different phases were obtained with the Raman insertion probe (RiP) system, which was also used for in situ Raman detection in the deep sea carried by the remotely operated vehicle (ROV) "Faxian". The Raman frequency shifts and bandwidths of gaseous, liquid, solid, and supercritical CO2 and the CO2-H2O system were determined with the simulator. In our experiments (0-300 ℃ and 0-30 MPa), the peak positions of the symmetric stretching modes of gaseous CO2, liquid CO2, and supercritical CO2 shift approximately 0.6 cm⁻¹ (1387.8-1388.4 cm⁻¹), 0.7 cm⁻¹ (1385.5-1386.2 cm⁻¹), and 2.5 cm⁻¹ (1385.7-1388.2 cm⁻¹), and those of the bending modes shift about 1.0 cm⁻¹ (1284.7-1285.7 cm⁻¹), 1.9 cm⁻¹ (1280.1-1282.0 cm⁻¹), and 4.4 cm⁻¹ (1281.0-1285.4 cm⁻¹), respectively. The Raman spectral characteristics of the CO2-H2O system were also studied under the same conditions. The peak positions of dissolved CO2 varied approximately 4.5 cm⁻¹ (1282.5-1287.0 cm⁻¹) and 2.4 cm⁻¹ (1274.4-1276.8 cm⁻¹) for each peak. In comparison with our experimental results, the phases of CO2 in extreme conditions (0-3000 m and 0-300 ℃) can be identified from the Raman spectra collected in situ. This qualitative research on CO2 can also support the

  5. SPINET: A Parallel Computing Approach to Spine Simulations

    Directory of Open Access Journals (Sweden)

    Peter G. Kropf


    Research in scientific programming enables us to realize more and more complex applications, while application-driven demands on computing methods and power are continuously growing. Interdisciplinary approaches therefore become more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid-body system simulations is presented. It automatically generates C code from a problem specification expressed in the Lagrange formalism using Maple.
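    The iterative solver mentioned above can be sketched in a few lines. This is a generic textbook conjugate gradient method for a symmetric positive-definite system, not the SPINET parallel implementation; the test matrix is an invented stand-in for a stiffness matrix:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    p = r.copy()                   # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

# Small SPD test system (illustrative stand-in for a stiffness matrix)
rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)    # symmetric, well conditioned
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

    In a finite element setting the matrix-vector product `A @ p` is the natural place to parallelize, which is what makes CG attractive on machines like MUSIC.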

  6. Optimized 1d-1v Vlasov-Poisson simulations using Fourier- Hermite spectral discretizations (United States)

    Schumer, Joseph Wade


    A 1d-1v spatially-periodic, Maxwellian-like, charged-particle phase-space distribution f(x, v, t) is represented by one of two different Fourier-Hermite basis sets (asymmetric or symmetric Hermite normalization) and evolved with a similarly transformed and filtered Vlasov-Poisson set of equations. The set of coefficients f^α_mn(t) is advanced through time with an O(Δt²)-accurate splitting method [1], using an O(Δt⁴) Runge-Kutta time-advancement scheme on the v ∂f/∂x and E ∂f/∂v terms separately, between which the self-consistent electric field is calculated. This method improves upon that of previous works by the combined use of two optimization techniques: exact Gaussian filtering [2] and variable velocity-scaled [3] Hermite basis functions [4]. The filter width, v_o, reduces the error introduced by the finite computational system, yet does not alter the low-order velocity modes; therefore, the self-consistent fields are not affected by the filtering. In addition, a variable velocity scale length U is introduced into the Hermite basis functions to provide improved spectral accuracy, yielding orders-of-magnitude reduction in the L2-norm error [5]. The asymmetric Hermite algorithm conserves particles and momentum exactly, and total energy in the limit of continuous time. However, this method does not conserve the Casimir invariant ∫∫ f² dx du and is, in fact, numerically unstable. The symmetric Hermite algorithm can conserve either particles and energy or momentum (in the limit of continuous time), depending on the parity of the highest-order Hermite function. Its conservation properties improve greatly with the use of velocity filtering. Also, the symmetric Hermite method conserves ∫∫ f² dx du and therefore remains numerically stable. Relative errors with respect to linear Landau damping and the linear bump-on-tail instability are shown to be less than 1% (orders of magnitude lower than those found in comparable Fourier-Fourier and PIC schemes). Varying the Hermite
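    The velocity-scaled symmetric Hermite basis can be illustrated numerically. The sketch below builds normalized Hermite functions with a velocity scale U and checks their orthonormality by simple quadrature (illustrative only; the normalization convention shown is one common choice and may differ in detail from the thesis):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def hermite_function(n, v, U=1.0):
    """Symmetrically normalised Hermite basis function psi_n(v) with velocity scale U."""
    x = v / U
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # selects the physicists' polynomial H_n
    norm = 1.0 / sqrt(2.0 ** n * factorial(n) * sqrt(pi) * U)
    return norm * hermval(x, coeffs) * np.exp(-0.5 * x * x)

# Numerical orthonormality check on a uniform velocity grid
U = 2.0
v = np.linspace(-30.0, 30.0, 4001)
dv = v[1] - v[0]
G = np.array([hermite_function(n, v, U) for n in range(6)])
gram = (G * dv) @ G.T                    # Gram matrix of inner products
print(np.round(gram, 6))
```

    Because the Gaussian weight decays rapidly, the trapezoid-like sum on a uniform grid reproduces the identity matrix to high accuracy; scaling U to match the thermal width of f is what buys the spectral accuracy mentioned above.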

  7. Learning from physics-based earthquake simulators: a minimal approach (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele


    Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
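    The fault-interaction ingredient can be summarized by the standard Coulomb stress-change relation, ΔCFF = Δτ + μ′Δσn. A minimal sketch (the sign convention and the effective friction value are illustrative assumptions, not the paper's calibrated choices):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault (same units as inputs).

    d_shear  -- shear stress change resolved in the slip direction (positive promotes slip)
    d_normal -- normal stress change (positive = unclamping); pore-pressure effects are
                folded into the effective friction coefficient mu_eff (assumed value)
    """
    return d_shear + mu_eff * d_normal

# A stress transfer that raises shear stress and unclamps the fault promotes failure
delta_cff = coulomb_stress_change(0.1, 0.05)   # MPa -> 0.12 MPa
print(delta_cff)
```

    A positive ΔCFF on a receiver fault brings it closer to failure; in a simulator this is typically applied to every fault after each synthetic event to modulate subsequent nucleation.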

  8. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation (United States)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa


    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation of diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on human skin of the hand during upper-limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
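    The regression step can be sketched with ordinary least squares: fit an absorbance spectrum as a linear combination of chromophore extinction spectra plus an offset. The extinction spectra below are invented stand-ins for the tabulated values the paper would use:

```python
import numpy as np

rng = np.random.default_rng(4)
wavelengths = np.array([500, 520, 540, 560, 580, 600])        # nm, as in the paper

# Hypothetical extinction-coefficient spectra (stand-ins for tabulated values)
eps_mel = np.exp(-(wavelengths - 500.0) / 80.0)               # melanin-like decay
eps_hbo = 0.5 + 0.4 * np.sin(wavelengths / 15.0)              # oxy-Hb-like variation
eps_hb = 0.6 + 0.3 * np.cos(wavelengths / 12.0)               # deoxy-Hb-like variation

X = np.column_stack([eps_mel, eps_hbo, eps_hb, np.ones(6)])   # predictors + offset

# Forward-simulate an absorbance spectrum with known "concentrations"
true_coef = np.array([0.8, 1.5, 0.7, 0.1])
absorbance = X @ true_coef + 1e-5 * rng.standard_normal(6)

# Multiple regression: recover the coefficients by least squares
coef, *_ = np.linalg.lstsq(X, absorbance, rcond=None)
print(coef)
```

    The recovered coefficients play the role of the paper's regression coefficients; in the actual method they are further mapped to concentrations via Monte-Carlo-derived conversion vectors.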

  9. Analog approach to mixed analog-digital circuit simulation (United States)

    Ogrodzki, Jan


    Logic simulation of digital circuits is a well-explored research area. Most up-to-date CAD tools for digital circuit simulation use an event-driven, selective-trace algorithm and Hardware Description Languages (HDL), e.g. the VHDL. These techniques also enable simulation of mixed circuits, in which an analog part is connected to the digital one through D/A and A/D converters. The event-driven mixed simulation applies a unified, digital-circuits-dedicated method to both digital and analog subsystems. In recent years HDL techniques have also been applied to mixed domains, as e.g. in the VHDL-AMS. This paper presents an approach dual to the event-driven one, in which the analog part together with the digital one and the converters is treated as an analog subsystem and is simulated by means of circuit simulation techniques. In our problem the analog solver used yields some numerical problems caused by the nonlinearities of digital elements. Efficient methods for overcoming these difficulties have been proposed.

  10. Evaluation of the methodologies used to generate random pavement profiles based on the power spectral density: An approach based on the International Roughness Index

    Directory of Open Access Journals (Sweden)

    Boris Jesús Goenaga


    Pavement roughness is the main variable producing vertical excitation in vehicles. Pavement profiles are the main determinant of (i) the discomfort perceived by users and (ii) the dynamic loads generated at the tire-pavement interface, hence their evaluation constitutes an essential step in a Pavement Management System. The present document evaluates two specific techniques used to simulate pavement profiles: the shaping filter and the sinusoidal approach, both based on the Power Spectral Density. Pavement roughness was evaluated using the International Roughness Index (IRI), the most widely used index to characterize longitudinal road profiles. Appropriate parameters were defined in the simulation process to obtain pavement profiles with specific ranges of IRI values using both simulation techniques. The results suggest that using the sinusoidal approach one can generate random profiles with IRI values representative of different road types; therefore, one could generate a profile for a paved or an unpaved road, covering all the categories defined by the ISO 8608 standard. On the other hand, to obtain similar results using the shaping-filter approximation, a modification of the simulation parameters is necessary. The new proposed values allow one to generate pavement profiles with high levels of roughness, covering a wider range of surface types. Finally, the results of the current investigation could be used to further improve our understanding of the effect of pavement roughness on tire-pavement interaction. The evaluated methodologies could be used to generate random profiles with specific levels of roughness to assess their effect on dynamic loads generated at the tire-pavement interface and users' perception of road condition.
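    The sinusoidal approach can be sketched as a superposition of cosines whose amplitudes follow an ISO 8608-style displacement PSD. The roughness coefficient (roughly class C) and the spatial-frequency band below are assumed values for illustration, not the paper's calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

# ISO 8608-style displacement PSD: Gd(n) = Gd(n0) * (n / n0) ** -2
n0 = 0.1                                 # reference spatial frequency, cycles/m
Gd_n0 = 64e-6                            # m^3, roughly ISO class C (assumed value)

n_lo, n_hi, N = 0.011, 2.83, 1000        # evaluated spatial-frequency band, cycles/m
ni = np.linspace(n_lo, n_hi, N)
dn = ni[1] - ni[0]
Gd = Gd_n0 * (ni / n0) ** -2

# Sinusoidal superposition with independent random phases
x = np.arange(0.0, 250.0, 0.25)          # 250 m profile sampled every 0.25 m
phi = rng.uniform(0.0, 2.0 * np.pi, N)
amp = np.sqrt(2.0 * Gd * dn)             # amplitude of each spectral component
z = (amp[None, :] * np.cos(2.0 * np.pi * ni[None, :] * x[:, None]
                           + phi[None, :])).sum(axis=1)
print(z.shape, z.std())
```

    Each draw of the phases yields a new random profile with the prescribed PSD; in the paper's workflow such profiles would then be scored with the IRI to check they fall in the intended roughness class.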

  11. Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing (United States)

    Cox, Cary M.

    This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research - there are five times as many radar-focused peer-reviewed journal articles as hyperspectral-focused ones. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to the image processing techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of the geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. A Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications. This work

  12. A spectral approach to compute the mean performance measures of the queue with low-order BMAP input

    Directory of Open Access Journals (Sweden)

    Ho Woo Lee


    This paper targets engineers and practitioners who want a simple procedure for computing the mean performance measures of the Batch Markovian Arrival Process (BMAP)/G/1 queueing system when the order of the parameter matrices is very low. We develop a set of system equations and derive the vector generating function of the queue length. Starting from the generating function, we propose a spectral approach that is accessible to anyone with basic knowledge of M/G/1 queues and eigenvalue algebra.
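    For readers wanting a concrete special case: with simple Poisson arrivals, the BMAP/G/1 mean queue length reduces to the classical Pollaczek-Khinchine formula. A minimal sketch (this is the textbook M/G/1 result, not the paper's matrix-analytic procedure):

```python
def mg1_mean_queue_length(lam, es, es2):
    """Pollaczek-Khinchine mean number in system for M/G/1.

    lam : Poisson arrival rate
    es  : mean service time E[S]
    es2 : second moment of service time E[S^2]
    """
    rho = lam * es                     # server utilization
    assert rho < 1.0, "queue must be stable"
    return rho + lam ** 2 * es2 / (2.0 * (1.0 - rho))

# Sanity check against M/M/1: E[S] = 1/mu, E[S^2] = 2/mu^2  ->  L = rho / (1 - rho)
lam, mu = 0.5, 1.0
L = mg1_mean_queue_length(lam, 1.0 / mu, 2.0 / mu ** 2)
print(L)   # -> 1.0, matching rho / (1 - rho) for rho = 0.5
```

    The spectral approach of the paper generalizes this kind of closed-form moment computation to batch Markovian arrivals via the eigenvalues of the low-order parameter matrices.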

  13. Multimode simulations of a wide field of view double-Fourier far-infrared spatio-spectral interferometer (United States)

    Bracken, Colm P.; Lightfoot, John; O'Sullivan, Creidhe; Murphy, J. Anthony; Donohoe, Anthony; Savini, Giorgio; Juanola-Parramon, Roser; on behalf of the FISICA Consortium


    In the absence of 50-m class space-based observatories, subarcsecond astronomy spanning the full far-infrared wavelength range will require space-based long-baseline interferometry. The long baselines of up to tens of meters are necessary to achieve the subarcsecond resolution demanded by the science goals. Also, practical observing times demand a field of view toward an arcminute (1′) or so, not achievable with a single on-axis coherent detector. This paper is concerned with an application of the end-to-end instrument simulator PyFIInS, developed as part of the FISICA project under funding from the European Commission's Seventh Framework Programme for Research and Technological Development (FP7). Predicted results of wide-field-of-view spatio-spectral interferometry through simulations of a long-baseline, double-Fourier, far-infrared interferometer concept are presented and analyzed. It is shown how such an interferometer, illuminated by a multimode detector, can recover a large field of view at subarcsecond angular resolution, resulting in similar image quality to that achieved by illuminating the system with an array of coherent detectors. Through careful analysis, the importance of accounting for the correct number of higher-order optical modes is demonstrated, as well as accounting for both orthogonal polarizations. Given that it is very difficult to manufacture waveguide and feed structures at sub-mm wavelengths, the larger multimode design is recommended over the array of smaller single-mode detectors. A brief note is provided in the conclusion of this paper addressing a more elegant solution to modeling far-infrared interferometers, which holds promise for improving the computational efficiency of the simulations presented here.

  14. Spectral analysis of forecast error investigated with an observing system simulation experiment

    Directory of Open Access Journals (Sweden)

    Nikki C. Privé


    The spectra of analysis and forecast error are examined using the observing system simulation experiment framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office. A global numerical weather prediction model, the Goddard Earth Observing System version 5 with Gridpoint Statistical Interpolation data assimilation, is cycled for 2 months with once-daily forecasts to 336 hours to generate a Control case. Verification of forecast errors using the nature run (NR) as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mis-characterising the spatial scales at which the strongest growth occurs. The NR-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realisation of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  15. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment (United States)

    Prive, N. C.; Errico, Ronald M.


    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Goddard Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  16. Alternative approaches to baseline estimation using calibrated simulations

    Energy Technology Data Exchange (ETDEWEB)

    Schuldt, M.A.; Romberger, J.S. [SBW Consulting, Inc., Bellevue, WA (United States)]


    Among the most accurate methods for estimating energy savings realized from a conservation measure is the use of a DOE-2 (or equivalent) simulation calibrated to hourly end-use load data. Several alternative approaches are available for estimating savings with calibrated simulations. Their application varies with factors such as the objective of the analysis, the complexity of the energy systems and measures encountered, the amount of available end-use consumption data, the construction type (new or retrofit), and available resources. This paper discusses three of these methods. The test/reference method is most commonly used for new construction. It requires a pair of buildings: a test building that contains the conservation measures and a reference building that does not. The before/after method is relevant only to retrofit construction. It requires only a test building, which serves as its own reference; separate pre-period and post-period models are prepared to reflect conditions before and after the conservation measures were installed. The measure removal method is useful for both new and retrofit construction. It involves a series of sensitivity runs with a post-period simulation to remove the effect of the installed measures. The paper discusses the advantages and disadvantages of these three approaches, their proper application, and actual field experience with them in commercial and multifamily settings. It also discusses the value of hourly end-use load data and calibrated simulations to each approach, the implications of baseline selection on the resulting savings estimates, the importance of corrections for pre/post changes that are unrelated to the measures, and the advantages that calibrated simulations bring to making these corrections.
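As a rough illustration of the before/after method described above, the sketch below contrasts a calibrated pre-retrofit model with a post-retrofit model driven by the same weather, so that the difference isolates the measure effect. The models, slopes, and temperatures are hypothetical stand-ins, not DOE-2 output:

```python
# Hedged sketch (not the authors' code): before/after savings estimation.
# Both calibrated models are driven by the same post-period weather.
def annual_savings(pre_model, post_model, weather):
    """Sum hourly savings predicted by the two calibrated models."""
    return sum(pre_model(w) - post_model(w) for w in weather)

# Toy stand-ins for calibrated simulations: consumption (kWh) vs outdoor temp.
pre = lambda t: 50 + 2.0 * max(0, t - 18)   # baseline cooling slope
post = lambda t: 50 + 1.4 * max(0, t - 18)  # measure reduces cooling slope
weather = [15, 20, 25, 30, 22]              # hypothetical hourly temperatures

print(round(annual_savings(pre, post, weather), 1))  # -> 15.0
```

The same structure covers the test/reference method if `pre` and `post` are replaced by models of the reference and test buildings.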

  17. Statistical Analysis of Hyper-Spectral Data: A Non-Gaussian Approach

    Directory of Open Access Journals (Sweden)

    M. Diani


    Full Text Available We investigate the statistical modeling of hyper-spectral data. Accurate modeling of experimental data is critical in target detection and classification applications. In fact, a statistical model capable of properly describing data variability leads to the derivation of the best decision strategies, together with a reliable assessment of algorithm performance. Most existing classification and target detection algorithms are based on the multivariate Gaussian model which, in many cases, deviates from the true statistical behavior of hyper-spectral data. This motivated us to investigate the capability of non-Gaussian models to represent data variability in each background class. In particular, we refer to models based on elliptically contoured (EC) distributions. We consider the multivariate EC-t distribution and two distinct mixture models based on EC distributions. We describe the methodology adopted for the statistical analysis and propose a technique to automatically estimate the unknown parameters of the statistical models. Finally, we discuss the results obtained by analyzing data gathered by the Multispectral Infrared and Visible Imaging Spectrometer (MIVIS) sensor.
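As a minimal illustration of the elliptically contoured idea, the sketch below scores pixels by squared Mahalanobis distance, the elliptical radius that all EC models (Gaussian and EC-t alike) share; only the tail law attached to that radius differs between models. The data and dimensions are synthetic, and this is not the authors' estimation technique:

```python
import numpy as np

# Hedged sketch: background pixels modeled by an elliptically contoured
# distribution. The Mahalanobis distance is the common "elliptical radius".
rng = np.random.default_rng(0)
bg = rng.standard_normal((500, 4))          # hypothetical background pixels
mu = bg.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(bg, rowvar=False))

def mahalanobis2(x):
    """Squared Mahalanobis distance of pixel x from the background."""
    d = x - mu
    return float(d @ cov_inv @ d)

# A pixel far outside the background ellipsoid scores high regardless of
# whether the tail is Gaussian or heavier (EC-t).
print(mahalanobis2(mu) < mahalanobis2(mu + 5.0))  # -> True
```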

  18. A New Statistical Approach to the Optical Spectral Variability in Blazars

    Directory of Open Access Journals (Sweden)

    Jose A. Acosta-Pulido


    Full Text Available We present a spectral variability study of a sample of about 25 bright blazars, based on optical spectroscopy. Observations cover the period from the end of 2008 to mid 2015, with an approximately monthly cadence. Emission lines have been identified and measured in the spectra, which permits us to classify the sources as BL Lac-type or FSRQs according to the commonly used EW limit. We have obtained synthetic photometry and produced colour-magnitude diagrams, which show different trends associated with the object classes: generally, BL Lacs tend to become bluer when brighter and FSRQs redder when brighter, although several objects exhibit both trends, depending on brightness. We have also applied a pattern recognition algorithm to obtain the minimum number of physical components which can explain the variability of the optical spectrum. We have used NMF (Non-Negative Matrix Factorization) instead of PCA (Principal Component Analysis) to avoid unrealistic negative components. For most targets we found that 2 or 3 meta-components are enough to explain the observed spectral variability.
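A minimal sketch of the non-negativity property that motivates NMF over PCA here, using the classic Lee-Seung multiplicative updates on toy spectra (synthetic data; not the pipeline used in the paper):

```python
import numpy as np

# Lee-Seung multiplicative updates: W and H stay non-negative at every
# iteration, unlike PCA components, which can go negative.
def nmf(V, k, iters=300, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "observed spectra": non-negative mixtures of two base components.
base = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 1.0]])
mix = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
V = mix @ base
W, H = nmf(V, k=2)
print(bool((W >= 0).all() and (H >= 0).all()))  # -> True
```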

  19. A Novel Approach to Thermal Design of Solar Modules: Selective-Spectral and Radiative Cooling

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Xingshu; Dubey, Rajiv; Chattopadhyay, Shashwata; Khan, Mohammad Ryyan; Chavali, Raghu Vamsi; Silverman, Timothy J.; Kottantharayil, Anil; Vasi, Juzer; Alam, Muhammad Ashraful


    For commercial solar modules, up to 80% of the incoming sunlight may be dissipated as heat, potentially raising the module temperature 20-30 degrees C above ambient. In the long run, extreme self-heating may erode efficiency and shorten lifetime, dramatically reducing the total energy output by almost 10%. Therefore, it is critically important to develop effective and practical cooling methods to combat PV self-heating. In this paper, we explore two fundamental sources of PV self-heating, namely, sub-bandgap absorption and imperfect thermal radiation. The analysis suggests redesigning the optical and thermal properties of the solar module to eliminate parasitic absorption (selective-spectral cooling) and to enhance thermal emission to the cold cosmos (radiative cooling). The proposed techniques should cool the module by ~10 degrees C, which would be reflected in a significant long-term energy gain (~3% to 8% over 25 years) for PV systems under different climatic conditions.
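A back-of-the-envelope sketch of the energy-balance reasoning above, with a linearized lumped heat-loss coefficient; all numbers below are hypothetical illustrations, not values from the paper:

```python
# Steady-state module temperature from a linearized energy balance:
#   T_mod ~ T_amb + Q_heat / h_total,
# where Q_heat is absorbed sunlight not converted to electricity and
# h_total lumps convective and radiative heat transfer (hypothetical values).
def module_temp(irradiance, absorbed_frac, efficiency, t_amb, h_total):
    q_heat = irradiance * (absorbed_frac - efficiency)  # W/m^2 dissipated
    return t_amb + q_heat / h_total                     # degrees C

# Baseline module vs one with selective-spectral cooling (sub-bandgap
# light reflected, so less sunlight is absorbed as heat).
base = module_temp(1000.0, 0.9, 0.2, 25.0, 30.0)
cooled = module_temp(1000.0, 0.8, 0.2, 25.0, 30.0)
print(round(base - cooled, 1))  # -> 3.3 degrees C for these toy numbers
```

Raising `h_total` (the radiative-cooling route) lowers the temperature in the same linearized picture.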

  20. Metamodelling Approach and Software Tools for Physical Modelling and Simulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Mezhuyev


    Full Text Available In computer science, the metamodelling approach is becoming increasingly popular for the development of software systems. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. This analysis reveals invariant physical structures, which we propose to use as the basic abstractions of the physical metamodel: a system of geometrical objects that allows one to build the spatial structure of physical models and to set a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider developed prototypes of software tools.

  1. The acidic pH-induced structural changes in apo-CP43 by spectral methodologies and molecular dynamics simulations (United States)

    Wang, Wang; Li, Xue; Wang, Qiuying; Zhu, Xixi; Zhang, Qingyan; Du, Linfang


    CP43 is closely associated with photosystem II and exists in the plant thylakoid membranes. Its acidic pH-induced structural changes were investigated by fluorescence spectra, ANS spectra, RLS spectra, energy transfer experiments, acrylamide fluorescence quenching assays, and MD simulations. The fluorescence spectra indicated that the acidic pH-induced structural transition follows a four-state model comprising the native state (N), a partially unfolded state (PU), a refolded state (R), and a molten-globule state (M). Analysis of the ANS spectra showed that the inner hydrophobic core became partially exposed to the surface below pH 2.0, also implying the existence of the molten-globule state. The RLS spectra showed aggregation of apo-CP43 around the pI (pH 4.5-4.0). Alterations in the apo-CP43 secondary structure under different acidic treatments were confirmed by FTIR spectra. The energy transfer and quenching experiments demonstrated that the structure at pH 4.0 was the loosest. The RMSF suggested that the two terminals play an important role in the acidic denaturation process. The distance between the two terminals showed slight differences over the acidic pH-induced process: in the unfolding process both the N-terminal and the C-terminal played a dominant role, whereas in the refolding process the N-terminal accounted for the main part. All SASA values corresponded to the spectral results. The tertiary and secondary structures from the MD simulations indicated that part of the transmembrane α-helix was destroyed at low pH.

  2. Shower approach in the simulation of ion scattering from solids. (United States)

    Khodyrev, V A; Andrzejewski, R; Rivera, A; Boerma, D O; Prieto, J E


    An efficient approach for the simulation of ion scattering from solids is proposed. For every encountered atom, we take multiple samples of its thermal displacements among those which result in scattering with high probability to finally reach the detector. As a result, the detector is illuminated by intense "showers," where each detection event is weighted according to the actual probability of the atom displacement. The computational cost of such a simulation is orders of magnitude lower than in the direct approach, and a comprehensive analysis of multiple and plural scattering effects becomes possible. We use this method for two purposes. First, the accuracy of approximate approaches, developed mainly for ion-beam structural analysis, is verified. Second, the possibility of reproducing a wide class of experimental conditions is used to analyze some basic features of ion-solid collisions: the role of double violent collisions in low-energy ion scattering; the origin of the "surface peak" in scattering from amorphous samples; the low-energy tail in the energy spectra of scattered medium-energy ions due to plural scattering; and the degradation of blocking patterns in two-dimensional angular distributions with increasing depth of scattering. As an example of simulation for ions of MeV energies, we verify the time reversibility of channeling and blocking for 1-MeV protons in a W crystal. The possibilities of analysis that our approach offers may be very useful for various applications, in particular for structural analysis with atomic resolution. © 2011 American Physical Society

  3. An Alternative Approach to Simulating an Entire Particle Erosion Experiment

    Directory of Open Access Journals (Sweden)

    Dirk Spaltmann


    Full Text Available Solid particle erosion affects many areas, such as dust or volcanic ash in aero-engines. The development of protective materials and surface engineering is costly and time consuming, and much effort has been put into developing models to speed up this process. Finite element or discrete element-based models are quite successful in predicting single or multiple impacts; however, they reach their limit if an entire erosion experiment is to be simulated. Therefore, in the present work, an approach is presented which combines various aspects of the former models with probability considerations. It is used to simulate the impact of more than one billion alumina particles onto a steel substrate. This approach permits the simulation of an entire erosion experiment on an average PC (i5-2520M CPU@2.5 GHz processor, 4 GB main memory) within about six hours. The respective predictions of the wear scar and the impact-mass/mass-loss curve are compared to the real experiment.

  4. Coupled multi-physics simulation frameworks for reactor simulation: A bottom-up approach

    International Nuclear Information System (INIS)

    Tautges, Timothy J.; Caceres, Alvaro; Jain, Rajeev; Kim, Hong-Jun; Kraftcheck, Jason A.; Smith, Brandon M.


    A 'bottom-up' approach to multi-physics frameworks is described, in which common interfaces to simulation data are developed first, and existing physics modules are then adapted to communicate through those interfaces. Physics modules read and write data through these common interfaces, which also provide access to common simulation services such as parallel IO and mesh partitioning. Multi-physics codes are assembled as a combination of physics modules, services, interface implementations, and driver code which coordinates calling these various pieces. Examples of various physics modules and services connected to this framework are given. (author)

  5. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Eisenbach, M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)


    A new scaling approach has been proposed for the spin exchange and dipole–dipole interaction energies as a function of system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.

  6. Ergonomics and simulation-based approach in improving facility layout (United States)

    Abad, Jocelyn D.


    The use of simulation-based techniques in facility layout has been popular in industry due to its convenience and efficient generation of results. Nevertheless, the solutions generated are not capable of addressing delays due to workers' health and safety, which significantly impact overall operational efficiency. It is, therefore, critical to incorporate ergonomics in facility design. In this study, workstation analysis was incorporated into a Promodel simulation to improve the facility layout of a garment manufacturing plant. To test the effectiveness of the method, the existing and improved facility designs were measured in terms of comprehensive risk level, efficiency, and productivity. Results indicated that the improved facility layout yielded a decrease in comprehensive risk level and rapid upper limb assessment score, a 78% increase in efficiency, and a 194% increase in productivity compared to the existing design, proving that the approach is effective in attaining overall facility design improvement.

  7. A Computational Approach for Probabilistic Analysis of Water Impact Simulations (United States)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.


    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those addressed in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations in models that allow users to interpolate solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis-of-variance approach used in the sensitivity studies, the equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.

  8. A reservoir simulation approach for modeling of naturally fractured reservoirs

    Directory of Open Access Journals (Sweden)

    H. Mohammadi


    Full Text Available In this investigation, the Warren and Root model proposed for the simulation of naturally fractured reservoirs was improved. A reservoir simulation approach was used to develop a 2D model of a synthetic oil reservoir. The main rock properties were defined for two different types of gridblocks, called matrix and fracture gridblocks; the two types differed in porosity and permeability, both of which were higher for fracture gridblocks than for matrix gridblocks. The model was solved using the implicit finite difference method. Results showed an improvement over the Warren and Root model, especially in region 2 of the semilog plot of pressure drop versus time, which indicated a linear transition zone with no inflection point, as predicted by other investigators. The effects of fracture spacing, fracture permeability, fracture porosity, matrix permeability and matrix porosity on the behavior of a typical naturally fractured reservoir were also presented.
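The implicit finite-difference idea can be sketched on a generic 1D pressure-diffusion problem (backward Euler in time, one linear solve per step). This toy single-porosity example is only an illustration of the numerical scheme, not the authors' 2D dual-porosity model:

```python
import numpy as np

# Backward-Euler step for 1D pressure diffusion p_t = a * p_xx: each time
# step solves a tridiagonal linear system, which makes the scheme
# unconditionally stable (the reason implicit methods are used here).
def implicit_step(p, a, dt, dx):
    n = len(p)
    r = a * dt / dx ** 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    return np.linalg.solve(A, p)

p = np.array([0.0, 0.0, 100.0, 0.0, 0.0])  # initial pressure pulse
for _ in range(20):
    p = implicit_step(p, a=1.0, dt=0.1, dx=1.0)
print(bool(p.max() < 100.0 and p.min() > 0.0))  # pulse spreads and decays
```

A production code would exploit the tridiagonal structure (Thomas algorithm) instead of a dense solve.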

  9. Simulation approaches to probabilistic structural design at the component level

    International Nuclear Information System (INIS)

    Stancampiano, P.A.


    In this paper, structural failure of large nuclear components is viewed as a random process with a low probability of occurrence. Therefore, a statistical interpretation of probability does not apply, and statistical inferences cannot be made due to the sparsity of actual structural failure data. In such cases, analytical estimates of the failure probabilities may be obtained from stress-strength interference theory. Since the majority of real design applications are complex, numerical methods are required to obtain solutions. Monte Carlo simulation appears to be the best general numerical approach. However, meaningful applications of simulation methods suggest research activities in three categories: methods development, failure mode models development, and statistical data models development. (Auth.)
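A minimal sketch of stress-strength interference evaluated by Monte Carlo simulation, with hypothetical normal distributions for load and resistance (illustrative only; the distributions and parameters are not from the paper):

```python
import random

# Stress-strength interference: failure occurs when stress exceeds
# strength, so P_f = P(stress > strength), estimated by sampling.
def failure_probability(n, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        stress = rng.gauss(400.0, 40.0)    # hypothetical load, MPa
        strength = rng.gauss(600.0, 50.0)  # hypothetical resistance, MPa
        if stress > strength:
            failures += 1
    return failures / n

est = failure_probability(100_000)
print(0.0 < est < 0.01)  # rare event; analytic value is ~9e-4 here
```

For the very low probabilities typical of nuclear components, plain sampling becomes inefficient and variance-reduction techniques (e.g. importance sampling) are usually needed.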

  10. Spectral- and size-resolved mass absorption efficiency of mineral dust aerosols in the shortwave spectrum: a simulation chamber study (United States)

    Caponi, Lorenzo; Formenti, Paola; Massabó, Dario; Di Biagio, Claudia; Cazaunau, Mathieu; Pangui, Edouard; Chevaillier, Servanne; Landrot, Gautier; Andreae, Meinrat O.; Kandler, Konrad; Piketh, Stuart; Saeed, Thuraya; Seibert, Dave; Williams, Earle; Balkanski, Yves; Prati, Paolo; Doussin, Jean-François


    This paper presents new laboratory measurements of the mass absorption efficiency (MAE) between 375 and 850 nm for 12 individual samples of mineral dust from different source areas worldwide and in two size classes: PM10.6 (mass fraction of particles of aerodynamic diameter lower than 10.6 µm) and PM2.5 (mass fraction of particles of aerodynamic diameter lower than 2.5 µm). The experiments were performed in the CESAM simulation chamber using mineral dust generated from natural parent soils and included optical and gravimetric analyses. The results show that the MAE values are lower for the PM10.6 mass fraction (range 37-135 × 10^-3 m^2 g^-1 at 375 nm) than for the PM2.5 (range 95-711 × 10^-3 m^2 g^-1 at 375 nm) and decrease with increasing wavelength as λ^-AAE, where the Ångström absorption exponent (AAE) averages between 3.3 and 3.5, regardless of size. The size independence of AAE suggests that, for a given size distribution, the dust composition did not vary with size for this set of samples. Because of its high atmospheric concentration, light absorption by mineral dust can be competitive with black and brown carbon even during atmospheric transport over heavily polluted regions, when dust concentrations are significantly lower than at emission. The AAE values of mineral dust are higher than for black carbon (~1) but in the same range as light-absorbing organic (brown) carbon. As a result, depending on the environment, there can be some ambiguity in apportioning the aerosol absorption optical depth (AAOD) based on spectral dependence, which is relevant to the development of remote sensing of light-absorbing aerosols and their assimilation in climate models. We suggest that the sample-to-sample variability in our dataset of MAE values is related to regional differences in the mineralogical composition of the parent soils. Particularly in the PM2.5 fraction, we found a strong linear correlation between the dust light-absorption properties and elemental
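The MAE ∝ λ^(-AAE) power law can be illustrated with a short fit: given MAE values at several wavelengths, the AAE is minus the slope of a log-log linear regression. The values below are synthetic, generated to mimic the reported behaviour, not the paper's measurements:

```python
import math

# Synthetic MAE values following an exact power law MAE ~ lambda**(-AAE),
# so the fit should recover the exponent used to generate them.
wavelengths = [375.0, 450.0, 550.0, 660.0, 850.0]              # nm
aae_true = 3.4
mae = [0.1 * (w / 375.0) ** (-aae_true) for w in wavelengths]  # m^2/g, toy

# Ordinary least squares in log-log space; AAE = -slope.
x = [math.log(w) for w in wavelengths]
y = [math.log(m) for m in mae]
n = len(x)
slope = (n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)) / (
    n * sum(a * a for a in x) - sum(x) ** 2)
print(round(-slope, 2))  # -> 3.4, the exponent used to generate the data
```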


    International Nuclear Information System (INIS)

    Laurent, Philippe; Titarchuk, Lev


    We present herein a theoretical study of correlations between the spectral indices of emergent X-ray spectra and the mass accretion rate (ṁ) in black hole (BH) sources, which provide a definitive signature for BHs. It has been firmly established, using the Rossi X-ray Timing Explorer (RXTE) in numerous BH observations during hard-soft state spectral evolution, that the photon index of X-ray spectra increases when ṁ increases and, moreover, that the index saturates at high values of ṁ. In this paper, we present theoretical arguments that the observationally established index saturation effect versus mass accretion rate is a signature of the bulk (converging) flow onto the BH. Also, we demonstrate that the index saturation value depends on the plasma temperature of the converging flow. We self-consistently calculate the Compton cloud (CC) plasma temperature as a function of mass accretion rate using the energy balance between energy dissipation and Compton cooling. We explain the observable phenomenon of index-ṁ correlations using a Monte Carlo simulation of radiative processes in the innermost part (CC) of a BH source, and we account for the Comptonization processes in the presence of thermal and bulk motions as basic types of plasma motion. We show that, when ṁ increases, BH sources evolve to the high and very soft states (HSS and VSS, respectively), in which strong blackbody(BB)-like and steep power-law components are formed in the resulting X-ray spectrum. The simultaneous detection of these two components strongly depends on the sensitivity of high-energy instruments, given that the relative contribution of the hard power-law tail in the resulting VSS spectrum can be very low, which is why, to date, RXTE observations of the VSS X-ray spectrum have been characterized by the presence of the strong BB-like component only. We also predict specific patterns for the high-energy e-fold (cutoff) energy (E_fold) evolution with ṁ for thermal and dynamical (bulk

  12. Spectral- and size-resolved mass absorption efficiency of mineral dust aerosols in the shortwave spectrum: a simulation chamber study

    Directory of Open Access Journals (Sweden)

    L. Caponi


    Full Text Available This paper presents new laboratory measurements of the mass absorption efficiency (MAE) between 375 and 850 nm for 12 individual samples of mineral dust from different source areas worldwide and in two size classes: PM10.6 (mass fraction of particles of aerodynamic diameter lower than 10.6 µm) and PM2.5 (mass fraction of particles of aerodynamic diameter lower than 2.5 µm). The experiments were performed in the CESAM simulation chamber using mineral dust generated from natural parent soils and included optical and gravimetric analyses. The results show that the MAE values are lower for the PM10.6 mass fraction (range 37–135 × 10^-3 m^2 g^-1 at 375 nm) than for the PM2.5 (range 95–711 × 10^-3 m^2 g^-1 at 375 nm) and decrease with increasing wavelength as λ^-AAE, where the Ångström absorption exponent (AAE) averages between 3.3 and 3.5, regardless of size. The size independence of AAE suggests that, for a given size distribution, the dust composition did not vary with size for this set of samples. Because of its high atmospheric concentration, light absorption by mineral dust can be competitive with black and brown carbon even during atmospheric transport over heavily polluted regions, when dust concentrations are significantly lower than at emission. The AAE values of mineral dust are higher than for black carbon (~1) but in the same range as light-absorbing organic (brown) carbon. As a result, depending on the environment, there can be some ambiguity in apportioning the aerosol absorption optical depth (AAOD) based on spectral dependence, which is relevant to the development of remote sensing of light-absorbing aerosols and their assimilation in climate models. We suggest that the sample-to-sample variability in our dataset of MAE values is related to regional differences in the mineralogical composition of the parent soils. Particularly in the PM2.5 fraction, we found a strong

  13. Integration of Phenotypic Metadata and Protein Similarity in Archaea Using a Spectral Bipartitioning Approach

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, Sean D.; Anderson, Iain J; Pati, Amrita; Dalevi, Daniel; Mavromatis, Konstantinos; Kyrpides, Nikos C


    In order to simplify and meaningfully categorize large sets of protein sequence data, it is commonplace to cluster proteins based on the similarity of those sequences. However, it quickly becomes clear that the sequence flexibility allowed a given protein varies significantly among different protein families. The degree to which sequences are conserved not only differs for each protein family, but also is affected by the phylogenetic divergence of the source organisms. Clustering techniques that use similarity thresholds for protein families do not always allow for these variations and thus cannot be confidently used for applications such as automated annotation and phylogenetic profiling. In this work, we applied a spectral bipartitioning technique to all proteins from 53 archaeal genomes. Comparisons between different taxonomic levels allowed us to study the effects of phylogenetic distances on cluster structure. Likewise, by associating functional annotations and phenotypic metadata with each protein, we could compare our protein similarity clusters with both protein function and associated phenotype. Our clusters can be analyzed graphically and interactively online.

  14. Foodsheds in Virtual Water Flow Networks: A Spectral Graph Theory Approach

    Directory of Open Access Journals (Sweden)

    Nina Kshetry


    Full Text Available A foodshed is a geographic area from which a population derives its food supply, but a method to determine boundaries of foodsheds has not been formalized. Drawing on the food–water–energy nexus, we propose a formal network science definition of foodsheds by using data from virtual water flows, i.e., water that is virtually embedded in food. In particular, we use spectral graph partitioning for directed graphs. If foodsheds turn out to be geographically compact, it suggests the food system is local and therefore reduces energy and externality costs of food transport. Using our proposed method we compute foodshed boundaries at the global-scale, and at the national-scale in the case of two of the largest agricultural countries: India and the United States. Based on our determination of foodshed boundaries, we are able to better understand commodity flows and whether foodsheds are contiguous and compact, and other factors that impact environmental sustainability. The formal method we propose may be used more broadly to study commodity flows and their impact on environmental sustainability.
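A minimal sketch of spectral graph bipartitioning on a toy flow matrix: the sign pattern of the Fiedler vector (the Laplacian eigenvector belonging to the second-smallest eigenvalue) splits the network at its weakest link. This uses a symmetric toy matrix rather than the directed-graph machinery of the paper, and the flows are hypothetical:

```python
import numpy as np

# Hypothetical virtual water flows between six regions: two tightly
# connected groups joined by a single weak link (symmetric toy matrix).
W = np.array([
    [0, 5, 4, 0, 0, 0],
    [5, 0, 6, 0, 0, 0],
    [4, 6, 0, 1, 0, 0],
    [0, 0, 1, 0, 7, 5],
    [0, 0, 0, 7, 0, 6],
    [0, 0, 0, 5, 6, 0]], dtype=float)

L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)      # eigh returns ascending eigenvalues
fiedler = vecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
groups = sorted(
    [set(map(int, np.where(fiedler > 0)[0])),
     set(map(int, np.where(fiedler <= 0)[0]))],
    key=min)
print(groups)  # the cut falls on the weak link between nodes 2 and 3
```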

  15. A Spectral Approach for Quenched Limit Theorems for Random Expanding Dynamical Systems (United States)

    Dragičević, D.; Froyland, G.; González-Tokman, C.; Vaienti, S.


    We prove quenched versions of (i) a large deviations principle (LDP), (ii) a central limit theorem (CLT), and (iii) a local central limit theorem for non-autonomous dynamical systems. A key advance is the extension of the spectral method, commonly used in limit laws for deterministic maps, to the general random setting. We achieve this via multiplicative ergodic theory and the development of a general framework to control the regularity of Lyapunov exponents of twisted transfer operator cocycles with respect to a twist parameter. While some versions of the LDP and CLT have previously been proved with other techniques, the local central limit theorem is, to our knowledge, a completely new result, and one that demonstrates the strength of our method. Applications include non-autonomous (piecewise) expanding maps, defined by random compositions of the form $T_{\sigma^{n-1}\omega} \circ \cdots \circ T_{\sigma\omega} \circ T_\omega$. An important aspect of our results is that we only assume ergodicity and invertibility of the random driving $\sigma: \Omega \to \Omega$; in particular, no expansivity or mixing properties are required.

  16. Simulated annealing band selection approach for hyperspectral imagery (United States)

    Chang, Yang-Lang; Fang, Jyh-Perng; Hsu, Wei-Lieh; Chang, Lena; Chang, Wen-Yen


    In hyperspectral imagery, the greedy modular eigenspace (GME) method was developed to cluster highly correlated bands into smaller subsets using a greedy algorithm. Unfortunately, it is hard for GME to find the optimal set except by exhaustive iteration, and the long execution time has been its major drawback in practice; finding the optimal (or near-optimal) solution is very expensive. Instead of adopting the band-subset-selection paradigm underlying this approach, we introduce a simulated annealing band selection (SABS) approach, which selects sets of non-correlated bands for high-dimensional remote sensing images based on a heuristic optimization algorithm. It utilizes the inherent separability of different classes embedded in high-dimensional data sets to reduce dimensionality and formulate the optimal or near-optimal GME feature. Our proposed SABS scheme has a number of merits. Unlike traditional principal component analysis, it avoids the bias problems that arise from transforming the information into linear combinations of bands. SABS not only speeds up the procedure of simultaneously selecting the most significant features according to the simulated annealing optimization scheme to find GME sets, but also extends the search in the solution space, reaching the global optimum or a near-optimal solution and escaping from local minima. The effectiveness of the proposed SABS is evaluated on NASA MODIS/ASTER (MASTER) airborne simulator data sets and airborne synthetic aperture radar images for land cover classification during the Pacrim II campaign. The performance of SABS is validated by a supervised k-nearest neighbor classifier. The experimental results show that SABS is an effective technique for band subset selection and can be used as an alternative to existing dimensionality reduction methods.
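A minimal sketch of simulated annealing for band subset selection: minimize the mean pairwise band correlation, accepting occasional uphill moves with probability exp(-Δ/T) so the search can escape local minima. The correlation matrix and cost function are toy stand-ins, not the SABS formulation:

```python
import math
import random

# Simulated annealing over k-band subsets; a move swaps one band in/out.
def sa_select(corr, k, steps=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    n = len(corr)

    def cost(sel):  # mean absolute pairwise correlation of selected bands
        s = sorted(sel)
        pairs = [(a, b) for i, a in enumerate(s) for b in s[i + 1:]]
        return sum(abs(corr[a][b]) for a, b in pairs) / len(pairs)

    current = set(rng.sample(range(n), k))
    best, best_c = set(current), cost(current)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6        # linear cooling schedule
        cand = set(current)
        cand.remove(rng.choice(sorted(cand)))
        cand.add(rng.choice([b for b in range(n) if b not in cand]))
        delta = cost(cand) - cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = cand                         # accept (maybe uphill)
        if cost(current) < best_c:
            best, best_c = set(current), cost(current)
    return sorted(best), best_c

# Toy correlation matrix: bands 0-2 mutually correlated, bands 3-5
# mutually correlated, low correlation across the two groups.
corr = [[1.0 if i == j else (0.9 if i // 3 == j // 3 else 0.1)
         for j in range(6)] for i in range(6)]
bands, mean_corr = sa_select(corr, k=2)
print(round(mean_corr, 2))  # best pair spans the two groups
```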

  17. Biomechanical simulation of thorax deformation using finite element approach. (United States)

    Zhang, Guangzhi; Chen, Xian; Ohgi, Junji; Miura, Toshiro; Nakamoto, Akira; Matsumura, Chikanori; Sugiura, Seiryo; Hisada, Toshiaki


    The biomechanical simulation of the human respiratory system is expected to be a useful tool for the diagnosis and treatment of respiratory diseases. Because the deformation of the thorax significantly influences airflow in the lungs, we focused on simulating the thorax deformation by introducing contraction of the intercostal muscles and diaphragm, which are the main muscles responsible for the thorax deformation during breathing. We constructed a finite element model of the thorax, including the rib cage, intercostal muscles, and diaphragm. To reproduce the muscle contractions, we introduced the Hill-type transversely isotropic hyperelastic continuum skeletal muscle model, which allows the intercostal muscles and diaphragm to contract along the direction of the fibres with clinically measurable muscle activation and active force-length relationship. The anatomical fibre orientations of the intercostal muscles and diaphragm were introduced. Thorax deformation consists of movements of the ribs and diaphragm. By activating muscles, we were able to reproduce the pump-handle and bucket-handle motions for the ribs and the clinically observed motion for the diaphragm. In order to confirm the effectiveness of this approach, we simulated the thorax deformation during normal quiet breathing and compared the results with four-dimensional computed tomography (4D-CT) images for verification. Thorax deformation can be simulated by modelling the respiratory muscles according to continuum mechanics and by introducing muscle contractions. The reproduction of representative motions of the ribs and diaphragm and the comparison of the thorax deformations during normal quiet breathing with 4D-CT images demonstrated the effectiveness of the proposed approach. This work may provide a platform for establishing a computational mechanics model of the human respiratory system.

  18. Discrete event simulation versus conventional system reliability analysis approaches

    DEFF Research Database (Denmark)

    Kozine, Igor


    Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models...... and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author’s experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches...

  19. Modelling and simulating retail management practices: a first approach


    Siebers, Peer-Olaf; Aickelin, Uwe; Celia, Helen; Clegg, Chris


    Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Despite the fact that we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizati...

  20. Modeling and spectral simulation of matrix-isolated molecules by density functional calculations: a case study on formic acid dimer. (United States)

    Ito, Fumiyuki


    The supermolecule approach has been used to model molecules embedded in solid argon matrix, wherein interaction between the guest and the host atoms in the first solvation shell is evaluated with the use of density functional calculations. Structural stability and simulated spectra have been obtained for formic acid dimer (FAD)-Ar(n) (n = 21-26) clusters. The calculations at the B971/6-31++G(3df,3pd) level have shown that the tetrasubstitutional site on the Ar(111) plane is likely to incorporate FAD most stably, in view of consistency with the matrix shifts available experimentally.

  1. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    International Nuclear Information System (INIS)

    Baethgen, W.E.


    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically cause lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N2O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  2. Spectral diagnostics of late-type stars: Non-LTE and 3D approach

    Directory of Open Access Journals (Sweden)

    Collet R.


    We determine the effective temperature, metallicity, and microturbulence for a number of well-studied late-type stars. We use the new NLTE atomic model of Fe, and discuss the results for the MARCS models, as well as for the spatial and temporal averages of full 3D hydrodynamical simulations of stellar convection. It is shown that, contrary to the mean 3D models, certain limitations must be imposed on the line formation and spectrum synthesis calculations with classical hydrostatic 1D models to obtain physically realistic results.

  3. Co-clustering Analysis of Weblogs Using Bipartite Spectral Projection Approach

    DEFF Research Database (Denmark)

    Xu, Guandong; Zong, Yu; Dolog, Peter


    Web clustering is an approach for aggregating Web objects into various groups according to underlying relationships among them. Finding co-clusters of Web objects is an interesting topic in the context of Web usage mining, which is able to capture the underlying user navigational interest...

  4. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    International Nuclear Information System (INIS)

    Saito, Masatoshi


    Dual-energy contrast agent-enhanced mammography is a technique for demonstrating breast cancers obscured by the cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method, without switching the tube voltages, is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case the mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm2 iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to the spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
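The dual-energy subtraction principle behind this technique can be sketched as a weighted log-subtraction whose weight is chosen to cancel the glandular/adipose contrast while retaining iodine. The attenuation coefficients below are invented for illustration, not measured values:

```python
# Hypothetical linear attenuation coefficients (1/cm) at the (low, high)
# energy beams for glandular tissue, adipose tissue, and iodine.
MU = {"gland": (0.80, 0.50), "adipose": (0.55, 0.35), "iodine": (25.0, 8.0)}

def log_signal(thicknesses, energy):
    """-ln of the transmitted intensity for a stack of materials (cm)."""
    return sum(MU[m][energy] * t for m, t in thicknesses.items())

# Choose the weight w so the glandular/adipose contrast cancels:
# w = (mu_gland - mu_adipose)_low / (mu_gland - mu_adipose)_high
w = (MU["gland"][0] - MU["adipose"][0]) / (MU["gland"][1] - MU["adipose"][1])

def dual_energy(thicknesses):
    return log_signal(thicknesses, 0) - w * log_signal(thicknesses, 1)

# Swapping 1 cm of gland for adipose leaves the subtracted signal unchanged...
a = dual_energy({"gland": 3.0, "adipose": 2.0, "iodine": 0.0})
b = dual_energy({"gland": 2.0, "adipose": 3.0, "iodine": 0.0})
# ...while a thin iodine layer survives the subtraction.
c = dual_energy({"gland": 3.0, "adipose": 2.0, "iodine": 0.001})
```

The paper's contribution is obtaining the two effective spectra with balanced filters at a single tube voltage rather than with two exposures; the subtraction arithmetic is the same.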

  5. Modification of the TASMIP x-ray spectral model for the simulation of microfocus x-ray sources

    Energy Technology Data Exchange (ETDEWEB)

    Sisniega, A.; Vaquero, J. J., E-mail: [Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid ES28911 (Spain); Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007 (Spain); Desco, M. [Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid ES28911 (Spain); Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007 (Spain); Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid ES28029 (Spain)


    Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or inclusion of photon energy information into data processing. There is a variety of publicly available tools for estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range and/or anode material. For this reason the authors propose in this work a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. To match the experimentally measured exposure data the combined dataset required to apply a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W. The validation of the model against real acquired data showed errors in exposure and attenuation in

  6. Modification of the TASMIP x-ray spectral model for the simulation of microfocus x-ray sources

    International Nuclear Information System (INIS)

    Sisniega, A.; Vaquero, J. J.; Desco, M.


    Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or inclusion of photon energy information into data processing. There is a variety of publicly available tools for estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range and/or anode material. For this reason the authors propose in this work a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. To match the experimentally measured exposure data the combined dataset required to apply a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W. The validation of the model against real acquired data showed errors in exposure and attenuation in
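TASMIP-style models represent the photon fluence in each 1-keV energy bin as an interpolating polynomial in the tube voltage. A toy sketch of that idea (the coefficients below are invented, not the published TASMIP values):

```python
import numpy as np

# One row of polynomial coefficients (constant, linear, quadratic in kVp)
# per 1-keV energy bin; values are illustrative only.
coeffs = np.array([
    [0.0, 0.02, 0.0010],  # bin at 20 keV
    [0.0, 0.05, 0.0020],  # bin at 21 keV
    [0.0, 0.03, 0.0015],  # bin at 22 keV
])

def spectrum(kvp):
    """Evaluate the per-bin polynomials at a given tube voltage."""
    powers = np.array([1.0, kvp, kvp ** 2])
    s = coeffs @ powers
    # Negative polynomial values are clipped; a real model additionally
    # zeroes all bins above the peak energy (kVp).
    return np.clip(s, 0.0, None)
```

Fitting such coefficients against measured exposure data is, in essence, the calibration step the authors describe for the microfocus version of the model.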

  7. A novel horizontal to vertical spectral ratio approach in a wired structural health monitoring system


    F. P. Pentaris


    This work studies the effect ambient seismic noise can have on building constructions, in comparison with the traditional study of strong seismic motion in buildings, for the purpose of structural health monitoring. Traditionally, engineers have observed the effect of earthquakes on buildings by usage of seismometers at various levels. A new approach is proposed in which acceleration recordings of ambient seismic noise are used and horizontal to vertical spectra ratio (HVSR)...
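The HVSR statistic itself is straightforward to compute from three-component acceleration recordings; a minimal sketch using a windowed FFT (the geometric-mean combination of the two horizontal components is one common convention, assumed here):

```python
import numpy as np

def hvsr(north, east, vertical, fs):
    """Horizontal-to-vertical spectral ratio from three acceleration traces."""
    def amp(x):
        return np.abs(np.fft.rfft(x * np.hanning(len(x))))
    h = np.sqrt(amp(north) * amp(east))   # geometric mean of horizontals
    v = amp(vertical)
    freqs = np.fft.rfftfreq(len(north), d=1.0 / fs)
    return freqs[1:], h[1:] / v[1:]       # skip the DC bin

# Synthetic ambient noise with a horizontal resonance at 5 Hz.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.01)
n = rng.normal(size=t.size) + 3 * np.sin(2 * np.pi * 5 * t)
e = rng.normal(size=t.size) + 3 * np.sin(2 * np.pi * 5 * t)
z = rng.normal(size=t.size)
f, ratio = hvsr(n, e, z, fs=100.0)
print(f[np.argmax(ratio)])  # peak near 5 Hz
```

In practice the ratio is averaged over many windows, and the peak frequency is read off as the resonance of the site or structure.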

  8. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    International Nuclear Information System (INIS)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S.; Shirley, Eric L.; Prendergast, David


    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  9. A new approach to the spectral analysis of liquid membrane oscillators by Gábor transformation

    DEFF Research Database (Denmark)

    Płocharska-Jankowska, E.; Szpakowska, M.; Mátéfi-Tempfli, Stefan


    Liquid membrane oscillators very frequently have an irregular oscillatory behavior. Fourier transformation cannot be used for these nonstationary oscillations to establish their power spectra. This important point seems to be overlooked in the field of chemical oscillators. A new approach is presented here based on Gábor transformation, allowing one to obtain power spectra of any kind of oscillations that can be met experimentally. The proposed Gábor analysis is applied to a liquid membrane oscillator containing a cationic surfactant. It was found that the power spectra are strongly influenced...
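The Gábor transformation is, in essence, Fourier analysis through a sliding Gaussian window, which is what makes it applicable to nonstationary oscillations. A minimal sketch (window length, step, and the test signal are illustrative):

```python
import numpy as np

def gabor_power(signal, fs, window_s=10.0, step_s=1.0):
    """Power spectra of successive Gaussian-windowed segments of a signal,
    suitable for nonstationary oscillations where a plain FFT is not."""
    n_win = int(window_s * fs)
    step = int(step_s * fs)
    t = np.arange(n_win)
    gauss = np.exp(-0.5 * ((t - n_win / 2) / (n_win / 6)) ** 2)
    spectra = []
    for start in range(0, len(signal) - n_win + 1, step):
        seg = signal[start:start + n_win] * gauss
        spectra.append(np.abs(np.fft.rfft(seg)) ** 2)
    freqs = np.fft.rfftfreq(n_win, 1.0 / fs)
    return freqs, np.array(spectra)

# An oscillator whose frequency drifts upward over 100 s: the dominant
# frequency of early and late windows differs, which a global FFT would blur.
fs = 20.0
t = np.arange(0, 100, 1 / fs)
x = np.sin(2 * np.pi * (0.5 + 0.01 * t) * t)
freqs, spec = gabor_power(x, fs)
first = freqs[np.argmax(spec[0])]
last = freqs[np.argmax(spec[-1])]
```

Stacking the rows of `spec` over time gives the time-frequency picture from which irregular oscillatory regimes can be read off.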

  10. Cross spectral, active and passive approach to face recognition for improved performance (United States)

    Grudzien, A.; Kowalski, M.; Szustakowski, M.


    Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Since the characteristics used are unique, biometrics can create a direct link between a person and an identity. The human face is one of the most important biometric modalities for automatic authentication. The most popular method of face recognition, which relies on processing visual information, seems to be imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach combining both methods.

  11. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail:; Eisenbach, M.; Burress, T.A.


    Highlights: • Developed a new scaling technique for the dipole–dipole interaction energy. • Developed a new scaling technique for the exchange interaction energy. • Used the scaling laws to extend atomistic simulations to the micrometer length scale. • Demonstrated the transition from a mono-domain to a vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
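The Monte Carlo machinery underlying such simulations is the Metropolis acceptance rule applied to trial spin flips. A minimal one-dimensional sketch (the paper's scaled exchange and dipole-dipole terms would replace the toy nearest-neighbour energy used here; all parameters are illustrative):

```python
import math
import random

random.seed(1)

# Chain of classical Ising-like spins with nearest-neighbour exchange J.
N, J, kT = 100, 1.0, 0.5
spins = [1] * N

def site_energy(i):
    """Exchange energy of spin i with its periodic neighbours."""
    left, right = spins[(i - 1) % N], spins[(i + 1) % N]
    return -J * spins[i] * (left + right)

for sweep in range(200):
    for i in range(N):
        dE = -2 * site_energy(i)  # energy change if spin i is flipped
        # Metropolis rule: always accept downhill moves, accept uphill
        # moves with Boltzmann probability exp(-dE / kT).
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            spins[i] = -spins[i]

m = abs(sum(spins)) / N  # magnetisation per spin
```

Replacing the energy function with the scaled exchange plus dipolar energy, and the scalar spins with three-component moments, gives the structure of the mesoscale simulations described in the record.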

  12. A unified approach for suppressing sidelobes arising in the spectral response of rugate filters

    International Nuclear Information System (INIS)

    Abo-Zahhad, M.; Bataineh, M.


    This paper suggests a universal approach to reduce the sidelobes which usually appear at both sides of a stop band of a rugate filter. Both quintic matching layers and apodization functions are used to improve the filter's response. The proposed technique can be used to control the ripple level by properly choosing the refractive index profile, after amending it to include matching layers and/or modulating it with a slowly varying apodization (or tapering) function. Two illustrative examples are given to demonstrate the robustness of the proposed technique. The given examples suggest that combining both effects on the index of refraction profile leads to the lowest possible ripple level. A multichannel filter response is obtained by wavelet construction of the refractive index profile, with potential applications in multimode lasers and wavelength division multiplexing networks. The obtained results demonstrate the applicability of the adopted approach to design ripple-free rugate filters. The extension to stack filters and other waveguiding structures is also possible. (authors). 14 refs., 8 figs
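An apodized rugate refractive-index profile of the kind described can be generated in a few lines; the average index, modulation amplitude, period, and envelope shape below are illustrative, not the paper's design values:

```python
import numpy as np

# Sinusoidal rugate index profile with a slowly varying apodization
# envelope that tapers the modulation to zero at both ends, which is
# what suppresses the stop-band sidelobes.
n_avg, n_amp, period_um = 1.8, 0.1, 0.2
depth = np.linspace(0.0, 10.0, 2000)               # optical thickness (um)

envelope = np.sin(np.pi * depth / depth[-1]) ** 2  # taper to zero at the ends
index = n_avg + n_amp * envelope * np.sin(2 * np.pi * depth / period_um)
```

The unmodulated ends play the role of matching layers to the ambient index; in the paper, dedicated quintic matching layers serve that purpose more effectively.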

  13. Fixed target combined with spectral mapping: approaching 100% hit rates for serial crystallography. (United States)

    Oghbaey, Saeed; Sarracini, Antoine; Ginn, Helen M; Pare-Labrosse, Olivier; Kuo, Anling; Marx, Alexander; Epp, Sascha W; Sherrell, Darren A; Eger, Bryan T; Zhong, Yinpeng; Loch, Rolf; Mariani, Valerio; Alonso-Mori, Roberto; Nelson, Silke; Lemke, Henrik T; Owen, Robin L; Pearson, Arwen R; Stuart, David I; Ernst, Oliver P; Mueller-Werkmeister, Henrike M; Miller, R J Dwayne


    The advent of ultrafast highly brilliant coherent X-ray free-electron laser sources has driven the development of novel structure-determination approaches for proteins, and promises visualization of protein dynamics on sub-picosecond timescales with full atomic resolution. Significant efforts are being applied to the development of sample-delivery systems that allow these unique sources to be most efficiently exploited for high-throughput serial femtosecond crystallography. Here, the next iteration of a fixed-target crystallography chip designed for rapid and reliable delivery of up to 11 259 protein crystals with high spatial precision is presented. An experimental scheme for predetermining the positions of crystals in the chip by means of in situ spectroscopy using a fiducial system for rapid, precise alignment and registration of the crystal positions is presented. This delivers unprecedented performance in serial crystallography experiments at room temperature under atmospheric pressure, giving a raw hit rate approaching 100% with an effective indexing rate of approximately 50%, increasing the efficiency of beam usage and allowing the method to be applied to systems where the number of crystals is limited.

  14. Numerical Simulation of Incremental Sheet Forming by Simplified Approach (United States)

    Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.


    The Incremental Sheet Forming (ISF) process can transform a flat metal sheet into a complex 3D part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results, but are time consuming. A Simplified Approach for the contact management between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.
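The Simplified Approach replaces the contact algorithm with directly imposed nodal displacements. A geometric sketch of that projection step (node coordinates and tool dimensions are invented; a real implementation would feed the imposed displacements back into the FE solver):

```python
import numpy as np

def impose_tool_contact(nodes, tool_centre, tool_radius):
    """Simplified ISF contact: nodes found inside the hemispherical tool are
    projected back onto the tool surface, i.e. their displacement is imposed
    directly instead of being resolved by a contact algorithm."""
    nodes = nodes.copy()
    d = nodes - tool_centre
    dist = np.linalg.norm(d, axis=1)
    inside = dist < tool_radius
    nodes[inside] = tool_centre + d[inside] * (tool_radius / dist[inside])[:, None]
    return nodes

# Two blank nodes: the first penetrates the tool sphere, the second does not.
nodes = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
tool = np.array([0.0, 0.0, 2.0])
out = impose_tool_contact(nodes, tool, tool_radius=3.0)
```

At each tool position along the path, only the penetrating nodes receive imposed displacements; the rest of the mesh deforms through the ordinary equilibrium solve.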

  15. Amp: A modular approach to machine learning in atomistic simulations (United States)

    Khorshidi, Alireza; Peterson, Andrew A.


    Electronic structure calculations, such as those employing Kohn-Sham density functional theory or ab initio wavefunction theories, have allowed for atomistic-level understanding of a wide variety of phenomena and properties of matter at small scales. However, the computational cost of electronic structure methods drastically increases with length and time scales, which makes these methods difficult to apply to long time-scale molecular dynamics simulations or large systems. Machine-learning techniques can provide accurate potentials that match the quality of electronic structure calculations, provided sufficient training data. These potentials can then be used to rapidly simulate large, long time-scale phenomena at a quality similar to the parent electronic structure approach. Machine-learning potentials usually take a bias-free mathematical form and can be readily developed for a wide variety of systems. Electronic structure calculations have favorable properties, namely that they are noiseless and that targeted training data can be produced on demand, which make them particularly well-suited for machine learning. This paper discusses our modular approach to atomistic machine learning through the development of the open-source Atomistic Machine-learning Package (Amp), which allows for representations of both the total and atom-centered potential energy surface, in both periodic and non-periodic systems. Potentials developed through the atom-centered approach are simultaneously applicable to systems of various sizes. Interpolation can be enhanced by introducing custom descriptors of the local environment. We demonstrate this in the current work for Gaussian-type, bispectrum, and Zernike-type descriptors. Amp has an intuitive and modular structure with an interface through the Python scripting language, yet has parallelizable Fortran components for demanding tasks; it is designed to integrate closely with the widely used Atomic Simulation Environment (ASE), which
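A Gaussian-type atom-centred descriptor of the kind mentioned above can be sketched as follows. This is a generic radial fingerprint with a cosine cutoff, written from scratch for illustration, not Amp's exact implementation:

```python
import numpy as np

def radial_fingerprint(positions, centre_idx, etas, r_cut=6.0):
    """Gaussian-type radial descriptor of one atom's local environment:
    a sum of Gaussians of the neighbour distances, smoothly cut off at r_cut,
    evaluated at several widths eta."""
    centre = positions[centre_idx]
    others = np.delete(positions, centre_idx, axis=0)
    r = np.linalg.norm(others - centre, axis=1)
    r = r[r < r_cut]
    cutoff = 0.5 * (np.cos(np.pi * r / r_cut) + 1.0)  # smooth cutoff function
    return np.array([np.sum(np.exp(-eta * r ** 2) * cutoff) for eta in etas])

# Toy cluster: two near neighbours and one atom beyond the cutoff.
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 2.0, 0.0], [8.0, 0.0, 0.0]])
fp = radial_fingerprint(pos, 0, etas=[0.05, 0.5, 4.0])
```

Fingerprints of this form are the inputs to the regression model (for example a neural network) that maps local environments to atomic energy contributions.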

  16. Spectral Counting Approach to Measure Selectivity of High-Resolution LC-MS Methods for Environmental Analysis. (United States)

    Renaud, Justin B; Sabourin, Lyne; Topp, Edward; Sumarah, Mark W


    Advances in high-resolution mass spectrometers have allowed for the development of nontargeted screening methods, where data sets can be archived and retrospectively mined as new environmental contaminants are identified. We have developed a spectral counting approach to calculate the selectivities of LC-MS acquisition modes, taking mass accuracy, sample matrix, and analyte properties into account. The selectivities of high-resolution MS (HRMS) alone or in combination with all-ion-fragmentation (AIF), data-independent-acquisition (DIA), and data-dependent-acquisition (DDA) modes, performed on a Q-Exactive Orbitrap, were compared by retrospectively screening surface water samples for 95 pharmaceuticals. Samples were reanalyzed using targeted LC-MS/MS to confirm the accuracy of each acquisition method and to quantitate the 29 putatively detected drugs. LC-HRMS provided the lowest calculated selectivities and accordingly produced the highest number of false positives (6). In contrast, DDA provided the highest selectivities, yielding only one false positive; however, it was biased toward the most intense signals, resulting in the detection of only 10 compounds. AIF had lower selectivities than traditional LC-MS/MS, produced one false positive and did not detect 6 confirmed compounds. Because of the high-quality archived data, DIA selectivities were better than traditional LC-MS/MS, showed no bias toward the most intense signals, achieved low limits of detection, and confidently detected the greatest number of pharmaceuticals (22) with only one false positive. This spectral counting method can be used across different instrument platforms or samples and provides a robust and empirical estimation of selectivities to give more confident detection of trace analytes.
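The core of a spectral-counting selectivity estimate is tallying how many background candidates fall within the instrument's mass tolerance of a target ion; the fewer the candidates, the more selective the acquisition mode. A minimal sketch (the masses, tolerance, and the reciprocal-count selectivity measure are illustrative, not the paper's exact formulation):

```python
def matches_within(target_mz, background_mz, ppm=5.0):
    """Background m/z values falling within a ppm tolerance of the target."""
    tol = target_mz * ppm * 1e-6
    return [m for m in background_mz if abs(m - target_mz) <= tol]

# Hypothetical background candidate masses from a sample matrix.
background = [285.0790, 285.0801, 285.0927, 286.1502, 290.2001]
hits = matches_within(285.0795, background, ppm=5.0)

# Fewer background matches -> higher selectivity of the acquisition mode.
selectivity = 1.0 / max(len(hits), 1)
```

Fragmentation modes (DIA, DDA) gain selectivity in this picture because a match must satisfy the tolerance simultaneously for precursor and fragment masses, multiplying the constraints.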

  17. Thermography-based blood flow imaging in human skin of the hands and feet: a spectral filtering approach. (United States)

    Sagaidachnyi, A A; Fomin, A V; Usanov, D A; Skripal, A V


    The determination of the relationship between skin blood flow and skin temperature dynamics is the main problem in thermography-based blood flow imaging. Oscillations in skin blood flow are the source of thermal waves propagating from micro-vessels toward the skin's surface, as assumed in this study. This hypothesis allows us to use equations for the attenuation and dispersion of thermal waves for converting the temperature signal into the blood flow signal, and vice versa. We developed a spectral filtering approach (SFA), which is a new technique for thermography-based blood flow imaging. In contrast to other processing techniques, the SFA implies calculations in the spectral domain rather than in the time domain. Therefore, it eliminates the need to solve differential equations. The developed technique was verified within 0.005-0.1 Hz, including the endothelial, neurogenic and myogenic frequency bands of blood flow oscillations. The algorithm for an inverse conversion of the blood flow signal into the skin temperature signal is addressed. Examples of blood flow imaging of the hands during cuff occlusion and of the feet during heating of the back are illustrated. The processing of infrared (IR) thermograms using the SFA allowed us to restore the blood flow signals and achieve correlations of about 0.8 with the waveform of a photoplethysmographic signal. Prospective applications of the thermography-based blood flow imaging technique include non-contact monitoring of the blood supply during engraftment of skin flaps and burn healing, as well as the use of contact temperature sensors to monitor low-frequency oscillations of peripheral blood flow.
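Because the attenuation and phase lag of a thermal wave are simple functions of frequency, the conversion between blood flow and surface temperature can be done entirely in the spectral domain by multiplying or dividing by a transfer function. A sketch under that assumption (the diffusivity, depth, and the exact form of the transfer function below are illustrative, not the paper's calibrated values):

```python
import numpy as np

# A blood-flow oscillation at depth z reaches the surface attenuated and
# delayed like a thermal wave with wavenumber k = sqrt(omega / (2 * alpha)).
alpha, z, fs = 1e-7, 0.5e-3, 1.0            # m^2/s, m, Hz (illustrative)
t = np.arange(0, 2000, 1.0 / fs)
flow = np.sin(2 * np.pi * 0.02 * t)          # 0.02 Hz flow oscillation

def transfer(freqs):
    k = np.sqrt(np.pi * np.abs(freqs) / alpha)   # sqrt(omega / (2 alpha))
    h = np.exp(-(1 + 1j) * z * k)                # attenuation + phase lag
    h[freqs == 0] = 1.0
    return h

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
# Forward model: flow -> attenuated, delayed surface temperature signal.
temp_surface = np.fft.irfft(np.fft.rfft(flow) * transfer(freqs), n=t.size)
# Spectral filtering: divide out the transfer function to restore the flow.
flow_restored = np.fft.irfft(np.fft.rfft(temp_surface) / transfer(freqs), n=t.size)
```

In real data the inverse step must be band-limited (here 0.005-0.1 Hz), since the division amplifies noise at frequencies where the thermal wave is strongly attenuated.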

  18. An approach to developing an integrated pyro processing simulator

    International Nuclear Information System (INIS)

    Lee, H.J.; Ko, I.W.; Choi, S.Y.; Kim, S.K.; Kim, I.T.; Lee, H.S.


    Full-text: Pyro processing has been studied for a decade as one of the promising fuel recycling options in Korea. We have built a pyro processing integrated inactive demonstration facility (PRIDE) to assess the feasibility of integrated pyro processing technology and the scale-up issues of the processing equipment. Even though such a facility cannot replace a real integrated facility using spent nuclear fuel (SF), many insights can be obtained from the world's largest integrated pyro processing operation. In order to complement or overcome such limited test-based research, a pyro processing modelling and simulation study began in 2011. The Korea Atomic Energy Research Institute (KAERI) suggested a modelling architecture for the development of a multi-purpose pyro processing simulator consisting of three tiers of models: unit process, operation, and plant-level. The unit process model can be addressed using governing equations or empirical equations as a continuous system (CS). In contrast, the operation model describes the operational behaviors as a discrete event system (DES). The plant-level model is an integrated model of the unit process and operation models with various analysis modules. An interface with different systems, the incorporation of different codes, a process-centered database design, and a dynamic material flow are discussed as necessary components for building a framework of the plant-level model. By thoroughly reviewing a sample model that contains methods addressing the above engineering issues, the architecture for building the plant-level model was verified. By analyzing a combined process and operation model, we showed that the suggested approach is effective for comprehensively understanding an integrated dynamic material flow. This paper addresses the current status of the pyro processing modelling and simulation activity at KAERI, and also predicts its path forward. (author)
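The operation layer described as a discrete event system can be sketched with a priority-queue event loop; the process names, times, and the single triggered follow-up event below are invented for illustration:

```python
import heapq

def run(events, horizon):
    """Minimal discrete-event core: (time, name) events processed in time
    order; handling an event may schedule further events."""
    log = []
    heapq.heapify(events)
    while events:
        time, name = heapq.heappop(events)
        if time > horizon:
            break
        log.append((time, name))
        if name == "batch_loaded":  # loading a batch triggers processing
            heapq.heappush(events, (time + 4.0, "electroreduction_done"))
    return log

log = run([(0.0, "batch_loaded"), (2.0, "batch_loaded")], horizon=10.0)
```

In the architecture described above, handlers like these would call into the continuous unit-process models, while the plant-level model aggregates the resulting material flow.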


    Energy Technology Data Exchange (ETDEWEB)

    R. L. Williamson; J. D. Hales; S. R. Novascone; M. R. Tonks; D. R. Gaston; C. J. Permann; D. Andrs; R. C. Martineau


    Important aspects of fuel rod behavior, for example pellet-clad mechanical interaction (PCMI), fuel fracture, oxide formation, non-axisymmetric cooling, and response to fuel manufacturing defects, are inherently multidimensional in addition to being complicated multiphysics problems. Many current modeling tools are strictly 2D axisymmetric or even 1.5D. This paper outlines the capabilities of a new fuel modeling tool able to analyze either 2D axisymmetric or fully 3D models. These capabilities include temperature-dependent thermal conductivity of fuel; swelling and densification; fuel creep; pellet fracture; fission gas release; cladding creep; irradiation growth; and gap mechanics (contact and gap heat transfer). The need for multiphysics, multidimensional modeling is then demonstrated through a discussion of results for a set of example problems. The first, a 10-pellet rodlet, demonstrates the viability of the solution method employed. This example highlights the effect of our smeared cracking model and also shows the multidimensional nature of discrete fuel pellet modeling. The second example relies on our multidimensional, multiphysics approach to analyze a missing pellet surface problem. As a final example, we show a lower-length-scale simulation coupled to a continuum-scale simulation.

  20. Simulation of IRIS 2010 missile experiments for validation of integral simulation approach

    International Nuclear Information System (INIS)

    Siefert, Alexander; Henkel, Fritz-Otto


    Conclusion: The material model and modelling approach used show acceptable results in comparison with test data, but further improvements are possible. Tri-axial test: the material model must be improved to capture the higher strain values for tests with confining pressure. Possible solution: defining separate damage curves for different confining pressures. Flexural test: the model approach has to be improved regarding the swing-back phase. Possible first step: investigation of crack closing and tensional recovery. Punching test: the challenge for this simulation is element erosion. Solution: defining a reliable deletion criterion is possible by averaging several case studies. An alternative is the application of the SPH method. In general: material properties showed differences to code definitions. Therefore, test data are a required input for a detailed analysis of local damage (especially for existing structures). Microscopic cracking cannot be investigated using a homogeneous material

  1. An Exploration of the Triplet Periodicity in Nucleotide Sequences with a Mature Self-Adaptive Spectral Rotation Approach

    Directory of Open Access Journals (Sweden)

    Bo Chen


    Previously, for predicting coding regions in nucleotide sequences, a self-adaptive spectral rotation (SASR) method was developed, based on a universal statistical feature of coding regions named triplet periodicity (TP). It outputs a random walk, the TP walk, in the complex plane for the query sequence. Each step in the walk corresponds to a position in the sequence and is generated from a long-term statistic of the TP in the sequence. The coding regions (TP intensive) are then visually discriminated from the noncoding ones (without TP) in the TP walk. In this paper, the behavior of such walks for random nucleotide sequences is further investigated qualitatively. A slightly leftward trend (a negative noise) in these walks is observed, which was not reported in the previous SASR literature. An improved SASR, named the mature SASR, is proposed in order to eliminate the noise and correct the TP walks. Furthermore, a potential sequence pattern opposite to the TP persistent pattern, the TP antipersistent pattern, is explored. The applications of the algorithms on simulated datasets show their capabilities in detecting such a potential sequence pattern.
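The TP walk idea, accumulating unit steps whose direction encodes the codon phase at which a base occurs, can be sketched as follows. The step statistic here is deliberately simplified relative to the published SASR, and the two test sequences are invented:

```python
import cmath

def tp_walk(seq, probe="G"):
    """Cumulative complex walk: each occurrence of the probe base adds a unit
    step whose direction is the codon phase (i mod 3) as a third of a turn."""
    walk, pos = [0j], 0j
    for i, base in enumerate(seq):
        if base == probe:
            pos += cmath.exp(2j * cmath.pi * (i % 3) / 3)
        walk.append(pos)
    return walk

# Probe base locked to one codon phase -> persistent drift (TP intensive).
coding_like = "GAAGATGCA" * 30
# Probe base spread over the phases -> much weaker net displacement.
random_like = "GATAGGCTAAGC" * 20
w = tp_walk(coding_like)
```

Plotting the walk in the complex plane makes TP-intensive stretches visible as straight drifting segments, which is the visual discrimination the record describes.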

  2. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim


    Full Text Available Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined, because analysis of the system dynamics might be required. The selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis based on modal analysis of a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step obtained with the proposed approach can reduce the computational burden while still achieving accurate simulation responses. The performance of the proposed method is demonstrated with the GSO 37-bus system.
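The stability argument behind subinterval selection can be seen with a scalar stand-in for a fast mode (the mode rate, step size, and safety factor below are illustrative, not from the paper): forward-Euler integration of x' = -λx is stable only for steps below 2/λ, so any mode faster than the main step covers must be integrated over subintervals.

```python
import math

def euler(lam, h, steps, x0=1.0):
    """Forward-Euler integration of the test mode x' = -lam * x."""
    x = x0
    for _ in range(steps):
        x += h * (-lam * x)
    return x

lam_fast = 50.0            # fast local mode found by modal analysis (1/s)
H = 0.1                    # main integration time step (s)
# forward Euler is stable for h < 2/lam; pick the subinterval count with
# a safety factor of 2 so that h_sub <= 1/lam
k = math.ceil(2.0 * H * lam_fast)
x_full = euler(lam_fast, H, 10)            # H alone: numerically unstable
x_sub = euler(lam_fast, H / k, 10 * k)     # subintervals: stable decay
```

Choosing k from the identified mode frequency, rather than by trial and error, is the essence of the proposed selection approach.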

  3. On the spectrum of the Laplace operator of metric graphs attached at a vertex-spectral determinant approach

    International Nuclear Information System (INIS)

    Texier, Christophe


    We consider a metric graph G made of two graphs G1 and G2 attached at one point. We derive a formula expressing the spectral determinant of the Laplace operator, S_G(γ) = det(γ − Δ), in terms of the spectral determinants of the two subgraphs. The result is generalized to describe the attachment of n graphs. The formulae are also valid for the spectral determinant of the Schrödinger operator, det(γ − Δ + V(x)).

  4. Mapped Chebyshev Pseudo-Spectral Method for Simulating the Shear Wave Propagation in the Plane of Symmetry of a Transversely Isotropic Viscoelastic Medium (United States)

    Qiang, Bo; Brigham, John C.; McGough, Robert J.; Greenleaf, James F.; Urban, Matthew W.


    Shear wave elastography is a versatile technique that is being applied to many organs. However, in tissues that exhibit anisotropic material properties, special care must be taken to estimate shear wave propagation accurately and efficiently. A two-dimensional simulation method is implemented to simulate shear wave propagation in the plane of symmetry of transversely isotropic viscoelastic media. The method uses a mapped Chebyshev pseudo-spectral method to calculate the spatial derivatives and an Adams-Bashforth-Moulton integrator with variable step sizes for time marching. The boundaries of the two-dimensional domain are surrounded by perfectly matched layers (PML) to approximate an infinite domain and minimize reflection errors. In an earlier work, we proposed a solution for estimating the apparent shear wave elasticity and viscosity from the spatial group velocity as a function of rotation angle, through a low-frequency approximation by a Taylor expansion. With the solver implemented in MATLAB, the simulated results in this paper match well with the theory. Compared to the finite element method (FEM) simulations we used before, the pseudo-spectral solver consumes less memory, is faster, and achieves better accuracy. PMID:27221812
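The core spatial operator of such a solver can be sketched with the standard (unmapped) Chebyshev differentiation matrix; the coordinate mapping, viscoelastic constitutive model, PML, and time integrator of the paper are omitted here.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    x_j = cos(pi*j/N); the standard spectral collocation construction."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # "negative sum trick" for the diagonal
    return D, x

D, x = cheb(16)
u = np.sin(np.pi * x)
err = np.max(np.abs(D @ u - np.pi * np.cos(np.pi * x)))
# err is tiny even at N = 16: spectral (exponential) convergence
```

This exponential accuracy on few grid points is why the pseudo-spectral solver needs less memory than an FEM discretization of comparable accuracy.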

  5. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise (United States)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno


    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain produce per-material projection mass densities (PMDs). From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is performed by minimizing a cost function; a variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing the data, which is why a new data fidelity term is used in this paper to account for the photon noise. Two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissue and bone are decomposed in the projection domain; a tomographic reconstruction then creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
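The two fidelity terms can be compared on a toy one-parameter problem; the scalar forward model and grid search below stand in for the paper's regularized Gauss-Newton decomposition and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(1.0, 2.0, 50)
a_true = 3.0
y = rng.poisson(a_true * t)          # low photon counts -> Poisson noise

def wls(a):
    """Weighted least squares: the Gaussian-noise data fidelity term."""
    m = a * t
    return np.sum((y - m) ** 2 / np.maximum(y, 1.0))

def kl(a):
    """Kullback-Leibler divergence: the Poisson-noise data fidelity term
    (constants in y dropped; the y = 0 terms reduce to m)."""
    m = a * t
    return np.sum(m - y + y * np.log(np.maximum(y, 1.0) / m))

grid = np.linspace(0.1, 10.0, 2000)
a_wls = grid[np.argmin([wls(a) for a in grid])]
a_kl = grid[np.argmin([kl(a) for a in grid])]
```

For Poisson data the KL minimizer is the maximum-likelihood estimate, which is why it behaves better than WLS when counts are low.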

  6. Detecting deforestation with a spectral change detection approach using multitemporal Landsat data: a case study of Kinabalu Park, Sabah, Malaysia. (United States)

    Phua, Mui-How; Tsuyuki, Satoshi; Furuya, Naoyuki; Lee, Jung Soo


    Tropical deforestation is occurring at an alarming rate, threatening the ecological integrity of protected areas. This makes it vital to regularly assess protected areas to confirm the efficacy of measures that protect them from clearing. Satellite remote sensing offers a systematic and objective means for detecting and monitoring deforestation. This paper examines a spectral change approach to detect deforestation using pattern decomposition (PD) coefficients from multitemporal Landsat data. Our results show that the PD coefficients for soil and vegetation can be used to detect deforestation using change vector analysis (CVA). The analysis demonstrates that deforestation in the Kinabalu area, Sabah, Malaysia has slowed significantly, from 1.2% in period 1 (1973-1991) to 0.1% in period 2 (1991-1996). A comparison of deforestation inside and outside Kinabalu Park highlights the effectiveness of the park in protecting the tropical forest against clearing. However, the park still faces pressure from the area immediately surrounding it (the 1 km buffer zone), where the deforestation rate has remained unchanged.
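Change vector analysis reduces to a magnitude threshold and a direction test on per-pixel differences of the decomposition coefficients. A minimal sketch (the two-band layout, coefficient values, and threshold are illustrative, not from the paper):

```python
import numpy as np

def change_vector_analysis(t1, t2, threshold):
    """Per-pixel change vectors between two dates.  Inputs are
    (rows, cols, 2) arrays holding [soil, vegetation] coefficients."""
    dv = t2.astype(float) - t1.astype(float)
    magnitude = np.sqrt((dv ** 2).sum(axis=-1))                 # how much change
    direction = np.degrees(np.arctan2(dv[..., 1], dv[..., 0]))  # what kind
    return magnitude > threshold, direction

t1 = np.dstack([np.full((2, 2), 0.2), np.full((2, 2), 0.8)])   # soil, veg
t2 = t1.copy()
t2[0, 0] = [0.7, 0.2]       # cleared pixel: soil rises, vegetation drops
changed, direction = change_vector_analysis(t1, t2, threshold=0.3)
# deforestation appears as a large change vector in the soil-up /
# vegetation-down quadrant, i.e. direction between -90 and 0 degrees
```

Counting flagged pixels inside and outside a park boundary over two date pairs is then enough to produce per-period deforestation rates of the kind reported above.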

  7. A Robust Concurrent Approach for Road Extraction and Urbanization Monitoring Based on Superpixels Acquired from Spectral Remote Sensing Images (United States)

    Seppke, Benjamin; Dreschler-Fischer, Leonie; Wilms, Christian


    The extraction of road signatures from remote sensing images, a promising indicator of urbanization, is a classical segmentation problem. However, segmentation algorithms often yield insufficient results. One way to overcome this problem is the use of superpixels, which represent locally coherent clusters of connected pixels. Superpixels allow flexible, highly adaptive segmentation approaches, since they can be merged or split to form new basic image entities. On the other hand, superpixels require an appropriate representation containing all relevant information about topology and geometry to maximize their advantages. In this work, we present a combined geometric and topological representation based on a special graph representation, the so-called RS-graph. Moreover, we demonstrate the use of the RS-graph in a case study: the extraction of partially occluded road networks in rural areas from open-source (spectral) remote sensing images by tracking. In addition, multiprocessing and GPU-based parallelization are used to speed up both the construction of the representation and the application.

  8. Simulation and analysis of grating-integrated quantum dot infrared detectors for spectral response control and performance enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Oh Kim, Jun [Center for High Technology Materials, University of New Mexico, Albuquerque, New Mexico 87106 (United States); Division of Industrial Metrology, Korea Research Institute of Standards and Science, Daejeon 305-340 (Korea, Republic of); Ku, Zahyun; Urbas, Augustine [Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433 (United States); Krishna, Sanjay [Center for High Technology Materials, University of New Mexico, Albuquerque, New Mexico 87106 (United States); Kang, Sang-Woo; Jun Lee, Sang [Division of Industrial Metrology, Korea Research Institute of Standards and Science, Daejeon 305-340 (Korea, Republic of); Chul Jun, Young [Department of Physics, Inha University, Incheon 402-751 (Korea, Republic of)


    We propose and analyze a novel detector structure for pixel-level multispectral infrared imaging. More specifically, we investigate the device performance of a grating-integrated quantum dots-in-a-well photodetector under backside illumination. Our design uses one-dimensional grating patterns fabricated directly on a semiconductor contact layer and thus adds a minimal amount of effort to conventional detector fabrication flows. We show that wide-range control of the spectral response, as well as a large overall detection enhancement, can be gained by adjusting the grating parameters. For small grating periods, the spectral responsivity changes gradually with the parameters; we explain this spectral tuning using the Fabry-Perot resonance and effective medium theory. For larger grating periods, the responsivity spectra become more complicated due to increased diffraction into the active region, but we find that a large enhancement of the overall detector performance can be obtained. In our design, the spectral tuning range can be larger than 1 μm and, compared to the unpatterned detector, the detection enhancement can be greater than 92% and 148% for parallel and perpendicular polarizations, respectively. Our work can pave the way for practical, easy-to-fabricate detectors that are highly useful for many infrared imaging applications.
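For the small-period regime, the Fabry-Perot/effective-medium picture invoked above can be reproduced in a few lines. The refractive indices, fill factor, and layer thickness below are illustrative assumptions, not the paper's device values.

```python
import numpy as np

def effective_indices(n_ridge, n_groove, fill):
    """Zeroth-order effective-medium indices of a lamellar grating for
    fields parallel (TE) and perpendicular (TM) to the grating lines."""
    n_te = np.sqrt(fill * n_ridge**2 + (1.0 - fill) * n_groove**2)
    n_tm = 1.0 / np.sqrt(fill / n_ridge**2 + (1.0 - fill) / n_groove**2)
    return n_te, n_tm

def fabry_perot_peaks(n_eff, thickness_um, orders):
    """Etalon resonances of the effective grating layer: 2*n*d = m*lambda."""
    return [2.0 * n_eff * thickness_um / m for m in orders]

# illustrative values: high-index semiconductor ridges in air, 50% fill
n_te, n_tm = effective_indices(3.3, 1.0, fill=0.5)
peaks_te = fabry_perot_peaks(n_te, thickness_um=2.0, orders=[1, 2, 3])
# changing the fill factor or depth shifts the resonances, and the TE/TM
# index difference explains the polarization dependence of the response
```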

  9. Approaches to simulate impact damages on aeronautical composite structures (United States)

    Sanga, R. P. Lemanle; Garnier, C.; Pantalé, O.


    Impact damage is one of the most critical aggressions for composite structures in aeronautical applications, so the consequences of high/low velocity and high/low energy impacts are very important to investigate. It is usually admitted that the most critical configuration is Barely Visible Impact Damage (BVID), with an impact energy of about 25 J, where internal damage that is invisible on the impacted surface of the specimen drastically reduces the residual properties of the impacted material. In this work we highlight, by finite element simulation, the damage initiation and propagation process and the size of the defects created by low velocity impact. Two approaches were developed: the first is the layup technique and the second is based on the cohesive element technique. Both techniques capture the ply damage through Hashin's criteria; the second additionally yields the delamination damage according to the Benzeggagh-Kenane criterion. These models are validated by comparison with experimental results.

  10. Performance analysis of bullet trajectory estimation: Approach, simulation, and experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ng, L.C.; Karr, T.J.


    This paper describes an approach to estimate a bullet's trajectory from a time sequence of angles-only observations from a high-speed camera, and analyzes its performance. The technique is based on fitting a ballistic model of a bullet in flight, along with unknown source location parameters, to a time series of angular observations. The theory is developed to precisely reconstruct, from firing range geometry, the actual bullet trajectory as it appeared on the focal plane array and in real space. A metric for measuring the effective trajectory track error is also presented. Detailed Monte-Carlo simulations assuming different bullet ranges, shot angles, camera frame rates, and angular noise show that the angular track error can be as small as 100 µrad for a 2 mrad/pixel sensor. It is also shown that if actual values of the bullet's ballistic parameters were available, the bullet's source location and angle-of-flight information could also be determined.
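The fitting idea can be sketched for straight-line flight observed by an angles-only camera (all numbers below are illustrative, and the ballistic model is deliberately trivial). Note that bearing angles alone constrain only the ratio of speed to range — consistent with the paper's point that known ballistic parameters are needed to pin down the absolute source location.

```python
import numpy as np

rng = np.random.default_rng(0)
R_true, v_true = 100.0, 800.0           # range (m) and bullet speed (m/s)
t = np.arange(100) / 2000.0             # 100 frames from a 2000 fps camera
theta = np.arctan(v_true * t / R_true)  # true bearing of the bullet
theta += rng.normal(0.0, 100e-6, t.size)   # 100 microrad angular noise

def cost(k):
    """Sum-of-squares misfit; only k = v/R is observable from bearings."""
    return np.sum((np.arctan(k * t) - theta) ** 2)

ks = np.linspace(4.0, 12.0, 4001)
k_hat = ks[np.argmin([cost(k) for k in ks])]   # expect about v/R = 8.0
```

Even at 100 µrad noise the ratio is recovered to a fraction of a percent, which is the regime of track errors quoted in the abstract.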

  11. Striving for Better Medical Education: the Simulation Approach. (United States)

    Sakakushev, Boris E; Marinov, Blagoi I; Stefanova, Penka P; Kostianev, Stefan St; Georgiou, Evangelos K


    Medical simulation is a rapidly expanding area within medical education due to advances in technology, significant reduction in training hours, and increased procedural complexity. Simulation training aims to enhance patient safety through improved technical competency and by eliminating human factors in a risk-free environment. It is particularly applicable to practical, procedure-oriented specialties. Simulation can be useful for novice trainees, experienced clinicians (e.g. for revalidation), and team building. It has become a cornerstone in the delivery of medical education, representing a paradigm shift in how doctors are educated and trained. Simulation must take a proactive position in the development of metric-based simulation curricula and the adoption of proficiency benchmarking definitions, and should not depend on the simulation platforms used. Conversely, ingraining of poor practice may occur in the absence of adequate supervision, and equipment malfunction during simulation can break the immersion and disrupt any learning that has occurred. Despite the presence of high technology, there is a substantial learning curve for both learners and facilitators. The technology of simulation continues to advance, offering devices capable of improved fidelity in virtual reality simulation, more sophisticated procedural practice, and advanced patient simulators. Simulation-based training has brought about paradigm shifts in the medical and surgical education arenas and ensured that the scope and impact of simulation will continue to broaden.

  12. An exploratory study on the aerosol height retrieval from OMI measurements of the 477 nm O2 - O2 spectral band using a neural network approach (United States)

    Chimot, Julien; Pepijn Veefkind, J.; Vlemmix, Tim; de Haan, Johan F.; Amiridis, Vassilis; Proestakis, Emmanouil; Marinou, Eleni; Levelt, Pieternel F.


    This paper presents an exploratory study on the aerosol layer height (ALH) retrieval from the OMI 477 nm O2-O2 spectral band. We have developed algorithms based on the multilayer perceptron (MLP) neural network (NN) approach and applied them to 3 years (2005-2007) of OMI cloud-free scenes over north-east Asia, collocated with the MODIS Aqua aerosol product. In addition to the importance of aerosol altitude for climate and air quality objectives, our long-term motivation is to evaluate the possibility of retrieving ALH for potential future improvements of trace gas retrievals (e.g. NO2, HCHO, SO2) from UV-visible air quality satellite measurements over scenes with high aerosol concentrations. This study presents a first step toward this long-term objective and evaluates, from a statistical point of view, an ensemble of OMI ALH retrievals over a 3-year period covering a large industrialized continental region. The ALH retrieval relies on the analysis of the O2-O2 slant column density (SCD) and requires accurate knowledge of the aerosol optical thickness τ. Using MODIS Aqua τ(550 nm) as prior information, absolute seasonal differences between the LIdar climatology of vertical Aerosol Structure for space-based lidar simulation (LIVAS) and average OMI ALH, over scenes with MODIS τ(550 nm) ≥ 1.0, are in the range of 260-800 m (assuming a single scattering albedo ω0 = 0.95) and 180-310 m (assuming ω0 = 0.9). OMI ALH retrievals depend on the assumed aerosol single scattering albedo (sensitivity up to 660 m) and the chosen surface albedo (variation of less than 200 m between OMLER and MODIS black-sky albedo). Scenes with τ ≤ 0.5 are expected to show overly large biases due to the small impact of particles on the O2-O2 SCD changes. In addition, the NN algorithms also enable aerosol optical thickness retrieval by exploring the OMI reflectance in the continuum. Comparisons with collocated MODIS Aqua show agreements between -0.02 ± 0.45 and -0.18 ± 0



    Tam, Bit-Shun


    This is a review of a coherent body of knowledge, which perhaps deserves the name of the geometric spectral theory of positive linear operators (in finite dimension), developed by this author and his co-author Hans Schneider (or S.F. Wu) over the past decade. The following topics are covered, among others: combinatorial spectral theory of nonnegative matrices, Collatz-Wielandt sets (or numbers) associated with a cone-preserving map, distinguished eigenvalues, cone-solvability theorems...

  14. A single-shot nonlinear autocorrelation approach for time-resolved physics in the vacuum ultraviolet spectral range

    International Nuclear Information System (INIS)

    Rompotis, Dimitrios


    In this work, a single-shot temporal metrology scheme operating in the vacuum-extreme ultraviolet spectral range has been designed and experimentally implemented. Utilizing an anti-collinear geometry, a second-order intensity autocorrelation measurement of a vacuum ultraviolet pulse can be performed by encoding temporal delay information on the beam propagation coordinate. An ion-imaging time-of-flight spectrometer offering micrometer resolution has been set up for this purpose. This instrument enables detection of a magnified image of the spatial distribution of ions generated exclusively by direct two-photon absorption in the combined counter-propagating pulse focus, and thus the second-order intensity autocorrelation measurement is obtained on a single-shot basis. Additionally, an intense VUV light source based on high-harmonic generation has been experimentally realized. It delivers intense sub-20 fs Ti:Sa fifth-harmonic pulses utilizing a loose-focusing geometry in a long Ar gas cell. The VUV pulses, centered at 161.8 nm, reach pulse energies of 1.1 μJ per pulse, while the corresponding pulse duration is measured with a second-order, fringe-resolved autocorrelation scheme to be 18 ± 1 fs on average. Non-resonant two-photon ionization of Kr and Xe and three-photon ionization of Ne verify the fifth-harmonic pulse intensity and indicate the feasibility of multi-photon VUV-pump/VUV-probe studies of ultrafast atomic and molecular dynamics. Finally, the extended functionality of the counter-propagating pulse metrology approach is demonstrated by a single-shot VUV-pump/VUV-probe experiment investigating the ultrafast dissociation dynamics of O2 excited in the Schumann-Runge continuum at 162 nm.

  15. A spectral expansion approach for geodetic slip inversion: implications for the downdip rupture limits of oceanic and continental megathrust earthquakes (United States)

    Xu, Xiaohua; Sandwell, David T.; Bassett, Dan


    We have developed a data-driven spectral expansion inversion method to place bounds on the downdip rupture depth of large megathrust earthquakes having good InSAR and GPS coverage. This inverse theory approach is used to establish the set of models that are consistent with the observations. In addition, the method demonstrates that the spatial resolution of the slip models depends on two factors: the spatial coverage and accuracy of the surface deformation measurements, and the slip depth. Application of this method to the 2010 Mw 8.8 Maule earthquake shows a slip maximum at 19 km depth tapering to zero at ˜40 km depth. In contrast, the continent-continent megathrust earthquakes of the Himalayas, for example the 2015 Mw 7.8 Gorkha earthquake, show a slip maximum at 9 km depth tapering to zero at ˜18 km depth. The main question is why the maximum slip depth of the continental megathrust earthquakes is only 50 per cent of that observed in oceanic megathrust earthquakes. To understand this difference, we have developed a simple 1-D heat conduction model that includes the effects of uplift and surface erosion. The relatively low erosion rates above the ocean megathrust result in a geotherm where the 450-600 °C transition is centred at ˜40 km depth. In contrast, the relatively high average erosion rates in the Himalayas, ˜1 mm yr-1, result in a geotherm where the 450-600 °C transition is centred at ˜20 km. Based on these new observations and models, we suggest that the effect of erosion rate on temperature explains the difference in the maximum depth of the seismogenic zone between Chile and the Himalayas.
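The erosion effect on the geotherm follows from steady 1-D conduction with upward advection, for which T(z) = T_b (1 - e^(-uz/κ)) / (1 - e^(-uL/κ)). A sketch with round illustrative thermal parameters (not the paper's values) shows a ~1 mm/yr erosion rate roughly halving the depth of a mid-crustal isotherm:

```python
import math

def isotherm_depth(T_iso, erosion_m_per_yr, kappa_m2_per_yr=31.5,
                   T_base=1300.0, L=100e3, dz=100.0):
    """Depth (m) of an isotherm for steady 1-D conduction with erosion
    (upward advection of rock through a surface-fixed frame):
    T(z) = T_base*(1 - exp(-u*z/k)) / (1 - exp(-u*L/k))."""
    u = erosion_m_per_yr
    denom = 1.0 - math.exp(-u * L / kappa_m2_per_yr)
    z = 0.0
    while z < L:
        T = T_base * (1.0 - math.exp(-u * z / kappa_m2_per_yr)) / denom
        if T >= T_iso:
            return z
        z += dz
    return L

slow = isotherm_depth(525.0, 0.05e-3)   # oceanic margin: low erosion
fast = isotherm_depth(525.0, 1.0e-3)    # Himalaya-like: ~1 mm/yr erosion
```

Faster erosion advects hot rock toward the surface, so the 450-600 °C band (here represented by its 525 °C midpoint) sits at roughly half the depth, which is the qualitative result quoted above.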

  16. Sensitivity of coronal loop sausage mode frequencies and decay rates to radial and longitudinal density inhomogeneities: a spectral approach (United States)

    Cally, Paul S.; Xiong, Ming


    Fast sausage modes in solar magnetic coronal loops are fully contained only in unrealistically short, dense loops. Otherwise they are leaky, losing energy to their surroundings as outgoing waves, which causes any oscillation to decay exponentially in time. Simultaneous observations of both period and decay rate therefore reveal the eigenfrequency of the observed mode, and potentially give insight into the tubes' nonuniform internal structure. In this article, a global spectral description of the oscillations is presented that results in an implicit matrix eigenvalue equation whose eigenvalues are associated predominantly with the diagonal terms of the matrix; the off-diagonal terms vanish identically if the tube is uniform. A linearized perturbation approach, applied with respect to a uniform reference model, is developed that makes the eigenvalues explicit. The implicit eigenvalue problem is nevertheless easily solved numerically, and it is shown that knowledge of the real and imaginary parts of the eigenfrequency is sufficient to determine the width and density contrast of a boundary layer over which the tubes' enhanced internal densities drop to ambient values. Linearized density kernels are developed that show sensitivity only to the extreme outside of the loops for radial fundamental modes, especially for small density enhancements, with no sensitivity to the core. Higher radial harmonics do show some internal sensitivity, but these will be more difficult to observe. Only kink modes are sensitive to the tube centres. Variation in internal and external Alfvén speed along the loop is shown to have little effect on the fundamental dimensionless eigenfrequency, though the associated eigenfunction becomes more compact at the loop apex as stratification increases, or may even be displaced from the apex.

  17. Remote Sensing Image Fusion at the Segment Level Using a Spatially-Weighted Approach: Applications for Land Cover Spectral Analysis and Mapping

    Directory of Open Access Journals (Sweden)

    Brian Johnson


    Full Text Available Segment-level image fusion involves segmenting a higher spatial resolution (HSR) image to derive boundaries of land cover objects, and then extracting additional descriptors of image segments (polygons) from a lower spatial resolution (LSR) image. In past research, an unweighted segment-level fusion (USF) approach, which extracts information from a resampled LSR image, resulted in more accurate land cover classification than the use of HSR imagery alone. However, simply fusing the LSR image with segment polygons may lead to significant errors due to the high level of noise in pixels along the segment boundaries (i.e., pixels containing multiple land cover types). To mitigate this, a spatially-weighted segment-level fusion (SWSF) method was proposed for extracting descriptors (mean spectral values) of segments from LSR images. SWSF reduces the weights of LSR pixels located on or near segment boundaries to reduce errors in the fusion process. Compared to the USF approach, SWSF extracted more accurate spectral properties of land cover objects when the ratio of the LSR image resolution to the HSR image resolution was greater than 2:1, and SWSF was also shown to increase classification accuracy. SWSF can be used to fuse any type of imagery at the segment level, since it is insensitive to spectral differences between the LSR and HSR images (e.g., different spectral ranges of the images or different image acquisition dates).
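The boundary down-weighting can be sketched directly. The 4-neighbourhood test and the fixed weight below are simplifications of SWSF (which weights by distance to the segment boundary), and the arrays are toy data:

```python
import numpy as np

def spatially_weighted_means(labels, band, boundary_weight=0.2):
    """Per-segment weighted mean of a lower-resolution band, where pixels
    whose 4-neighbourhood crosses a segment boundary (mixed pixels) get a
    reduced weight -- a simplified stand-in for SWSF's distance weighting."""
    boundary = np.zeros(labels.shape, dtype=bool)
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    boundary[1:, :] |= labels[1:, :] != labels[:-1, :]
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    boundary[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    w = np.where(boundary, boundary_weight, 1.0)
    return {int(seg): float(np.sum(w[labels == seg] * band[labels == seg])
                            / np.sum(w[labels == seg]))
            for seg in np.unique(labels)}

labels = np.repeat([[0, 0, 0, 1, 1, 1]], 4, axis=0)       # two segments
band = np.tile([0.0, 0.0, 0.5, 0.5, 1.0, 1.0], (4, 1))    # mixed boundary cols
means = spatially_weighted_means(labels, band)
# the 0.5 boundary pixels barely pull the means off the pure 0.0 / 1.0 values
```

An unweighted mean of segment 0 would be 0.5/3 ≈ 0.17; down-weighting the mixed column pulls it to about 0.05, illustrating why SWSF recovers cleaner per-object spectra.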

  18. Spectral Analysis on Time-Course Expression Data: Detecting Periodic Genes Using a Real-Valued Iterative Adaptive Approach

    Directory of Open Access Journals (Sweden)

    Kwadwo S. Agyepong


    Full Text Available Time-course expression profiles and methods for spectrum analysis have been applied for detecting transcriptional periodicities, which are valuable patterns for unraveling genes associated with cell cycle and circadian rhythm regulation. However, most of the proposed methods suffer from restrictions and large false-positive rates to a certain extent. Additionally, in some experiments, arbitrarily irregular sampling times as well as the presence of high noise and small sample sizes make accurate detection a challenging task. A novel scheme for detecting periodicities in time-course expression data is proposed, in which a real-valued iterative adaptive approach (RIAA), originally proposed for signal processing, is applied for periodogram estimation. The inferred spectrum is then analyzed using Fisher's hypothesis test. With a proper p-value threshold, periodic genes can be detected. A periodic signal, two nonperiodic signals, and four sampling strategies were considered in the simulations, including both bursts and drops. In addition, two real yeast datasets were used for validation. The simulations and real data analysis reveal that RIAA can perform competitively with existing algorithms. The advantage of RIAA is manifested when the expression data are highly irregularly sampled and when the number of cycles covered by the sampling time points is very small.
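The periodogram-plus-Fisher pipeline fits in a few lines; here a plain FFT periodogram stands in for the RIAA estimator, and the sampling is regular (the irregular-sampling case is exactly what RIAA improves on):

```python
import math
import numpy as np

def fisher_g_test(x):
    """Periodogram of a regularly sampled profile and Fisher's exact
    g-test for the presence of a single periodic component."""
    x = np.asarray(x, float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x)[1:]) ** 2       # periodogram, DC removed
    n = spec.size
    g = spec.max() / spec.sum()                  # Fisher's g statistic
    p = sum((-1) ** (k - 1) * math.comb(n, k) * (1 - k * g) ** (n - 1)
            for k in range(1, int(1.0 / g) + 1))
    return g, min(max(p, 0.0), 1.0)

t = np.arange(48)                                # 48 time points
rng = np.random.default_rng(2)
periodic = np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=48)
g_per, p_per = fisher_g_test(periodic)           # tiny p: call it periodic
g_noise, p_noise = fisher_g_test(rng.normal(size=48))
```

Genes whose p-value falls below the chosen threshold are reported as periodic; the noise-only profile yields a much larger p-value.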

  19. Sensing the Sentence: An Embodied Simulation Approach to Rhetorical Grammar (United States)

    Rule, Hannah J.


    This article applies the neuroscientific concept of embodied simulation--the process of understanding language through visual, motor, and spatial modalities of the body--to rhetorical grammar and sentence-style pedagogies. Embodied simulation invigorates rhetorical grammar instruction by attuning writers to the felt effects of written language,…

  20. Communicating Insights from Complex Simulation Models: A Gaming Approach. (United States)

    Vennix, Jac A. M.; Geurts, Jac L. A.


    Describes design principles followed in developing an interactive microcomputer-based simulation to study financial and economic aspects of the Dutch social security system. The main goals are to improve participants' insights into the formal simulation model, and to improve policy development skills. Plans for future research are also discussed.…

  1. Overview of Computer Simulation Modeling Approaches and Methods (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett


    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  2. Simulating a Range of Regolith Porosities in the Lab: An Investigation into the Effects of Porosity on Spectral Measurements of Olivine (United States)

    Evans, R.; Bowles, N. E.; Donaldson Hanna, K. L.


    Our current understanding of the composition of planetary bodies primarily comes from remote sensing spectroscopic observations. The interpretation of spectroscopic data requires analogue mineral spectra measured in the lab under appropriate environmental conditions. This is particularly true in the thermal infrared. At these wavelengths porosity, particle size, and near-surface environmental conditions have significant effects on the wavelength position and spectral contrast of diagnostic features. To isolate the effects due to porosity, diffuse reflectance measurements were made from 2.5 to 25 µm of a fine particulate San Carlos olivine sample (<25 µm). An experimental set-up was developed to prepare the olivine sample with a range of porosities (40% to 85%). The olivine sample, prepared with two different porosities (45% and 84%), was also measured in thermal emission from 6 to 25 µm in the University of Oxford's Simulated Lunar Environment Chamber. When measured in diffuse reflectance, we find that as the porosity increases the Christiansen feature (CF, a reflection minimum or emissivity maximum near 8 µm) shifts to longer wavelengths. In the thermal emissivity spectral measurements, we see no discernible shift in the CF position as the porosity changes. In both reflectance and emission the strength and position of the transparency feature (the spectral region from 11 to 13 µm where volume scattering dominates) behaves as expected, as the strength of the feature increases with porosity. In reflectance the relative strength of the reststrahlen bands (RB) were not observed to change systematically with porosity. In this presentation we provide details of our experimental set-up, the range of porosities simulated in the lab, and our spectroscopic results. These new measurements place important constraints for interpreting remote sensing measurements of planetary bodies.

  3. Theoretical Characterization of the Spectral Density of the Water-Soluble Chlorophyll-Binding Protein from Combined Quantum Mechanics/Molecular Mechanics Molecular Dynamics Simulations. (United States)

    Rosnik, Andreana M; Curutchet, Carles


    Over the past decade, both experimentalists and theorists have worked to develop methods to describe pigment-protein coupling in photosynthetic light-harvesting complexes in order to understand the molecular basis of quantum coherence effects observed in photosynthesis. Here we present an improved strategy based on the combination of quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations and excited-state calculations to predict the spectral density of electronic-vibrational coupling. We study the water-soluble chlorophyll-binding protein (WSCP) reconstituted with Chl a or Chl b pigments as the system of interest and compare our work with data obtained by Pieper and co-workers from differential fluorescence line-narrowing spectra (Pieper et al. J. Phys. Chem. B 2011, 115 (14), 4042-4052). Our results demonstrate that the use of QM/MM MD simulations, where the nuclear positions are still propagated at the classical level, leads to a striking improvement of the predicted spectral densities in the middle- and high-frequency regions, where they nearly reach quantitative accuracy. This demonstrates that the so-called "geometry mismatch" problem related to the use of low-quality structures in QM calculations, and not the quantum features of the pigments' high-frequency motions, causes the failure of previous studies relying on similar protocols. Thus, this work paves the way toward quantitative predictions of pigment-protein coupling and the comprehension of quantum coherence effects in photosynthesis.
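The last step of such a protocol, going from an energy-gap trajectory to a spectral density, can be sketched as follows. This uses one common "harmonic" quantum-correction prescription, J(ω) ∝ (βω/π) times the cosine transform of the classical gap autocorrelation; conventions differ across the literature, and the trajectory below is synthetic rather than from a QM/MM run.

```python
import numpy as np

C_LIGHT_CM = 2.99792458e10     # speed of light, cm/s
KB_CM = 0.6950356              # Boltzmann constant, cm^-1 per K

def spectral_density(gap_cm, dt_s, temp_K=300.0):
    """Spectral density (arbitrary units) from the classical
    autocorrelation of the excitation-energy gap (in cm^-1)."""
    du = gap_cm - gap_cm.mean()
    n = du.size
    # unbiased autocorrelation estimate, tapered to damp the noisy tail
    acf = np.correlate(du, du, mode="full")[n - 1:] / np.arange(n, 0, -1)
    acf *= np.hanning(2 * n)[n:]
    J = np.real(np.fft.rfft(acf))               # cosine transform
    freq_cm = np.fft.rfftfreq(n, dt_s) / C_LIGHT_CM
    return freq_cm, (freq_cm / (KB_CM * temp_K)) / np.pi * J

# synthetic gap trajectory: one underdamped mode at 200 cm^-1 plus noise
rng = np.random.default_rng(3)
t = np.arange(4096) * 2e-15                     # 2 fs MD snapshots
gap = 50.0 * np.cos(2 * np.pi * 200.0 * C_LIGHT_CM * t) + rng.normal(0, 5.0, t.size)
freq, J = spectral_density(gap, 2e-15)
peak = freq[1 + np.argmax(J[1:])]               # skip the zero-frequency bin
```

The recovered peak sits at the input mode frequency; with gaps computed from QM/MM snapshots instead of a synthetic cosine, the same transform yields the pigment's spectral density.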

  4. Simulation of ultrasonic wave propagation in anisotropic poroelastic bone plate using hybrid spectral/finite element method. (United States)

    Nguyen, Vu-Hieu; Naili, Salah


    This paper deals with the modeling of guided wave propagation in in vivo cortical long bone, which is known to be an anisotropic medium with functionally graded porosity. The bone is modeled as an anisotropic poroelastic material using Biot's theory formulated in the high-frequency domain. A hybrid spectral/finite element formulation has been developed to find the time-domain solution of ultrasonic waves propagating in a poroelastic plate immersed in two fluid half-spaces. The numerical technique is based on a combined Laplace-Fourier transform, which yields a problem of reduced dimension in the frequency-wavenumber domain. In the spectral domain, since radiation conditions representing the infinite fluid half-spaces may be introduced exactly, only the heterogeneous solid layer needs to be analyzed with the finite element method. Several numerical tests are presented, showing very good performance of the proposed procedure. A preliminary study of the first-arrival signal velocities computed using equivalent elastic and poroelastic models is also presented. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Exploring Simulation Utilization and Simulation Evaluation Practices and Approaches in Undergraduate Nursing Education (United States)

    Zitzelsberger, Hilde; Coffey, Sue; Graham, Leslie; Papaconstantinou, Efrosini; Anyinam, Charles


    Simulation-based learning (SBL) is rapidly becoming one of the most significant teaching-learning-evaluation strategies available in undergraduate nursing education. While there is indication within the literature and anecdotally about the benefits of simulation, abundant and strong evidence that supports the effectiveness of simulation for…

  6. A Hands-on Approach to Evolutionary Simulation

    DEFF Research Database (Denmark)

    Valente, Marco; Andersen, Esben Sloth


    in an industry (or an economy). To abbreviate we call such models NelWin models. The new system for the programming and simulation of such models is called the Laboratory for simulation development - abbreviated as Lsd. The paper is meant to allow readers to use the Lsd version of a basic NelWin model: observe...... the model content, run the simulation, interpret the results, modify the parameterisation, etc. Since the paper deals with the implementation of a fairly complex set of models in a fairly complex programming and simulation system, it does not contain full documentation of NelWin and Lsd. Instead we hope...... to give the reader a first introduction to NelWin and Lsd and inspire a further exploration of them....

  7. Simulation of electron spin resonance spectroscopy in diverse environments: An integrated approach (United States)

    Zerbetto, Mirco; Polimeno, Antonino; Barone, Vincenzo


    We discuss in this work a new software tool, named E-SpiReS (Electron Spin Resonance Simulations), aimed at the interpretation of dynamical properties of molecules in fluids from electron spin resonance (ESR) measurements. The code implements an integrated computational approach (ICA) for the calculation of relevant molecular properties that are needed in order to obtain spectral lines. The protocol encompasses information from the atomistic level (quantum mechanical) to the coarse-grained level (hydrodynamical), and evaluates ESR spectra for rigid or flexible single or multi-labeled paramagnetic molecules in isotropic and ordered phases, based on a numerical solution of a stochastic Liouville equation. E-SpiReS automatically interfaces all the computational methodologies scheduled in the ICA in a way that is completely transparent to the user, who controls the whole calculation flow via a graphical interface. Parallelized algorithms are employed to allow execution on computing clusters, and a Java web applet has been developed that makes it possible to work from any operating system, avoiding recompilation problems. E-SpiReS has been used in the study of a number of different systems and two relevant cases are reported to underline the promising applicability of the ICA to complex systems and the importance of similar software tools in handling a laborious protocol. Program summary Program title: E-SpiReS Catalogue identifier: AEEM_v1_0 Program summary URL: Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL v2.0 No. of lines in distributed program, including test data, etc.: 311 761 No. of bytes in distributed program, including test data, etc.: 10 039 531 Distribution format: tar.gz Programming language: C (core programs) and Java (graphical interface) Computer: PC and Macintosh Operating system: Unix and Windows Has the code been vectorized or

  8. Simulation and analysis of Au-MgF2 structure in plasmonic sensor in near infrared spectral region (United States)

    Sharma, Anuj K.


    Plasmonic sensor based on metal-dielectric combination of gold and MgF2 layers is studied in near infrared (NIR) spectral region. An emphasis is given on the effect of variable thickness of MgF2 layer in combination with operating wavelength and gold layer thickness on the sensor's performance in NIR. It is established that the variation in MgF2 thickness in connection with plasmon penetration depth leads to significant variation in sensor's performance. The analysis leads to a conclusion that taking smaller values of MgF2 layer thickness and operating at longer NIR wavelength leads to enhanced sensing performance. Also, fluoride glass can provide better sensing performance than chalcogenide glass and silicon substrate.

  9. Object-oriented approach for gas turbine engine simulation (United States)

    Curlett, Brian P.; Felder, James L.


    An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules will interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
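The component/solver separation described above can be caricatured in a few lines of object-oriented code. The class names and the simple relaxation scheme below are invented for illustration and are not NPSS code:

```python
class Component:
    """Base class: each engine component exposes a residual that the
    solver drives toward zero to reach a consistent operating point."""
    def residual(self, state):
        raise NotImplementedError

class Compressor(Component):
    """Toy component: its residual is the mismatch between the current
    pressure ratio and a target value (a hypothetical stand-in for a
    real component map)."""
    def __init__(self, target_ratio):
        self.target = target_ratio
    def residual(self, state):
        return state["pressure_ratio"] - self.target

class Solver:
    """Relaxed fixed-point solver over all component residuals; a real
    system would use a Newton-type solver behind the same interface."""
    def __init__(self, components, relax=0.5):
        self.components, self.relax = components, relax
    def solve(self, state, iters=100):
        for _ in range(iters):
            for comp in self.components:
                state["pressure_ratio"] -= self.relax * comp.residual(state)
        return state

state = Solver([Compressor(target_ratio=12.0)]).solve({"pressure_ratio": 1.0})
print(state["pressure_ratio"])  # converges to the 12.0 target
```

The point of such a design is that the solver never needs to know what a compressor is; any object implementing the component interface can be dropped into the iteration.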

  10. DSNP: a new approach to simulate nuclear power plants

    International Nuclear Information System (INIS)

    Saphier, D.


    The DSNP (Dynamic Simulator for Nuclear Power-plants) is a special-purpose, block-oriented simulation language. It provides for simulations of a large variety of nuclear power plants, or various parts of a power plant, in a simple, straightforward manner. The system is composed of five basic elements, namely, the DSNP language, the precompiler (or DSNP language translator), the component library, the document generator, and the system data files. The DSNP library of modules includes self-contained models of components or physical processes found in a nuclear power plant, and various auxiliary modules such as material properties, control modules, integration schemes, basic transfer functions, etc. In its final form DSNP will have four libraries.

  11. [Simulation of approaching with a family of a deceased patient concerning organ donation]. (United States)

    Maroudy, Daniel; Temini, Hechmi; Cazalot, Sylvie; Fernez, Richard; HMaied, Motassem


    The simulation of the approach to relatives regarding organ donation consists of reproducing such an interview under conditions similar to those encountered in the intensive care unit. It includes the preparation of the discussion, the way the family is approached, the announcement of the death of their relative, and the dialogue with them concerning donation. Simulation is one of the best ways of training hospital organ donation coordinators in this approach. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  12. A computer simulation approach to measurement of human control strategy (United States)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III


    Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.
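A preview controller of this general kind can be sketched in a few lines: the stick command is a gain-weighted sum of the errors between the previewed track points and the current cursor position. The gains, horizon, and track below are hypothetical; this is not HOPE's internal model:

```python
def preview_step(track, pos, i, gains):
    """One control update: weight the errors between upcoming track
    points (the preview window) and the current cursor position."""
    cmd = 0.0
    for k, g in enumerate(gains):
        j = min(i + k, len(track) - 1)     # clamp the preview at track end
        cmd += g * (track[j] - pos)
    return cmd                             # stick deflection ~ commanded step

track = [0.0] * 10 + [1.0] * 40            # a step change in the track
pos, path = 0.0, []
for i in range(len(track)):
    pos += preview_step(track, pos, i, gains=(0.4, 0.2, 0.1))
    path.append(pos)
print(path[-1])  # cursor settles on the new track level
```

Because the controller sees the step before reaching it, the cursor begins moving ahead of the transition, which is the signature behavior a preview-tracking model is meant to capture.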

  13. General approach to Monte Carlo simulation of crystal growth

    International Nuclear Information System (INIS)

    Cherepanova, T.A.


    This paper presents a statistical-mechanical description of techniques valid for simulating crystal growth in both single-component and multi-component crystals, based on previously developed ideas. A lattice model of a two-phase crystal-melt system is considered. The authors consider general features of crystallization in metal-type binary systems (growth on atomically rough (001) faces) by using simulation techniques and by solving kinetic equations. They choose as a model those systems with primitive cubic lattice symmetry.

  14. Discrete event simulation versus conventional system reliability analysis approaches

    DEFF Research Database (Denmark)

    Kozine, Igor


    Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models...

  15. A Model Management Approach for Co-Simulation Model Evaluation

    NARCIS (Netherlands)

    Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno


    Simulating formal models is a common means of validating the correctness of a system design and reducing time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software

  16. Romans vs. Barbarians: A Simulation Approach to Learning. (United States)

    Balaban, Richard


    Sixth-grade students relive the history of the fall of Rome in a simulation game. An example of student comparisons of Roman and Barbarian strategic advantages indicates how the game increased their understanding of the causes of the fall of the Roman Empire. (AM)

  17. A simulation-based optimisation approach to control nitrogen ...

    African Journals Online (AJOL)

    Two operating strategies are investigated by dynamic simulations performed with ASM1: • A fixed aeration tank volume with a fixed MLTSS concentration • A variable aeration volume tank with a variable MLTSS concentration. It is demonstrated that the variable aeration tank volume strategy is more efficient than the fixed ...

  18. Fast 2D Simulation of Superconductors: a Multiscale Approach

    DEFF Research Database (Denmark)

    Rodriguez Zermeno, Victor Manuel; Sørensen, Mads Peter; Pedersen, Niels Falsig


    This work presents a method to calculate AC losses in thin conductors such as the commercially available second generation superconducting wires through a multiscale meshing technique. The main idea is to use large aspect ratio elements to accurately simulate thin material layers. For a single th...

  19. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)


    Simulation of the coating process is a very promising approach for the understanding of thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed, as an example, for an existing IBS coating plant to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data agree very well.

  20. Simulation Approach for Timing Analysis of Genetic Logic Circuits

    DEFF Research Database (Denmark)

    Baig, Hasan; Madsen, Jan


    in a manner similar to electronic logic circuits, but they are much more stochastic and hence much harder to characterize. In this article, we introduce an approach to analyze the threshold value and timing of genetic logic circuits. We show how this approach can be used to analyze the timing behavior...... of single and cascaded genetic logic circuits. We further analyze the timing sensitivity of circuits by varying the degradation rates and concentrations. Our approach can be used not only to characterize the timing behavior but also to analyze the timing constraints of cascaded genetic logic circuits...

  1. Simulation

    DEFF Research Database (Denmark)

    Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J


    Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable...... performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used....

  2. Fat versus Thin Threading Approach on GPUs: Application to Stochastic Simulation of Chemical Reactions

    KAUST Repository

    Klingbeil, Guido


    We explore two different threading approaches on a graphics processing unit (GPU) exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers, potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation of the reaction system's size. © 2006 IEEE.
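The underlying serial algorithm, Gillespie's direct-method SSA, is compact; the GPU work parallelizes many independent realizations of it across threads. A minimal serial sketch with a toy decay reaction (illustrative only, not the paper's CUDA code):

```python
import random

def ssa_direct(x, propensities, stoich, t_end, seed=1):
    """Gillespie's direct-method SSA: sample the time to the next
    reaction from an exponential, then pick which reaction fires
    proportionally to its propensity."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a = [p(x) for p in propensities]   # current reaction propensities
        a0 = sum(a)
        if a0 == 0.0:
            break                          # no reaction can fire
        tau = rng.expovariate(a0)          # waiting time to next reaction
        if t + tau > t_end:
            break
        t += tau
        u, r = rng.random() * a0, 0
        while r < len(a) - 1 and u > a[r]: # select reaction r with prob a_r/a0
            u -= a[r]
            r += 1
        for s, dx in enumerate(stoich[r]): # apply the state change
            x[s] += dx
    return x

# decay A -> 0 at rate c = 1.0, starting from 1000 molecules
final = ssa_direct([1000], [lambda x: 1.0 * x[0]], [(-1,)], t_end=1.0)
print(final[0])  # mean of N(t_end) is 1000 * exp(-1), about 368
```

Each GPU thread (thin) or thread group (fat) would run one such realization with its own random stream; the fat/thin trade-off in the paper concerns where the state vector `x` and propensities live in the memory hierarchy.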

  3. Improving representation of convective transport for scale-aware parameterization: 1. Convection and cloud properties simulated with spectral bin and bulk microphysics: CRM Model Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Jiwen [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Yi-Chin [Pacific Northwest National Laboratory, Richland Washington USA; Air Resources Board, Sacramento California USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton Virginia USA; North, Kirk [Department of Atmospheric and Oceanic Sciences, McGill University, Montréal Québec Canada; Collis, Scott [Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Dong, Xiquan [Department of Atmospheric Sciences, University of North Dakota, Grand Forks North Dakota USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego, La Jolla California USA; Chen, Qian [Key Laboratory for Aerosol-Cloud-Precipitation of China Meteorological Administration, Nanjing University of Information Science and Technology, Nanjing China; Kollias, Pavlos [Pacific Northwest National Laboratory, Richland Washington USA; Ghan, Steven J. [Pacific Northwest National Laboratory, Richland Washington USA


    The ultimate goal of this study is to improve the representation of convective transport by cumulus parameterization for mesoscale and climate models. As Part 1 of the study, we perform extensive evaluations of cloud-resolving simulations of a squall line and mesoscale convective complexes in midlatitude continental and tropical regions using the Weather Research and Forecasting model with spectral bin microphysics (SBM) and with two double-moment bulk microphysics schemes: a modified Morrison (MOR) and Milbrandt and Yau (MY2). Compared to observations, in general, SBM gives better simulations of precipitation and vertical velocity of convective cores than MOR and MY2 and therefore will be used for analysis of scale dependence of eddy transport in Part 2. The common features of the simulations for all convective systems are (1) the model tends to overestimate convection intensity in the middle and upper troposphere, but SBM can alleviate much of the overestimation and reproduce the observed convection intensity well; (2) the model greatly overestimates Ze in convective cores, especially for weak updraft velocities; and (3) the model performs better for midlatitude convective systems than the tropical system. The modeled mass fluxes of the midlatitude systems are not sensitive to microphysics schemes but are very sensitive for the tropical case, indicating strong microphysical modification of convection. Cloud microphysical measurements of rain, snow, and graupel in convective cores will be critically important to further elucidate issues within cloud microphysics schemes.

  4. Aerosol plume transport and transformation in high spectral resolution lidar measurements and WRF-Flexpart simulations during the MILAGRO Field Campaign (United States)

    de Foy, B.; Burton, S. P.; Ferrare, R. A.; Hostetler, C. A.; Hair, J. W.; Wiedinmyer, C.; Molina, L. T.


    The Mexico City Metropolitan Area (MCMA) experiences high loadings of atmospheric aerosols from anthropogenic sources, biomass burning and wind-blown dust. This paper uses a combination of measurements and numerical simulations to identify different plumes affecting the basin and to characterize transformation inside the plumes. The High Spectral Resolution Lidar on board the NASA LaRC B-200 King Air aircraft measured extinction coefficients and extinction to backscatter ratio at 532 nm, and backscatter coefficients and depolarization ratios at 532 and 1064 nm. These can be used to identify aerosol types. The measurement curtains are compared with particle trajectory simulations using WRF-Flexpart for different source groups. The good correspondence between measurements and simulations suggests that the aerosol transport is sufficiently well characterized by the models to estimate aerosol types and ages. Plumes in the basin undergo complex transport, and are frequently mixed together. Urban aerosols are readily identifiable by their low depolarization ratios and high lidar ratios, and dust by the opposite properties. Fresh biomass burning plumes have very low depolarization ratios which increase rapidly with age. This rapid transformation is consistent with the presence of atmospheric tar balls in the fresh plumes.

  5. Aerosol plume transport and transformation in high spectral resolution lidar measurements and WRF-Flexpart simulations during the MILAGRO Field Campaign

    Directory of Open Access Journals (Sweden)

    B. de Foy


    Full Text Available The Mexico City Metropolitan Area (MCMA) experiences high loadings of atmospheric aerosols from anthropogenic sources, biomass burning and wind-blown dust. This paper uses a combination of measurements and numerical simulations to identify different plumes affecting the basin and to characterize transformation inside the plumes. The High Spectral Resolution Lidar on board the NASA LaRC B-200 King Air aircraft measured extinction coefficients and extinction to backscatter ratio at 532 nm, and backscatter coefficients and depolarization ratios at 532 and 1064 nm. These can be used to identify aerosol types. The measurement curtains are compared with particle trajectory simulations using WRF-Flexpart for different source groups. The good correspondence between measurements and simulations suggests that the aerosol transport is sufficiently well characterized by the models to estimate aerosol types and ages. Plumes in the basin undergo complex transport, and are frequently mixed together. Urban aerosols are readily identifiable by their low depolarization ratios and high lidar ratios, and dust by the opposite properties. Fresh biomass burning plumes have very low depolarization ratios which increase rapidly with age. This rapid transformation is consistent with the presence of atmospheric tar balls in the fresh plumes.

  6. A multiscale approach to accelerate pore-scale simulation of porous electrodes (United States)

    Zheng, Weibo; Kim, Seung Hyun


    A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
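The decomposition idea can be illustrated on a toy potential problem: start the iterative solver from a macroscopic profile so that only the small local variation remains to be resolved, which removes the slowly converging large-scale error. The sketch below (Jacobi sweeps on a 1D Poisson equation) is an analogy chosen for brevity, not the authors' scheme:

```python
import numpy as np

def jacobi(phi, rhs, h, iters):
    """Plain Jacobi sweeps for the 1D Poisson problem phi'' = rhs
    with fixed (Dirichlet) end values."""
    for _ in range(iters):
        phi[1:-1] = 0.5 * (phi[:-2] + phi[2:] - h * h * rhs[1:-1])
    return phi

n = 201
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
rhs = np.sin(20 * np.pi * x)               # fine-scale ("pore") structure
exact = x - rhs / (400 * np.pi ** 2)       # analytic solution for reference

# multiscale start: macroscopic linear profile + zero local variation
ms = jacobi(x.copy(), rhs, h, iters=2000)
# naive start: zeros in the interior, boundary values only
naive = np.zeros(n)
naive[-1] = 1.0
naive = jacobi(naive, rhs, h, iters=2000)

err_ms = np.abs(ms - exact).max()
err_naive = np.abs(naive - exact).max()
print(err_ms, err_naive)  # multiscale start is far closer after the same sweeps
```

Jacobi damps fine-scale error quickly but large-scale error very slowly, so seeding the iteration with the macroscopic part leaves only the fast-converging local variation; this mirrors, in spirit, the speedup of decomposing a quantity into macroscopic and local contributions.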

  7. SNR and BER Models and the Simulation for BER Performance of Selected Spectral Amplitude Codes for OCDMA

    Directory of Open Access Journals (Sweden)

    Abdul Latif Memon


    Full Text Available Many encoding schemes are used in OCDMA (Optical Code Division Multiple Access) networks, but SAC (Spectral Amplitude Codes) are widely used. SAC is considered an effective arrangement to eliminate the dominant noise called MAI (Multiple Access Interference). Various codes are studied for evaluation with respect to their performance against three noises, namely shot noise, thermal noise and PIIN (Phase Induced Intensity Noise). Various mathematical models for SNR (Signal to Noise Ratio) and BER (Bit Error Rate) are discussed, where the SNRs are calculated and BERs are computed using a Gaussian distribution assumption. After analyzing the results mathematically, it is concluded that the ZCC (Zero Cross Correlation) code performs better than the other selected SAC codes and can serve a larger number of active users than the other codes do. At various receiver power levels, the analysis points out that RDC (Random Diagonal Code) also performs better than the other codes. For the power interval between -10 and -20 dBm, the performance of RDC is better than that of ZCC. Their lowest BER values suggest that these codes should be part of an efficient and cost-effective OCDMA access network in the future.

  8. A simulation approach to decision making in IT service strategy. (United States)

    Orta, Elena; Ruiz, Mercedes


    We propose to use simulation modeling to support decision making within the scope of IT service strategy. Our main contribution is a simulation model that helps service providers analyze the consequences of changes both in the service capacity assigned to their customers and in the trend of service requests received, on the fulfillment of a business rule associated with the strategic goal of customer satisfaction. This business rule is set in the SLAs that the service provider and its customers agree to, and determines the maximum percentage of service requests that are permitted to be abandoned because they have exceeded the allowed waiting time. To illustrate the use and applications of the model, we include some of the experiments conducted and describe our conclusions.
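A business rule of this kind, a cap on the fraction of requests abandoned after waiting too long, can be checked with a small discrete-event sketch. The model below (multi-server queue, exponential arrivals and service, impatient requests, invented parameters) is an illustration of the idea, not the authors' model:

```python
import heapq
import random

def abandonment_fraction(n_servers, arrival_rate, service_rate, max_wait,
                         n_requests=20000, seed=1):
    """Fraction of requests abandoned because their wait would exceed
    max_wait, in a multi-server queue with exponential times."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers    # min-heap: earliest time each server frees up
    heapq.heapify(free_at)
    t, abandoned = 0.0, 0
    for _ in range(n_requests):
        t += rng.expovariate(arrival_rate)          # next arrival time
        start = max(t, free_at[0])                  # earliest possible start
        if start - t > max_wait:
            abandoned += 1                          # request gives up
            continue
        heapq.heapreplace(free_at, start + rng.expovariate(service_rate))
    return abandoned / n_requests

print(abandonment_fraction(5, 1.0, 1.0, 2.0))   # lightly loaded: near zero
print(abandonment_fraction(2, 10.0, 1.0, 0.5))  # overloaded: most abandon
```

Sweeping the capacity (`n_servers`) and arrival rate in such a model is exactly the kind of what-if analysis the abstract describes: find the cheapest capacity for which the abandonment fraction stays below the SLA threshold.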

  9. Evaluating Asset Pricing Models in a Simulated Multifactor Approach

    Directory of Open Access Journals (Sweden)

    Wagner Piazza Gaglianone


    Full Text Available In this paper a methodology to compare the performance of different stochastic discount factor (SDF) models is suggested. The starting point is the estimation of several factor models in which the choice of the fundamental factors comes from different procedures. Then, a Monte Carlo simulation is designed in order to simulate a set of gross returns with the objective of mimicking the temporal dependency and the observed covariance across gross returns. Finally, the artificial returns are used to investigate the performance of the competing asset pricing models through the Hansen and Jagannathan (1997) distance and some goodness-of-fit statistics of the pricing error. An empirical application is provided for the U.S. stock market.

  10. An Experimental Approach to Simulations of the CLIC Interaction Point

    DEFF Research Database (Denmark)

    Esberg, Jakob


    an understanding of definitions of processes and quantities that will be used throughout the rest of the thesis. The 7th chapter focuses on the parts of my work that is related to experiment. The main topic is the NA63 Trident experiment which will be discussed in detail. Results of the crystalline undulator...... to theoretical ones, and simulations are applied to the 3 TeV CLIC scenario. Here, experimentally based conclusions on the applicability of the theory for strong field production of pairs will be made. In the chapter on depolarization, simulations of the beam-beam depolarization will be presented. The chapter...... describes the details of the depolarization algorithm and the strong-field modifications to the theory. New results on the energy dependence of the luminosity weighted depolarization are presented. Here, possible schemes for spin measurements are presented and the relevance of these measurements...

  11. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi


    Full Text Available This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of return processes are unknown. We first generate simulation sample paths of the random returns by using an AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of the time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The results show the necessity of accounting for the time dependency of the value function.
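The AR bootstrap step can be sketched directly: fit an AR(1) model by least squares, then build sample paths by recursively resampling the empirical residuals. The following is a minimal illustration (function names and the toy data are assumptions, not the paper's code):

```python
import numpy as np

def ar1_bootstrap_paths(returns, n_paths, horizon, seed=0):
    """Bootstrap sample paths of returns: fit AR(1) by least squares,
    then recurse forward with resampled empirical residuals."""
    r = np.asarray(returns, dtype=float)
    x, y = r[:-1], r[1:]
    phi = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # AR(1) slope estimate
    c = y.mean() - phi * x.mean()                     # intercept estimate
    resid = y - (c + phi * x)                         # empirical residuals
    rng = np.random.default_rng(seed)
    paths = np.empty((n_paths, horizon))
    for p in range(n_paths):
        prev = r[-1]                                  # start from last observed return
        for t in range(horizon):
            prev = c + phi * prev + rng.choice(resid) # resampled innovation
            paths[p, t] = prev
    return paths

# toy data: a true AR(1) with slope 0.5 and innovation s.d. 0.1
rng = np.random.default_rng(1)
data = [0.0]
for _ in range(2000):
    data.append(0.5 * data[-1] + 0.1 * rng.standard_normal())
paths = ar1_bootstrap_paths(data, n_paths=200, horizon=50)
print(paths.shape)  # (200, 50)
```

Each bootstrapped path would then feed the backward recursion on the value function; resampling residuals rather than assuming a parametric distribution is what makes the method distribution-free.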

  12. Designing intelligent computer-based simulations: a pragmatic approach

    Directory of Open Access Journals (Sweden)

    Bernard M. Garrett


    Full Text Available There has been great interest in the potential use of multimedia computer-based learning (CBL) packages within higher education. The effectiveness of such systems, however, remains controversial. There are suggestions that such multimedia applications may hold no advantage over traditional formats (Barron and Atkins, 1994; Ellis, 1994; Laurillard, 1995; Simms, 1997; Leibowitz, 1999). One area where multimedia CBL may still prove its value is in the simulation of activities where experiential learning is expensive, undesirable or even dangerous.

  13. Approach for valuating the influence of laboratory simulation. (United States)

    Rosentritt, Martin; Behr, Michael; van der Zel, Jef M; Feilzer, Albert J


    The aim of this investigation was to determine the fracture resistance of zirconia fixed partial dentures (FPDs) after laboratory simulation. Failure types and failure rates during simulation were compared to available clinical data to estimate the relevance of the simulation. Thirty-two FPDs were fabricated from a zirconia ceramic and a corresponding ceramic veneer. The FPDs were adhesively bonded to human molars, and artificial aging was performed to investigate the survival rate during thermal cycling and mechanical loading (TCML1; 3.6Mio x 50N ML). Survival rates were compared to available clinical data and the TCML parameter "mastication force" was adapted accordingly for a second TCML run (TCML2; 3.6Mio x 100N ML). The fracture resistance of the FPDs which survived TCML was determined. FPDs were examined without TCML (control) or after TCML according to the literature (1.2Mio x 50N ML). Data were statistically analyzed (Mann-Whitney U-test) and curve fitting/regression analysis of the survival rates was performed. TCML reduced survival rates down to 63%. Failures during TCML were chipping of the veneering ceramic; no zirconia framework was damaged. Under clinical conditions comparable failures (chipping) are reported. The clinical survival rate (approximately 10%) is lower compared to TCML data because of the short period of observation. The fracture resistance after TCML was significantly reduced from 1058N (control) to values between 320 and 533N. The results indicate that TCML with 1.2Mio x 50N provides sufficient explanatory power. TCML with prolonged simulation time may allow the definition of a mathematical model for estimating future survival rates.

  14. A new approach to mixing length theory of convection for spherically symmetric supernova simulations (United States)

    Warren, Mackenzie; Couch, Sean


    We have developed a new approach to the mixing length theory of convection for use in spherically symmetric core-collapse supernova simulations. This approach is based on the results of multidimensional simulations with the goal of more accurately reproducing successful explosions, the composition and thermodynamic variables in regions where nucleosynthesis occurs, and observed quantities such as neutrino luminosities and energies. We compare this approach with standard mixing length theory and the results of multidimensional supernova simulations and discuss prospects for systematic studies of the nuclear equation of state and heavy element nucleosynthesis in core-collapse supernovae.

  15. Molecular dynamics simulation for PBR pebble tracking simulation via a random walk approach using Monte Carlo simulation. (United States)

    Lee, Kyoung O; Holmes, Thomas W; Calderon, Adan F; Gardner, Robin P


Using Monte Carlo (MC) simulation, random walks were used for pebble tracking in a two-dimensional geometry in the presence of a biased gravity field. We investigated the effect of viscosity damping in the presence of random Gaussian fluctuations. The particle tracks were generated by molecular dynamics (MD) simulation for a pebble bed reactor. The MD simulations were conducted using noncohesive Hertz-Mindlin contact theory, and the random-walk MC simulation was correlated with the MD results. This treatment can easily be extended to include the generation of transient gamma-ray spectra from a single pebble that contains a radioactive tracer. The inverse analysis thereof could then be made to determine the uncertainty in realistic measurements of the transient positions of that pebble by any radiation detection system designed for that purpose.
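As a rough illustration of the random-walk picture described above (not the authors' code), the sketch below advances a single pebble with a Langevin-type update: a constant gravity bias, viscous damping of the velocity, and Gaussian fluctuations. All parameter values (`gamma`, `sigma`, a bed floor at y = 0) are hypothetical.

```python
import math
import random

def simulate_pebble(steps=5000, dt=1e-3, gamma=5.0, g=9.81, sigma=0.5, seed=1):
    """2D random walk of one pebble: gravity bias, viscous damping of the
    velocity, and Gaussian fluctuations (all values hypothetical)."""
    rng = random.Random(seed)
    x, y = 0.0, 1.0          # start 1 m above the bed
    vx, vy = 0.0, 0.0
    for _ in range(steps):
        # Langevin update: damping -gamma*v, gravity bias, random kicks
        vx += (-gamma * vx) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        vy += (-gamma * vy - g) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += vx * dt
        y = max(0.0, y + vy * dt)   # floor of the packed bed
        if y == 0.0:
            vy = 0.0                # pebble rests on the bed
    return x, y

x, y = simulate_pebble()
print(y)  # pebble settles near the bed under the gravity bias
```

The gamma-ray-tracer extension mentioned in the abstract would then sample detector responses along the track positions produced by such a walk.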

  16. A Simulational approach to teaching statistical mechanics and kinetic theory

    International Nuclear Information System (INIS)

    Karabulut, H.


A computer simulation demonstrating how the Maxwell-Boltzmann distribution is reached in gases from a nonequilibrium distribution is presented. The algorithm can be generalized to the case of gas particles (atoms or molecules) with internal degrees of freedom, such as electronic excitations and vibrational-rotational energy levels, and to the case of a mixture of two different gases. By choosing the collision cross sections properly, one can create quasi-equilibrium distributions. For example, by choosing large same-atom cross sections and very small different-atom cross sections, one can create a mixture of two gases with different temperatures, in which the two gases interact weakly and come to equilibrium over a long time. Similarly, for one kind of atom with internal degrees of freedom, one can create situations in which the internal degrees of freedom come to equilibrium much later than the translational degrees of freedom. In all these cases the equilibrium distribution produced by the algorithm is the same as expected from statistical mechanics. The algorithm can also be extended to cover the case of chemical equilibrium, where species A and B react to form AB molecules. The laws of chemical equilibrium can be observed in this simulation, which can also help to teach the elusive concept of chemical potential.
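A minimal classroom version of such a relaxation demonstration can be written with random binary collisions that repartition the kinetic energy of each colliding pair. This energy-exchange model is an assumption here (the abstract does not give the exact algorithm), but it shows the same effect: a monoenergetic gas relaxes toward the Boltzmann energy distribution.

```python
import random

def relax_to_equilibrium(n=10000, collisions=200000, seed=0):
    """Start from a nonequilibrium (monoenergetic) gas and let random
    binary collisions repartition kinetic energy; the energy histogram
    relaxes toward the Boltzmann form exp(-E/kT)."""
    rng = random.Random(seed)
    energies = [1.0] * n            # every particle starts with E = 1
    for _ in range(collisions):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        pool = energies[i] + energies[j]
        share = rng.random()        # random repartition of the pair energy
        energies[i], energies[j] = share * pool, (1.0 - share) * pool
    return energies

energies = relax_to_equilibrium()
mean_e = sum(energies) / len(energies)
print(mean_e)  # ~1.0: total energy is conserved by every collision
```

Tagging each particle with a species or an internal-state label, and letting the collision probabilities play the role of cross sections, gives the two-temperature and slow-internal-relaxation demonstrations described above.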

  17. A new approach to flow simulation using hybrid models (United States)

    Solgi, Abazar; Zarei, Heidar; Nourani, Vahid; Bahmani, Ramin


The necessity of flow prediction in rivers for proper management of water resources, together with the need to determine the inflow to dam reservoirs and to design efficient flood warning systems, has always led water researchers to seek models with fast response and low error. In recent years, the development of artificial neural networks (ANNs) and wavelet theory, and the use of combinations of models, has helped researchers estimate river flow ever more accurately. In this study, daily and monthly scales were used to simulate the flow of the Gamasiyab River, Nahavand, Iran. The first simulation was done using two types of models, ANN and ANFIS. Then, using wavelet theory to decompose the input signals of the selected parameters, sub-signals were obtained and fed into the ANN and ANFIS models to obtain the hybrid models WANN and WANFIS. In addition to precipitation and flow, the parameters of temperature and evaporation were used to analyze their effects on the simulation. The results showed that using the wavelet transform improved the performance of the models on both the monthly and daily scales; it had a greater effect on the monthly scale, and WANFIS was the best model.
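The decomposition step that turns ANN/ANFIS into WANN/WANFIS can be sketched with a hand-rolled one-level Haar transform (the Haar wavelet is an assumption for illustration; the study does not specify the mother wavelet):

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) sub-signals of half length."""
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def decompose(signal, levels):
    """Multilevel decomposition: the detail sub-signals plus the final
    approximation are what would be fed as inputs to the ANN/ANFIS model."""
    subs = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        subs.append(detail)
    subs.append(approx)
    return subs

flow = [5.0, 7.0, 6.0, 8.0, 12.0, 11.0, 9.0, 10.0]  # toy daily flows
subs = decompose(flow, levels=2)
print([len(s) for s in subs])  # [4, 2, 2]
```

Because the Haar transform is orthonormal, the sub-signals carry exactly the energy of the original series, so nothing is lost before the data-driven model is trained on them.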

  18. Exact Green's function renormalization approach to spectral properties of open quantum systems driven by harmonically time-dependent fields (United States)

    Arrachea, Liliana


    We present an efficient method and a fast algorithm to exactly calculate spectral functions and one-body observables of open quantum systems described by lattice Hamiltonians with harmonically time-dependent terms and without many-body interactions. The theoretical treatment is based on the Keldysh nonequilibrium Green's function formalism. We illustrate the implementation of the technique in a paradigmatic model of a quantum pump driven by local fields oscillating in time with one and two harmonic components.

  19. Statistical analysis of modal parameters of a suspension bridge based on Bayesian spectral density approach and SHM data (United States)

    Li, Zhijun; Feng, Maria Q.; Luo, Longxi; Feng, Dongming; Xu, Xiuli


Uncertainty in modal parameter estimation appears to a significant extent in the structural health monitoring (SHM) practice of civil engineering, due to environmental influences and modeling errors. Reasonable methodologies are needed for processing this uncertainty. Bayesian inference can provide a promising and feasible identification solution for the purposes of SHM. However, there has been relatively little research on the application of the Bayesian spectral method to modal identification using SHM data sets. To extract modal parameters from the large data sets collected by an SHM system, the Bayesian spectral density algorithm was applied to address the uncertainty of mode extraction from the output-only response of a long-span suspension bridge. The most probable posterior values of the modal parameters and their uncertainties were estimated through Bayesian inference. A long-term variation and statistical analysis was performed using the sensor data sets collected from the SHM system of the suspension bridge over a one-year period. The t location-scale distribution was shown to be a better candidate function for the frequencies of the lower modes. On the other hand, the Burr distribution provided the best fit for the higher modes, which are sensitive to temperature. In addition, wind-induced variation of the modal parameters was also investigated. It was observed that both the damping ratios and modal forces increased during periods of typhoon excitation. Meanwhile, the modal damping ratios exhibited significant correlation with the spectral intensities of the corresponding modal forces.
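As a greatly simplified stand-in for the Bayesian spectral density approach (which fits a theoretical PSD to the data near each modal peak and quantifies posterior uncertainty), the sketch below only picks the peak of a naive periodogram of a simulated single-mode free decay; the sampling rate, frequency, and damping are hypothetical:

```python
import cmath
import math

def periodogram_peak(x, fs):
    """Naive O(n^2) DFT periodogram; returns the frequency of the
    largest spectral peak (positive frequencies only)."""
    n = len(x)
    best_k, best_p = 1, 0.0
    for k in range(1, n // 2):
        coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        p = abs(coef) ** 2
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n

# Free decay of a single structural mode: f_n = 2 Hz, 2% damping
fs, fn, zeta = 50.0, 2.0, 0.02
wn = 2 * math.pi * fn
x = [math.exp(-zeta * wn * t / fs) * math.cos(wn * math.sqrt(1 - zeta ** 2) * t / fs)
     for t in range(500)]
print(periodogram_peak(x, fs))  # ~2.0 Hz
```

The Bayesian method improves on this point estimate by treating the spectral density of the measured response as data and returning a posterior distribution for frequency and damping rather than a single peak location.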

  20. Spectral Classification of Galaxies at 0.5 <= z <= 1 in the CDFS: The Artificial Neural Network Approach (United States)

    Teimoorinia, H.


    The aim of this work is to combine spectral energy distribution (SED) fitting with artificial neural network techniques to assign spectral characteristics to a sample of galaxies at 0.5 <= z <= 1. The sample is selected from the spectroscopic campaign of the ESO/GOODS-South field, with 142 sources having photometric data from the GOODS-MUSIC catalog covering bands between ~0.4 and 24 μm in 10-13 filters. We use the CIGALE code to fit photometric data to Maraston's synthesis spectra to derive mass, specific star formation rate, and age, as well as the best SED of the galaxies. We use the spectral models presented by Kinney et al. as targets in the wavelength interval ~1200-7500 Å. Then a series of neural networks are trained, with average performance ~90%, to classify the best SED in a supervised manner. We consider the effects of the prominent features of the best SED on the performance of the trained networks and also test networks on the galaxy spectra of Coleman et al., which have a lower resolution than the target models. In this way, we conclude that the trained networks take into account all the features of the spectra simultaneously. Using the method, 105 out of 142 galaxies of the sample are classified with high significance. The locus of the classified galaxies in the three graphs of the physical parameters of mass, age, and specific star formation rate appears consistent with the morphological characteristics of the galaxies.

  1. ESD full chip simulation: HBM and CDM requirements and simulation approach

    Directory of Open Access Journals (Sweden)

    E. Franell


    Verification of ESD safety at the full-chip level is a major challenge for IC design. In particular, phenomena originating in the overall product setup pose a hurdle on the way to ESD-safe products. For stress according to the Charged Device Model (CDM), a stumbling block for simulation-based analysis is the complex current distribution among a huge number of internal nodes, leading to hardly predictable voltage drops inside the circuits.

    This paper describes a methodology for Human Body Model (HBM) simulations with improved ESD-failure coverage, and a novel methodology for replacing capacitive nodes within a resistive network by current sources for CDM simulation. This enables a highly efficient DC simulation that clearly marks CDM-relevant design weaknesses, allowing the software to be applied both during product development and for product verification.
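The idea of replacing charged capacitive nodes by current sources and then running a purely resistive DC analysis can be sketched with ordinary nodal analysis. The two-node network and all element values below are hypothetical, chosen only to show the mechanics of the solve:

```python
def solve_linear(a, b):
    """Gaussian elimination with partial pivoting for the nodal equations."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# Two internal nodes joined by R12, each tied to ground through R1g/R2g.
# The charged capacitance at node 1 is modeled as an injected current I1.
g1g, g12, g2g = 1 / 10.0, 1 / 5.0, 1 / 20.0    # conductances in siemens (hypothetical)
i_inj = [1.0, 0.0]                             # 1 A discharge current into node 1
g = [[g1g + g12, -g12],
     [-g12, g2g + g12]]
v = solve_linear(g, i_inj)
print(v)  # node voltages (about 7.14 V and 5.71 V here)
```

On a real full chip the same linear system simply has many thousands of nodes and is handled by a sparse solver; the voltage drops between adjacent nodes are what flag CDM-relevant design weaknesses.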

  2. Advances in the U.S. Navy Non-hydrostatic Unified Model of the Atmosphere (NUMA): LES as a Stabilization Methodology for High-Order Spectral Elements in the Simulation of Deep Convection (United States)

    Marras, Simone; Giraldo, Frank


    The prediction of extreme weather sufficiently ahead of its occurrence impacts society as a whole, and coastal communities specifically (e.g. Hurricane Sandy, which struck the eastern seaboard of the U.S. in the fall of 2012). With the final goal of resolving hurricanes at very high resolution and numerical accuracy, we have been developing the Non-hydrostatic Unified Model of the Atmosphere (NUMA), which solves the Euler and Navier-Stokes equations by arbitrary high-order element-based Galerkin methods on massively parallel computers. NUMA is a unified model with respect to the following criteria: (a) it is based on unified numerics, in that element-based Galerkin methods allow the user to choose between continuous (spectral element, CG) or discontinuous Galerkin (DG) methods and from a large spectrum of time integrators; (b) it is unified across scales, in that it can solve flow in limited-area mode (flow in a box) or in global mode (flow on the sphere). NUMA is the dynamical core that powers the U.S. Naval Research Laboratory's next-generation global weather prediction system NEPTUNE (Navy's Environmental Prediction sysTem Utilizing the NUMA corE). Because the solution of the Euler equations by high-order methods is prone to instabilities that must be damped in some way, we approach the problem of stabilization via an adaptive Large Eddy Simulation (LES) scheme that treats such instabilities by modeling the sub-grid-scale features of the flow. The novelty of our effort lies in the extension to high-order spectral elements, for low-Mach-number stratified flows, of a method originally designed for low-order adaptive finite elements in the high-Mach-number regime [1]. The Euler equations are regularized by means of a dynamically adaptive stress tensor proportional to the residual of the unperturbed equations. Its effect is close to none where the solution is sufficiently smooth, whereas it increases elsewhere, with a direct contribution to the

  3. A domain-reduction approach to bridging-scale simulation of one-dimensional nanostructures (United States)

    Qian, Dong; Phadke, Manas; Karpov, Eduard; Liu, Wing Kam


    We present a domain-reduction approach for the simulation of one-dimensional nanocrystalline structures. In this approach, the domain of interest is partitioned into coarse-scale and fine-scale regions, and the coupling between the two is implemented through a bridging-scale interfacial boundary condition. Atomistic simulation is used in the fine-scale region, while the discrete Fourier transform is applied to the coarse-scale region to yield a compact Green's function formulation that represents the effects of the coarse-scale domain upon the fine/coarse-scale interface. This approach facilitates fine-scale simulation without requiring the entire coarse-scale domain to be simulated. After illustration on a simple 1D problem and comparison with analytical solutions, the proposed method is applied to carbon nanotube structures. The robustness of the proposed multiscale method is demonstrated by comparison and verification of our results against benchmark results from fully atomistic simulations.

  4. An introduction to statistical computing a simulation-based approach

    CERN Document Server

    Voss, Jochen


    A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems.  Sampling-based simulation techniques are now an invaluable tool for exploring statistical models.  This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods.  It also includes some advanced met
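In the spirit of the book's opening topics, random number generation and Monte Carlo methods, the classic first sampling-based example is the Monte Carlo estimate of pi (this concrete example is ours, not taken from the book):

```python
import random

def mc_pi(n, seed=42):
    """Estimate pi by uniform sampling of the unit square:
    P(point falls inside the quarter circle) = pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

est = mc_pi(100_000)
print(est)  # close to 3.1416; the error shrinks like 1/sqrt(n)
```

The 1/sqrt(n) convergence of such estimators is exactly why the variance-reduction techniques covered in books of this kind matter in practice.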

  5. Modern approaches to accelerator simulation and on-line control

    International Nuclear Information System (INIS)

    Lee, M.; Clearwater, S.; Theil, E.; Paxson, V.


    COMFORT-PLUS consists of three parts: (1) COMFORT (Control Of Machine Function, ORbits, and Trajectories), which computes the machine lattice functions and transport matrices along a beamline; (2) PLUS (Prediction from Lattice Using Simulation) which finds or compensates for errors in the beam parameters or machine elements; and (3) a highly graphical interface to PLUS. The COMFORT-PLUS package has been developed on a SUN-3 workstation. The structure and use of COMFORT-PLUS are described, and an example of the use of the package is presented

  6. PKA spectral effects on subcascade structures and free defect survival ratio as estimated by cascade-annealing computer simulation

    International Nuclear Information System (INIS)

    Muroga, Takeo


    The free defect survival ratio is calculated by ''cascade-annealing'' computer simulation using the MARLOWE and modified DAIQUIRI codes in various cases of Primary Knock-on Atom (PKA) spectra. The number of subcascades is calculated by ''cut-off'' calculation using MARLOWE. The adequacy of these methods is checked by comparing the results with experiments (surface segregation measurements and Transmission Electron Microscope cascade defect observations). The correlation using the weighted average recoil energy as a parameter shows that the saturation of the free defect survival ratio at high PKA energies has a close relation to the cascade splitting into subcascades. (author)

  7. Fullerene C-70 characterization by C-13 NMR and the importance of the solvent and dynamics in spectral simulations

    Czech Academy of Sciences Publication Activity Database

    Kaminský, Jakub; Buděšínský, Miloš; Taubert, S.; Bouř, Petr; Straka, Michal


    Roč. 15, č. 23 (2013), s. 9223-9230 ISSN 1463-9076 R&D Projects: GA ČR GA13-03978S; GA ČR GPP208/10/P356; GA ČR GAP208/11/0105; GA MŠk(CZ) LH11033; GA ČR GA203/09/2037 Grant - others:AV ČR(CZ) M200551205 Institutional support: RVO:61388963 Keywords : fullerene * NMR * simulations * DFT Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 4.198, year: 2013

  8. A spectral geometric model for Compton single scatter in PET based on the single scatter simulation approximation

    DEFF Research Database (Denmark)

    Kazantsev, I.G.; Olsen, Ulrik Lund; Poulsen, Henning Friis


We investigate the idealized mathematical model of single scatter in PET for a detector system possessing excellent energy resolution. The model has the form of integral transforms estimating the distribution of photons undergoing a single Compton scattering with a certain angle. The total single scatter is interpreted as the volume integral over scatter points that constitute a rotation body with a football shape, while single scattering with a certain angle is evaluated as the surface integral over the boundary of the rotation body. The equations for total and sample single scatter calculations are derived using a single scatter simulation approximation. We show that the three-dimensional slice-by-slice filtered backprojection algorithm is applicable for scatter data inversion provided that the attenuation map is assumed to be constant. The results of the numerical experiments are presented.

  9. Wave propagation simulation in the upper core of sodium-cooled fast reactors using a spectral-element method for heterogeneous media (United States)

    Nagaso, Masaru; Komatitsch, Dimitri; Moysan, Joseph; Lhuillier, Christian


    The ASTRID project, a French fourth-generation sodium-cooled nuclear reactor, is currently under development by the French Alternative Energies and Atomic Energy Commission (CEA). In this project, the development of techniques for monitoring the reactor during operation has been identified as a major issue for enhancing plant safety. Ultrasonic measurement techniques (e.g. thermometry, visualization of internal objects) are regarded as powerful inspection tools for sodium-cooled fast reactors (SFRs), including ASTRID, owing to the opacity of liquid sodium. Inside the sodium cooling circuit, the medium becomes heterogeneous because of the complex flow state, especially during operation, and the effect of this heterogeneity on acoustic propagation is not negligible. It is therefore necessary to carry out verification experiments for the development of component technologies, but such experiments using liquid sodium tend to be relatively large-scale. This is why numerical simulation methods are essential for preceding real experiments or supplementing the limited number of experimental results. Though various numerical methods have been applied to wave propagation in liquid sodium, a method verified for three-dimensional heterogeneity has been lacking. Moreover, since the reactor core region is a complex acousto-elastic coupled domain, it has also been difficult to simulate such problems with conventional methods. The objective of this study is to address these two points by applying the three-dimensional spectral-element method. In this paper, our initial results of a three-dimensional simulation study on a heterogeneous medium (the first point) are shown. To represent the heterogeneity of the liquid sodium, a four-dimensional temperature field (three spatial dimensions and one temporal dimension) calculated by computational fluid dynamics (CFD) with large-eddy simulation was applied instead of the conventional approach (i.e. a Gaussian random field).
This three-dimensional numerical

  10. Using a Competitive Approach to Improve Military Simulation Artificial Intelligence Design

    National Research Council Canada - National Science Library

    Stoykov, Sevdalin


    ...) design can lead to improvement of the AI solutions used in military simulations. To demonstrate the potential of the competitive approach, ORTS, a real-time strategy game engine, and its competition setup are used...

  11. Simulator-based Transesophageal Echocardiographic Training with Motion Analysis A Curriculum-based Approach

    NARCIS (Netherlands)

    Matyal, Robina; Mitchell, John D.; Hess, Philip E.; Chaudary, Bilal; Bose, Ruma; Jainandunsing, Jayant S.; Wong, Vanessa; Mahmood, Feroze

    Background: Transesophageal echocardiography (TEE) is a complex endeavor involving both motor and cognitive skills. Current training requires extended time in the clinical setting. Application of an integrated approach for TEE training including simulation could facilitate acquisition of skills and

  12. A derivative-free approach for a simulation-based optimization problem in healthcare


    Lucidi, Stefano; Maurici, Massimo; Paulon, Luca; Rinaldi, Francesco; Roma, Massimo


    In this work, a simulation-based optimization model is considered in the framework of the management of hospital services. Given specific parameters which describe the hospital setting, the simulation model aims at reproducing the hospital processes and evaluating their efficiency. The use of a simulation-based optimization approach is necessary since the model cannot be expressed as a closed-form function. In order to obtain the optimal setting, we combine a derivative-free optimization meth...

  13. A Systemic-Constructivist Approach to the Facilitation and Debriefing of Simulations and Games (United States)

    Kriz, Willy Christian


    This article introduces some basic concepts of a systemic-constructivist perspective. These show that gaming simulation corresponds closely to a systemic-constructivist approach to learning and instruction. Some quality aspects of facilitating and debriefing simulation games are described from a systemic-constructivist point of view. Finally, a…

  14. Toward Simulating Realistic Pursuit-Evasion Using a Roadmap-Based Approach

    KAUST Repository

    Rodriguez, Samuel


    In this work, we describe an approach for modeling and simulating group behaviors for pursuit-evasion that uses a graph-based representation of the environment and integrates multi-agent simulation with roadmap-based path planning. We demonstrate the utility of this approach for a variety of scenarios including pursuit-evasion on terrains, in multi-level buildings, and in crowds. © 2010 Springer-Verlag Berlin Heidelberg.

  15. Statistical Approaches to Aerosol Dynamics for Climate Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Wei


    In this work, we introduce two general non-parametric regression analysis methods for errors-in-variables (EIV) models: compound regression and constrained regression. It is shown that these approaches are equivalent to each other and to the general parametric structural modeling approach. The advantages of these methods lie in their intuitive geometric representations, their distribution-free nature, and their ability to offer a practical solution when the ratio of the error variances is unknown. Each includes the classic non-parametric regression methods of ordinary least squares, geometric mean regression, and orthogonal regression as special cases. Both methods can be readily generalized to multiple linear regression with two or more random regressors.
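One of the special cases named above, geometric mean regression, is easy to sketch next to ordinary least squares; the toy data below are invented for illustration:

```python
import math

def ols_slope(x, y):
    """Ordinary least squares slope (error assumed only in y)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def gmr_slope(x, y):
    """Geometric mean regression: slope = sign(corr) * sd(y)/sd(x),
    treating x and y symmetrically (both measured with error)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return math.copysign(math.sqrt(syy / sxx), sxy)

x = [1.0, 2.0, 3.0, 4.0, 5.0]  # toy regressor, measured with error
y = [1.2, 1.9, 3.2, 3.8, 5.1]  # toy response, also measured with error
print(ols_slope(x, y), gmr_slope(x, y))
```

A pleasant sanity check on the symmetric treatment is that regressing y on x and x on y with GMR gives reciprocal slopes, which is not true of OLS unless the correlation is perfect.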


  16. Spectral Classification of Galaxies at 0.5 <= z <= 1 in the CDFS: The Artificial Neural Network Approach

    International Nuclear Information System (INIS)

    Teimoorinia, H.


    The aim of this work is to combine spectral energy distribution (SED) fitting with artificial neural network techniques to assign spectral characteristics to a sample of galaxies at 0.5 < z < 1. The sample is selected from the spectroscopic campaign of the ESO/GOODS-South field, with 142 sources having photometric data from the GOODS-MUSIC catalog covering bands between ∼0.4 and 24 μm in 10-13 filters. We use the CIGALE code to fit photometric data to Maraston's synthesis spectra to derive mass, specific star formation rate, and age, as well as the best SED of the galaxies. We use the spectral models presented by Kinney et al. as targets in the wavelength interval ∼1200-7500 Å. Then a series of neural networks are trained, with average performance ∼90%, to classify the best SED in a supervised manner. We consider the effects of the prominent features of the best SED on the performance of the trained networks and also test networks on the galaxy spectra of Coleman et al., which have a lower resolution than the target models. In this way, we conclude that the trained networks take into account all the features of the spectra simultaneously. Using the method, 105 out of 142 galaxies of the sample are classified with high significance. The locus of the classified galaxies in the three graphs of the physical parameters of mass, age, and specific star formation rate appears consistent with the morphological characteristics of the galaxies.

  17. Soil moisture simulations using two different modelling approaches

    Czech Academy of Sciences Publication Activity Database

    Šípek, Václav; Tesař, Miroslav


    Roč. 64, 3-4 (2013), s. 99-103 ISSN 0006-5471 R&D Projects: GA AV ČR IAA300600901; GA ČR GA205/08/1174 Institutional research plan: CEZ:AV0Z20600510 Keywords : soil moisture modelling * SWIM model * box modelling approach Subject RIV: DA - Hydrology ; Limnology

  18. An efficient numerical approach to electrostatic microelectromechanical system simulation

    International Nuclear Information System (INIS)

    Pu, Li


    Computational analysis of electrostatic microelectromechanical systems (MEMS) requires an electrostatic analysis to compute the electrostatic forces acting on micromechanical structures and a mechanical analysis to compute the deformation of micromechanical structures. Typically, the mechanical analysis is performed on an undeformed geometry. However, the electrostatic analysis is performed on the deformed position of microstructures. In this paper, a new efficient approach to self-consistent analysis of electrostatic MEMS in the small deformation case is presented. In this approach, when the microstructures undergo small deformations, the surface charge densities on the deformed geometry can be computed without updating the geometry of the microstructures. This algorithm is based on the linear mode shapes of a microstructure as basis functions. A boundary integral equation for the electrostatic problem is expanded into a Taylor series around the undeformed configuration, and a new coupled-field equation is presented. This approach is validated by comparing its results with the results available in the literature and ANSYS solutions, and shows attractive features comparable to ANSYS. (general)

  19. Application of cellular automata approach for cloud simulation and rendering

    International Nuclear Information System (INIS)

    Christopher Immanuel, W.; Paul Mary Deborrah, S.; Samuel Selvaraj, R.


    Current techniques for creating clouds in games and other real-time applications produce static, homogeneous clouds. These clouds, while viable for real-time applications, do not exhibit the organic feel that clouds in nature exhibit. Clouds generated with the cellular automata approach, when viewed over a period of time, are able to deform their initial shape and move in a more organic and dynamic way, and in the future this cloud-shape technology should allow even more cloud shapes to be created in real time under additional forces. Clouds are an essential part of any computer model of a landscape or an animation of an outdoor scene, and a realistic animation of clouds is also important for creating scenes for flight simulators, movies, games, and other applications. Our goal was to create a realistic animation of clouds.
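A simplified boolean cellular automaton in the style of Nagel-Raschke/Dobashi, a common basis for CA cloud animation (the paper's exact rules are not given, so the rules below are an illustrative assumption), can be sketched as three interacting bit fields: humidity, activation, and cloud.

```python
import random

def step(hum, act, cld):
    """One update of a boolean cloud CA: hum = vapour present,
    act = phase transition occurring, cld = visible cloud cell."""
    n = len(hum)
    def any_act_neighbor(i, j):
        return any(act[(i + di) % n][(j + dj) % n]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))
    new_hum = [[hum[i][j] and not act[i][j] for j in range(n)] for i in range(n)]
    new_cld = [[cld[i][j] or act[i][j] for j in range(n)] for i in range(n)]
    new_act = [[not act[i][j] and hum[i][j] and any_act_neighbor(i, j)
                for j in range(n)] for i in range(n)]
    return new_hum, new_act, new_cld

rng = random.Random(3)
n = 16
hum = [[rng.random() < 0.4 for _ in range(n)] for _ in range(n)]  # scattered vapour
act = [[False] * n for _ in range(n)]
act[n // 2][n // 2] = True            # seed one phase transition
cld = [[False] * n for _ in range(n)]
for _ in range(8):
    hum, act, cld = step(hum, act, cld)
print(sum(map(sum, cld)))  # number of cloud cells after 8 steps
```

Adding stochastic humidity replenishment, a cloud-extinction rule, and wind advection of the fields yields the deforming, drifting clouds the abstract describes; rendering then maps cloud density to opacity.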


    Directory of Open Access Journals (Sweden)

    Antonie Van Rensburg



    ENGLISH ABSTRACT: To manage problems is to try to cope with a flux of interacting events and ideas which unrolls through time, with the manager trying to improve situations seen as problematical, or at least as less than perfect. The ability to manage or solve these problems depends on the skill of the problem solver in analysing them. This article introduces and discusses a proposed methodology for analysing real-world problems in order to construct valid simulation models.

    AFRIKAANSE OPSOMMING (translated): Managers try to manage or improve problem situations by understanding and handling a flood of dynamic, interacting events. The success of managing or solving these problems depends on the expertise of the problem solver in analysing them. The article discusses a proposed approach to the analysis of problems so that simulation models can be constructed from them.

  1. Comparative evaluation of photovoltaic MPP trackers: A simulated approach

    Directory of Open Access Journals (Sweden)

    Barnam Jyoti Saharia


    This paper makes a comparative assessment of three popular maximum power point tracking (MPPT) algorithms used in photovoltaic power generation. A 120 Wp PV module, connected to a suitable resistive load through a boost converter, is taken as the reference for the study. Two profiles, varying solar insolation at fixed temperature and varying temperature at fixed solar insolation, are used to test the tracking efficiency of three MPPT algorithms based on the perturb and observe (P&O), fuzzy logic, and neural network techniques. MATLAB/SIMULINK simulation software is used for the assessment, and the results indicate that the fuzzy logic-based tracker tracks variations in both solar insolation and temperature profiles more effectively than the P&O and neural network-based techniques.
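The P&O technique compared in the paper can be sketched in a few lines. The single-peak PV power curve below is a hypothetical stand-in for the 120 Wp module model, and the perturbation step is arbitrary:

```python
def pv_power(v):
    """Toy PV power curve with a single maximum near v = 16 V (hypothetical)."""
    return max(0.0, v * (7.0 - 7.0 * (v / 21.0) ** 8))

def perturb_and_observe(v0=12.0, dv=0.2, iters=200):
    """Classic P&O hill climbing: keep perturbing the operating voltage
    in the direction that last increased the measured power."""
    v, step = v0, dv
    p_prev = pv_power(v)
    for _ in range(iters):
        v += step
        p = pv_power(v)
        if p < p_prev:
            step = -step          # power dropped: reverse the perturbation
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(v_mpp)  # settles into a small oscillation near the maximum power point
```

The steady-state oscillation around the peak visible here is exactly the drawback that motivates the fuzzy logic and neural network trackers assessed in the paper, which adapt the step size instead of using a fixed perturbation.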

  2. Battery Performance Modelling ad Simulation: a Neural Network Based Approach (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro


    This project developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the performance degradation of the Cluster II battery system. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the sun and the earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise battery lifetime, determining the future best charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five silver-cadmium batteries on board Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, which is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections for new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  3. A simulated annealing approach for redesigning a warehouse network problem (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia


    Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure, and the opportunity to take advantage of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under the current economic conditions. This paper develops a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is the consideration of capacity constraints for existing warehouses. A simulated annealing algorithm is proposed to tackle the model. The numerical solution showed that the proposed model and solution method are practical.
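A minimal sketch of the proposed solution method, simulated annealing over warehouse open/close decisions, is shown below on an invented toy instance (it is not the paper's model, which also carries plant capacity and echelon constraints):

```python
import math
import random

# Hypothetical toy instance: 4 candidate warehouses, 6 customers.
FIXED = [8.0, 6.0, 9.0, 5.0]     # fixed cost of keeping each warehouse open
SERVE = [[2, 9, 7, 8], [3, 8, 6, 9], [9, 2, 8, 4],
         [8, 3, 9, 3], [7, 8, 2, 6], [8, 9, 3, 7]]   # customer service costs

def total_cost(open_set):
    """Fixed costs of open warehouses plus cheapest service per customer."""
    if not open_set:
        return float("inf")
    return (sum(FIXED[w] for w in open_set)
            + sum(min(row[w] for w in open_set) for row in SERVE))

def anneal(t0=10.0, cooling=0.995, steps=4000, seed=7):
    """Simulated annealing: flip one warehouse open/closed per move and
    accept uphill moves with probability exp(-dC/T) as T cools."""
    rng = random.Random(seed)
    state = {0, 1, 2, 3}                     # start with everything open
    cost = total_cost(state)
    best_state, best_cost, t = set(state), cost, t0
    for _ in range(steps):
        cand = state ^ {rng.randrange(len(FIXED))}   # flip one warehouse
        c = total_cost(cand)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            state, cost = cand, c
            if cost < best_cost:
                best_state, best_cost = set(state), cost
        t *= cooling
    return best_state, best_cost

state, cost = anneal()
print(sorted(state), cost)
```

For this four-warehouse toy the 15 possible configurations can of course be enumerated directly (the optimum cost is 38); annealing becomes worthwhile once the number of candidate sites makes 2^n enumeration infeasible.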

  4. A spectral geometric model for Compton single scatter in PET based on the single scatter simulation approximation (United States)

    Kazantsev, I. G.; Olsen, U. L.; Poulsen, H. F.; Hansen, P. C.


    We investigate the idealized mathematical model of single scatter in PET for a detector system possessing excellent energy resolution. The model has the form of integral transforms estimating the distribution of photons undergoing a single Compton scattering with a certain angle. The total single scatter is interpreted as the volume integral over scatter points that constitute a rotation body with a football shape, while single scattering with a certain angle is evaluated as the surface integral over the boundary of the rotation body. The equations for total and sample single scatter calculations are derived using a single scatter simulation approximation. We show that the three-dimensional slice-by-slice filtered backprojection algorithm is applicable for scatter data inversion provided that the attenuation map is assumed to be constant. The results of the numerical experiments are presented.

  5. Novel Approaches to Spectral Properties of Correlated Electron Materials: From Generalized Kohn-Sham Theory to Screened Exchange Dynamical Mean Field Theory (United States)

    Delange, Pascal; Backes, Steffen; van Roekeghem, Ambroise; Pourovskii, Leonid; Jiang, Hong; Biermann, Silke


    The most intriguing properties of emergent materials are typically consequences of highly correlated quantum states of their electronic degrees of freedom. Describing such materials from first principles remains a challenge for modern condensed matter theory. Here, we review, apply and discuss novel approaches to the spectral properties of correlated electron materials, assessing the present-day predictive capabilities of electronic structure calculations. In particular, we focus on the recent Screened Exchange Dynamical Mean-Field Theory scheme and its relation to generalized Kohn-Sham theory. These concepts are illustrated for the transition metal pnictide BaCo2As2 and for elemental zinc and cadmium.

  6. Solitary-wave emission fronts, spectral chirping, and coupling to beam acoustic modes in RPIC simulation of SRS backscatter.

    Energy Technology Data Exchange (ETDEWEB)

    DuBois, D. F. (Donald F.); Yin, L. (Lin); Daughton, W. S. (William S.); Bezzerides, B. (Bandel); Dodd, E. S. (Evan S.); Vu, H. X. (Hoanh X.)


    Detailed diagnostics of quasi-2D RPIC simulations of backward stimulated Raman scattering (BSRS) from single speckles under putative NIF conditions reveal a complex spatio-temporal behavior. The scattered light consists of localized packets, tens of microns in width, traveling toward the laser at an appreciable fraction of the speed of light. Sub-picosecond reflectivity pulses occur as these packets leave the system. The Langmuir wave (LW) activity consists of a front traveling with the light packets, with a wake of free LWs traveling in the laser direction. The parametric coupling occurs in the front, where the scattered light and the LW overlap and are strongest. As the light leaves the plasma, the LW quickly decays, liberating its trapped electrons. The high-frequency part of the |n_e(k,ω)|² spectrum, where n_e is the electron density fluctuation, consists of a narrow streak or straight line whose slope is the velocity of the parametric front. The time dependence of |n_e(k,t)|² shows that during each pulse the most intense value of k also 'chirps' to higher values, consistent with the k excursions seen in the |n_e(k,ω)|² spectrum. But k does not always return, in subsequent pulses, to the original parametrically matched value, indicating that, in spite of side loss, the electron distribution function does not return to its original Maxwellian form. Liberated pulses of hot electrons result in downstream bump-on-tail distributions that excite LWs and beam acoustic modes deeper in the plasma. The frequency-broadened spectra are consistent with Thomson scatter spectra observed in TRIDENT single-hot-spot experiments in the high-kλ_D, trapping regime. Further details, including a comparison of results from full PIC simulations and movies of the spatio-temporal behavior, will be given in the poster by L. Yin et al.

  7. Angular spectrum approach for fast simulation of pulsed non-linear ultrasound fields

    DEFF Research Database (Denmark)

    Du, Yigang; Jensen, Henrik; Jensen, Jørgen Arendt


    The paper presents an Angular Spectrum Approach (ASA) for simulating pulsed non-linear ultrasound fields. The source of the ASA is generated by Field II, which can simulate array transducers of any arbitrary geometry and focusing. The non-linear ultrasound simulation program Abersim is used as the reference. A linear array transducer with 64 active elements is simulated by both Field II and Abersim. The excitation is a 2-cycle sine wave with a frequency of 5 MHz. The second harmonic field in the time domain is simulated using ASA. Pulse inversion is used in the Abersim simulation to remove the fundamental and keep the second harmonic field, since Abersim simulates non-linear fields with all harmonic components. ASA and Abersim are compared for the pulsed fundamental and second harmonic fields in the time domain at depths of 30 mm, 40 mm (focal depth) and 60 mm. Full widths at -6 dB (FWHM) are f0...
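The core ASA propagation step is a spatial Fourier transform, multiplication by a propagation phase factor, and an inverse transform. The following 1-D, linear, monochromatic sketch (naive DFTs and invented parameters; nothing here comes from Field II or Abersim) illustrates the idea:

```python
import cmath

def dft(u):
    n = len(u)
    return [sum(u[j] * cmath.exp(-2j * cmath.pi * m * j / n) for j in range(n))
            for m in range(n)]

def idft(U):
    n = len(U)
    return [sum(U[m] * cmath.exp(2j * cmath.pi * m * j / n) for m in range(n)) / n
            for j in range(n)]

def angular_spectrum_propagate(u0, dx, k, z):
    """Propagate a 1-D monochromatic field u0(x) a distance z.

    Each spatial-frequency component kx advances with the phase factor
    exp(i*z*sqrt(k^2 - kx^2)); evanescent components (|kx| > k) decay."""
    n = len(u0)
    U = dft(u0)
    for m in range(n):
        mm = m if m <= n // 2 else m - n        # signed frequency index
        kx = 2 * cmath.pi * mm / (n * dx)
        kz = cmath.sqrt(k * k - kx * kx)        # complex for evanescent waves
        U[m] *= cmath.exp(1j * z * kz)
    return idft(U)

# A uniform aperture behaves as a plane wave: it just picks up exp(i*k*z).
k = 2 * cmath.pi / 0.3e-3        # wavenumber of 5 MHz sound at c = 1500 m/s
u0 = [1.0 + 0j] * 16
uz = angular_spectrum_propagate(u0, dx=0.1e-3, k=k, z=0.03)
```

The pulsed and non-linear aspects of the paper (temporal spectra, second-harmonic generation) sit on top of this single-frequency kernel.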

  8. Simulation of breaking waves using the high-order spectral method with laboratory experiments: wave-breaking energy dissipation (United States)

    Seiffert, Betsy R.; Ducrozet, Guillaume


    We examine the implementation of a wave-breaking mechanism into a nonlinear potential flow solver. The success of the mechanism is studied by implementing it into the numerical model HOS-NWT, a computationally efficient, open-source code that solves for the free surface in a numerical wave tank using the high-order spectral (HOS) method. Once the breaking mechanism is validated, it can be implemented into other nonlinear potential flow models. To solve for wave breaking, first a wave-breaking onset parameter is identified, and then a method for computing the energy loss associated with wave breaking is determined. Wave-breaking onset is calculated using a breaking criterion introduced by Barthelemy et al. (J Fluid Mech, submitted) and validated with the experiments of Saket et al. (J Fluid Mech 811:642-658, 2017). Wave-breaking energy dissipation is calculated by adding a viscous diffusion term computed using an eddy viscosity parameter introduced by Tian et al. (Phys Fluids 20(6):066604, 2008; Phys Fluids 24(3), 2012), which is estimated from the pre-breaking wave geometry. A set of two-dimensional experiments was conducted to validate the implemented wave-breaking mechanism at large scale. Breaking waves are generated by the traditional methods of evolution of focused waves and modulational instability, as well as irregular breaking waves with a range of primary frequencies, providing a wide range of breaking conditions to validate the solver. Furthermore, adjustments to the method of application and to the coefficient of the viscous diffusion term made a negligible difference, supporting the robustness of the eddy viscosity parameter. The model accurately predicts the surface elevation and the corresponding frequency/amplitude spectrum, as well as the energy dissipation, when compared with the experimental measurements. This suggests the model is capable of calculating wave-breaking onset and energy dissipation

  9. Simulation of breaking waves using the high-order spectral method with laboratory experiments: Wave-breaking onset (United States)

    Seiffert, Betsy R.; Ducrozet, Guillaume; Bonnefoy, Félicien


    This study investigates a wave-breaking onset criterion to be implemented in the non-linear potential flow solver HOS-NWT. The model is a computationally efficient, open-source code, which solves for the free surface in a numerical wave tank using the High-Order Spectral (HOS) method. The goal of this study is to determine the best method for identifying the onset of random single and multiple breaking waves over a large domain at the exact time they occur. To identify breaking waves, a breaking onset criterion based on the ratio of local energy flux velocity to local crest velocity, introduced by Barthelemy et al. (2017), is selected. The breaking parameter is uniquely applied in the numerical model in that the breaking onset criterion ratio is calculated not only at the location of the wave crest but at every point in the domain and at every time step. This allows the model to detect the onset of a breaking wave the moment it happens, without knowing anything about the wave a priori. Applying the breaking criterion at every point in the domain and at every time step requires the phase velocity to be calculated instantaneously everywhere in the domain at every time step. This is achieved by calculating the instantaneous phase velocity using the Hilbert transform and the dispersion relation. A comparison with more traditional crest-tracking techniques shows that the phase velocity calculated via the Hilbert transform at the location of the breaking wave crest provides a good approximation of the crest velocity. The ability of the selected wave-breaking criterion to predict single and multiple breaking events in two dimensions is validated by a series of large-scale experiments. Breaking waves are generated by energy-focusing and modulational instability methods, with a wide range of primary frequencies. Steep irregular waves which lead to breaking waves, and irregular waves with an energy-focusing wave superimposed, are also generated. This set of
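The phase-velocity step described above can be sketched in isolation: build the analytic signal with a DFT-based Hilbert transform, take the instantaneous frequency from the phase derivative at the crest, and convert it to a crest speed with the linear deep-water dispersion relation c = g/omega. This is a toy illustration on a monochromatic wave, not HOS-NWT code:

```python
import cmath
import math

def analytic_signal(x):
    """Discrete analytic signal: double the positive-frequency half of the
    DFT and zero the negative half (DC and Nyquist bins are unchanged)."""
    n = len(x)
    X = [sum(x[j] * cmath.exp(-2j * cmath.pi * m * j / n) for j in range(n))
         for m in range(n)]
    for m in range(n):
        if m == 0 or (n % 2 == 0 and m == n // 2):
            continue
        X[m] = 2 * X[m] if m < n / 2 else 0
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * j / n) for m in range(n)) / n
            for j in range(n)]

def crest_speed(eta, dt, g=9.81):
    """Instantaneous frequency at the crest via a centred phase derivative,
    then the deep-water dispersion relation c = g/omega."""
    z = analytic_signal(eta)
    j = max(range(1, len(eta) - 1), key=lambda i: eta[i])  # interior crest
    omega = cmath.phase(z[j + 1] / z[j - 1]) / (2 * dt)
    return g / omega

# Monochromatic 2 rad/s wave sampled over exactly four periods.
omega0 = 2.0
dt = 2 * math.pi / omega0 / 32          # 32 samples per period
eta = [math.cos(omega0 * i * dt) for i in range(128)]
c = crest_speed(eta, dt)                # close to g/omega0 for this toy wave
```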

  10. A novel approach to detect respiratory phases from pulmonary acoustic signals using normalised power spectral density and fuzzy inference system. (United States)

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S


    Monitoring respiration is important in several medical applications. One such application is respiratory rate monitoring in patients with sleep apnoea, whose respiratory rate is irregular compared with that of controls. Respiratory phase detection is required for proper monitoring of respiration in these patients. The aim of this study was to develop a model to detect the respiratory phases present in pulmonary acoustic signals and to evaluate the model's performance in detecting these phases. The normalised averaged power spectral density for each frame and the change in normalised averaged power spectral density between adjacent frames were fuzzified, and fuzzy rules were formulated. The fuzzy inference system (FIS) was developed with both the Mamdani and the Sugeno method. To evaluate the performance of the two methods, the correlation coefficient and the root mean square error (RMSE) were calculated. The strength of the correlation was found to be r = 0.9892 for the Mamdani method and r = 0.9964 for the Sugeno method; the corresponding RMSE values were 0.0853 and 0.0817. The correlation coefficients and RMSE of the proposed fuzzy models in detecting the respiratory phases reveal that the Sugeno method performs better than the Mamdani method. © 2014 John Wiley & Sons Ltd.
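The feature-extraction stage described above (per-frame averaged power spectral density, normalised across frames) can be sketched as follows; the frame length and toy signal are invented, and the fuzzification and FIS stages are omitted:

```python
import cmath
import math

def frame_avg_psd(signal, frame_len):
    """Mean periodogram power of each non-overlapping frame."""
    powers = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        n = len(frame)
        spectrum = [sum(frame[j] * cmath.exp(-2j * cmath.pi * m * j / n)
                        for j in range(n)) for m in range(n)]
        powers.append(sum(abs(X) ** 2 for X in spectrum) / n ** 2)
    return powers

def normalise(powers):
    """Scale frame powers to [0, 1] by the loudest frame."""
    peak = max(powers)
    return [p / peak for p in powers]

# Toy breath-like signal: one loud frame between two near-silent frames.
loud = [math.sin(0.4 * i) for i in range(64)]
quiet = [0.01 * math.sin(0.4 * i) for i in range(64)]
npsd = normalise(frame_avg_psd(quiet + loud + quiet, frame_len=64))
```

The second fuzzified input, the change between adjacent frames, is then simply the difference npsd[i] - npsd[i-1].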

  11. Frequency domain Monte Carlo simulation method for cross power spectral density driven by periodically pulsed spallation neutron source using complex-valued weight Monte Carlo

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro


    Highlights: • The cross power spectral density in an ADS has correlated and uncorrelated components. • A frequency domain Monte Carlo method to calculate the uncorrelated one is developed. • The method solves the Fourier transformed transport equation. • The method uses complex-valued weights to solve the equation. • The new method reproduces well the CPSDs calculated with the time domain MC method. - Abstract: In an accelerator driven system (ADS), pulsed spallation neutrons are injected at a constant frequency. The cross power spectral density (CPSD), which can be used for monitoring the subcriticality of the ADS, is composed of correlated and uncorrelated components. The uncorrelated component is described by a series of Dirac delta functions that occur at integer multiples of the pulse repetition frequency. In the present paper, a Monte Carlo method that solves the Fourier transformed neutron transport equation with a periodically pulsed neutron source term has been developed to obtain the CPSD in ADSs. Since the Fourier transformed flux is a complex-valued quantity, the Monte Carlo method introduces complex-valued weights to solve the Fourier transformed equation. The Monte Carlo algorithm used in this paper is similar to one previously developed by the author to calculate the neutron noise caused by cross section perturbations. The newly developed Monte Carlo algorithm is benchmarked against the conventional time domain Monte Carlo simulation technique. The CPSDs of a one-dimensional infinite slab are obtained both with the newly developed frequency domain Monte Carlo method and with the conventional time domain Monte Carlo method, and the frequency domain results agree well with the time domain ones. The higher order mode effects on the CPSD in an ADS with a periodically pulsed neutron source are also discussed.
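The complex-weight idea can be illustrated on a much simpler frequency-domain estimate than the transport equation: each sampled history carries the weight exp(iωT), and the weights are averaged. For exponentially distributed times the answer λ/(λ - iω) is known in closed form, which makes the toy verifiable. This is an invented example, not the paper's algorithm:

```python
import cmath
import random

def complex_weight_mc(lam, omega, n, seed=1):
    """Monte Carlo estimate of E[exp(i*omega*T)] for T ~ Exp(lam).

    Each history carries a complex-valued weight exp(i*omega*t); the mean
    weight estimates the frequency-domain quantity lam / (lam - i*omega)."""
    rng = random.Random(seed)
    total = 0j
    for _ in range(n):
        t = rng.expovariate(lam)            # sampled flight/decay time
        total += cmath.exp(1j * omega * t)  # complex-valued weight
    return total / n

est = complex_weight_mc(lam=1.0, omega=1.0, n=200_000)
exact = 1.0 / (1.0 - 1j)  # = 0.5 + 0.5j
```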

  12. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories (United States)

    Ng, Hok Kwan; Sridhar, Banavar


    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization, using numbers of CPUs ranging from 80 to 10,240, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential gain from parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET so that the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field, can be used with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare computational efficiency and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  13. Simulating the directional, spectral and textural properties of a large-scale scene at high resolution using a MODIS BRDF product (United States)

    Rengarajan, Rajagopalan; Goodenough, Adam A.; Schott, John R.


    Many remote sensing applications rely on simulated scenes to perform complex interaction and sensitivity studies that are not possible with real-world scenes. These applications include the development and validation of new and existing algorithms, understanding of the sensor's performance prior to launch, and trade studies to determine ideal sensor configurations. The accuracy of these applications is dependent on the realism of the modeled scenes and sensors. The Digital Image and Remote Sensing Image Generation (DIRSIG) tool has been used extensively to model the complex spectral and spatial texture variation expected in large city-scale scenes and natural biomes. In the past, material properties that were used to represent targets in the simulated scenes were often assumed to be Lambertian in the absence of hand-measured directional data. However, this assumption presents a limitation for new algorithms that need to recognize the anisotropic behavior of targets. We have developed a new method to model and simulate large-scale high-resolution terrestrial scenes by combining bi-directional reflectance distribution function (BRDF) products from Moderate Resolution Imaging Spectroradiometer (MODIS) data, high spatial resolution data, and hyperspectral data. The high spatial resolution data is used to separate materials and add textural variations to the scene, and the directional hemispherical reflectance from the hyperspectral data is used to adjust the magnitude of the MODIS BRDF. In this method, the shape of the BRDF is preserved since it changes very slowly, but its magnitude is varied based on the high resolution texture and hyperspectral data. In addition to the MODIS derived BRDF, target/class specific BRDF values or functions can also be applied to features of specific interest. The purpose of this paper is to discuss the techniques and the methodology used to model a forest region at a high resolution. The simulated scenes using this method for varying
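The magnitude adjustment described above preserves the angular shape of the MODIS BRDF while forcing its hemispherical integral to match the reflectance derived from the hyperspectral data. A minimal sketch, with invented BRDF samples and quadrature weights:

```python
def rescale_brdf(brdf_samples, weights, target_dhr):
    """Scale BRDF samples so the (quadrature-weighted) directional
    hemispherical reflectance matches target_dhr, preserving the
    angular shape of the BRDF."""
    dhr = sum(b * w for b, w in zip(brdf_samples, weights))
    scale = target_dhr / dhr
    return [b * scale for b in brdf_samples]

# Illustrative coarse quadrature over view zenith angles (made-up values).
brdf = [0.30, 0.26, 0.22, 0.18]   # MODIS-like BRDF samples
w = [0.1, 0.3, 0.4, 0.2]          # quadrature weights summing to 1
adjusted = rescale_brdf(brdf, w, target_dhr=0.35)
```

In the paper the target value comes from the hyperspectral directional hemispherical reflectance, modulated per pixel by the high-resolution texture.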

  14. Game-Enhanced Simulation as an Approach to Experiential Learning in Business English (United States)

    Punyalert, Sansanee


    This dissertation aims to integrate various learning approaches, i.e., multiple literacies, experiential learning, game-enhanced learning, and global simulation, into an extracurricular module, in which it remodels traditional ways of teaching input, specifically, the lexical- and grammatical-only approaches of business English at a private…

  15. KMsim: A Meta-modelling Approach and Environment for Creating Process-Oriented Knowledge Management Simulations.

    NARCIS (Netherlands)

    Anjewierden, Anjo Allert; Shostak, I.; Tsjernikova, Irina; de Hoog, Robert; Gómez Perez, A.; Benjamins, V.R.


    This paper presents a new approach to modelling process-oriented knowledge management (KM) and describes a simulation environment (called KMSIM) that embodies the approach. Since the beginning of modelling, researchers have been looking for better and novel ways to model systems and to use

  16. A Cost-Effective Approach to Hardware-in-the-Loop Simulation

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Melters; Hansen, M. R.; Ballebye, M.


    This paper presents an approach for developing cost-effective hardware-in-the-loop (HIL) simulation platforms for use in controller software test and development. The approach is aimed at the many smaller manufacturers of e.g. mobile hydraulic machinery, which often do not have very advanced...

  17. Mechatronics by bond graphs an object-oriented approach to modelling and simulation

    CERN Document Server

    Damić, Vjekoslav


    This book presents a computer-aided approach to the design of mechatronic systems. Its subject is integrated modeling and simulation in a visual computer environment. Since the first edition, the simulation software has changed enormously, becoming more user-friendly and easier to use; a second edition therefore became necessary to take these improvements into account. The modeling is based on a top-down and bottom-up system approach. The mathematical models are generated in the form of differential-algebraic equations and solved using numerical and symbolic algebra methods. The integrated approach developed is applied to mechanical, electrical and control systems, multibody dynamics, and continuous systems.

  18. A Project Management Approach to Using Simulation for Cost Estimation on Large, Complex Software Development Projects (United States)

    Mizell, Carolyn; Malone, Linda


    It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project is analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
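The kind of uncertainty-carrying estimate the paper argues for can be sketched with a plain Monte Carlo over per-task cost distributions. The triangular distributions and task values below are invented placeholders; the paper's model is a calibrated process simulation, not this:

```python
import random

def simulate_project_cost(tasks, n=20_000, seed=7):
    """Monte Carlo cost estimate: each task cost ~ Triangular(low, mode, high).

    Returns the mean total cost plus an empirical 10th-90th percentile band,
    conveying the spread of the estimate rather than a single number."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(n)
    )
    mean = sum(totals) / n
    return mean, totals[n // 10], totals[9 * n // 10]

# Hypothetical work-breakdown tasks, costs in person-months: (low, mode, high).
tasks = [(10, 14, 24), (5, 8, 15), (20, 26, 40)]
mean, p10, p90 = simulate_project_cost(tasks)
```

Reporting the (p10, p90) band alongside the mean is one simple way to convey the early-estimate uncertainty the paper emphasises.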

  19. Theoretical investigations of open-shell systems: 1. Spectral simulation of the 2s2p² (²D) state of B in solid molecular hydrogen (United States)

    Krumrine, Jennifer Rebecca

    This dissertation is concerned in part with the construction of accurate pairwise potentials, based on reliable ab initio potential energy surfaces (PESs), which are fully anisotropic in the sense that multiple PESs are accessible to systems with orientational electronic properties. We have carried out several investigations of B (2s²2p ²P°) with spherical ligands: (1) an investigation of the electronic spectrum of the BAr2 complex, and (2) two related studies of the equilibrium properties and spectral simulation of B embedded in solid pH2. Our investigations suggest that it cannot be assumed that nuclear motion in an open-shell system occurs on a single PES. The 2s2p² ²D work comprises a path integral molecular dynamics investigation of the equilibrium properties of boron trapped in solid para-hydrogen (pH2) and a path integral Monte Carlo spectral simulation. Using fully anisotropic pair potentials, coupling of the electronic and nuclear degrees of freedom is observed and is found to be an essential feature in understanding the behavior and determining the energy of the impure solid, especially in highly anisotropic matrices. We employ the variational Monte Carlo method to further study the behavior of ground-state B embedded in solid pH2. When a boron atom occupies a substitutional site in the lattice, anisotropic distortion of the local lattice plays a minimal role in the energetics. However, when a nearest-neighbor vacancy is present along with the boron impurity, two phenomena are found to influence the behavior of the impure quantum solid: (1) orientation of the 2p orbital to minimize the energy of the impurity, and (2) distortion of the local lattice structure to promote an energetically favorable nuclear configuration. This research was supported by the Joint Program for Atomic, Molecular and Optical Science sponsored by the University of Maryland at College Park and the National Institute of Standards and Technology, and by the U.S. Air Force Office of Scientific

  20. Comparison of two head-up displays in simulated standard and noise abatement night visual approaches (United States)

    Cronn, F.; Palmer, E. A., III


    Situation and command head-up displays were evaluated for both standard and two segment noise abatement night visual approaches in a fixed base simulation of a DC-8 transport aircraft. The situation display provided glide slope and pitch attitude information. The command display provided glide slope information and flight path commands to capture a 3 deg glide slope. Landing approaches were flown in both zero wind and wind shear conditions. For both standard and noise abatement approaches, the situation display provided greater glidepath accuracy in the initial phase of the landing approaches, whereas the command display was more effective in the final approach phase. Glidepath accuracy was greater for the standard approaches than for the noise abatement approaches in all phases of the landing approach. Most of the pilots preferred the command display and the standard approach. Substantial agreement was found between each pilot's judgment of his performance and his actual performance.

  1. A Simulation-Based Approach to Training Operational Cultural Competence (United States)

    Johnson, W. Lewis


    Cultural knowledge and skills are critically important for military operations, emergency response, or any job that involves interaction with a culturally diverse population. However, it is not obvious what cultural knowledge and skills need to be trained, and how to integrate that training with the other training that trainees must undergo. Cultural training needs to be broad enough to encompass both regional (culture-specific) and cross-cultural (culture-general) competencies, yet be focused enough to result in targeted improvements in on-the-job performance. This paper describes a comprehensive instructional development methodology and training technology framework that focuses cultural training on operational needs. It supports knowledge acquisition, skill acquisition, and skill transfer. It supports both training and assessment, and integrates with other aspects of operational skills training. Two training systems will be used to illustrate this approach: the Virtual Cultural Awareness Trainer (VCAT) and the Tactical Dari language and culture training system. The paper also discusses new and emerging capabilities that are integrating cultural competence training more strongly with other aspects of training and mission rehearsal.

  2. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach (United States)

    Chiadamrong, N.; Piyathanavong, V.


    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that run until the difference between subsequent solutions satisfies a pre-determined termination criterion. Its effectiveness is illustrated by an example, which yields near-optimal results with much faster solving times than a conventional simulation-based optimization model. The efficacy of the proposed hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also makes it possible to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  3. Local Interaction Simulation Approach for Fault Detection in Medical Ultrasonic Transducers

    Directory of Open Access Journals (Sweden)

    Z. Hashemiyan


    A new approach is proposed for modelling medical ultrasonic transducers operating in air. The method is based on finite elements and the local interaction simulation approach; the latter leads to significant reductions in computational cost. Transmission and reception properties of the transducer are analysed using in-air reverberation patterns. The proposed approach can help to provide earlier detection of transducer faults and their identification, reducing the risk of misdiagnosis due to poor image quality.

  4. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying


    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
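The parametric idea (optimize a low-dimensional parameterisation of the strategy against a simulated risk-adjusted objective) can be sketched with a one-parameter execution schedule and an empirical CVaR penalty. The price-impact model, parameter grid and penalty weight below are invented for illustration, not the authors' formulation:

```python
import math
import random

def cvar(losses, alpha=0.95):
    """Empirical CVaR: mean of the worst (1 - alpha) fraction of losses."""
    tail = sorted(losses)[int(alpha * len(losses)):]
    return sum(tail) / len(tail)

def execution_cost(theta, rng, shares=1.0, periods=10, impact=0.1, vol=0.05):
    """Cost of an exponential-in-time schedule x_t proportional to
    exp(-theta*t): quadratic temporary impact plus Gaussian price noise."""
    w = [math.exp(-theta * t) for t in range(periods)]
    s = sum(w)
    trades = [shares * wi / s for wi in w]
    return sum(impact * x * x + vol * x * rng.gauss(0.0, 1.0) for x in trades)

def objective(theta, n=4000, penalty=1.0, seed=3):
    """Expected cost plus a CVaR penalty, estimated by simulation; the fixed
    seed gives common random numbers across candidate parameters."""
    rng = random.Random(seed)
    samples = [execution_cost(theta, rng) for _ in range(n)]
    return sum(samples) / n + penalty * cvar(samples)

# Static grid search over the single strategy parameter.
best_theta = min([0.0, 0.25, 0.5, 1.0, 2.0], key=objective)
```

With no drift or permanent impact in this toy, a uniform schedule minimises both the impact term and the noise variance, so the search settles on theta = 0.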

  5. Simulation of a weather radar display for over-water airborne radar approaches (United States)

    Clary, G. R.


    Airborne radar approach (ARA) concepts are being investigated as a part of NASA's Rotorcraft All-Weather Operations Research Program on advanced guidance and navigation methods. This research is being conducted using both piloted simulations and flight test evaluations. For the piloted simulations, a mathematical model of the airborne radar was developed for over-water ARAs to offshore platforms. This simulated flight scenario requires radar simulation of point targets, such as oil rigs and ships, distributed sea clutter, and transponder beacon replies. Radar theory, weather radar characteristics, and empirical data derived from in-flight radar photographs are combined to model a civil weather/mapping radar typical of those used in offshore rotorcraft operations. The resulting radar simulation is realistic and provides the needed simulation capability for ongoing ARA research.

  6. Teaching medical students a clinical approach to altered mental status: simulation enhances traditional curriculum

    Directory of Open Access Journals (Sweden)

    Jeremy D. Sperling


    Introduction: Simulation-based medical education (SBME) is increasingly being utilized for teaching clinical skills in undergraduate medical education. Studies have evaluated the impact of adding SBME to third- and fourth-year curricula; however, very little research has assessed its efficacy for teaching clinical skills in pre-clerkship coursework. To measure the impact of a simulation exercise during a pre-clinical curriculum, a simulation session was added to a pre-clerkship course at our medical school, where the clinical approach to altered mental status (AMS) is traditionally taught using a lecture and an interactive case-based session in a small-group format. The objective was to measure simulation's impact on students' knowledge acquisition, comfort, and perceived competence with regard to the AMS patient. Methods: AMS simulation exercises were added to the lecture and small-group case sessions in June 2010 and 2011. Simulation sessions consisted of two clinical cases using a high-fidelity full-body simulator, followed by a faculty debriefing after each case. Student participation in a simulation session was voluntary. Students who did and did not participate in a simulation session completed a post-test to assess knowledge and a survey to assess comfort and perceived competence in their approach to AMS. Results: A total of 154 students completed the post-test and survey, and 65 (42%) attended a simulation session. Post-test scores were higher in students who attended a simulation session than in those who did not (p<0.001). Students who participated in a simulation session were more comfortable in their overall approach to treating AMS patients (p=0.05). They were also more likely to state that they could articulate a differential diagnosis (p=0.03), know what initial diagnostic tests are needed (p=0.01), and understand what interventions are useful in the first few minutes (p=0.003). Students who participated in a simulation session

  7. Repeated scenario simulation to improve competency in critical care: a new approach for nursing education. (United States)

    Abe, Yukie; Kawahara, Chikako; Yamashina, Akira; Tsuboi, Ryoji


    In Japan, nursing education is being reformed to improve nurses' competency, and interest in the use of simulation-based education for this purpose is increasing. The aim was to examine the effectiveness of simulation-based education in improving the competency of cardiovascular critical care nurses. A training program consisting of lectures, training in cardiovascular procedures, and scenario simulations was conducted with 24 Japanese nurses working at a university hospital. Participants were allocated to 4 groups, each of which visited 4 zones and underwent scenario simulations that included debriefings during and after the simulations. In each zone, the scenario simulation was repeated and participants assessed their own technical skills by scoring their performance on a rubric. Before and after the simulations, participants also completed a survey that used the Teamwork Activity Inventory in Nursing Scale (TAINS) to assess their nontechnical skills. All the groups showed increased rubric scores after the second simulation compared with those obtained after the first simulation, despite differences in the order in which the scenarios were presented. Furthermore, the survey revealed significant increases in scores on the teamwork scale for the subscale items "Attitudes of the superior," "Job satisfaction" (P = .01), and "Confidence as a team member" (P = .004). Our new educational approach of using repeated scenario simulations and the TAINS seemed not only to enhance individual nurses' technical skills in critical care nursing but also to improve their nontechnical skills somewhat.

  8. Broadband ground motion simulation using a paralleled hybrid approach of Frequency Wavenumber and Finite Difference method (United States)

    Chen, M.; Wei, S.


    The serious damage to Mexico City caused by the 1985 Michoacan earthquake, 400 km away, indicates that urban areas may be affected by remote earthquakes. To assess the earthquake risk imposed on urban areas by distant earthquakes, we developed a hybrid Frequency Wavenumber (FK) and Finite Difference (FD) code implemented with MPI, since computing seismic wave propagation from a distant earthquake with a single numerical method (e.g. Finite Difference, Finite Element or Spectral Element) is very expensive. In our approach, we compute the incident wave field (ud) at the boundaries of the excitation box, which surrounds the local structure, using a paralleled FK method (Zhu and Rivera, 2002), and compute the total wave field (u) within the excitation box using a paralleled 2D FD method. We apply a perfectly matched layer (PML) absorbing condition to the diffracted wave field (u-ud). Compared to previous hybrid methods, such as Generalized Ray Theory and Finite Difference (Wen and Helmberger, 1998), Frequency Wavenumber and Spectral Element (Tong et al., 2014), and Direct Solution Method and Spectral Element (Monteiller et al., 2013), our absorbing boundary condition dramatically suppresses the numerical noise. The MPI implementation of our method can greatly speed up the calculation. In addition, our hybrid method has potential use in high-resolution array imaging similar to Tong et al. (2014).
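    The injection-and-absorption idea can be sketched in one dimension: an analytically known incident field ud (standing in for the FK solution) is injected at the edge of a small FD grid, and only the diffracted part u - ud is damped near the far boundary. A simple exponential sponge replaces PML, and all parameters are invented for illustration; this is not the authors' 3D MPI code.

```python
import numpy as np

# 1D sketch of the hybrid injection: the incident field ud (here analytic,
# standing in for the FK solution) enters the FD grid at one edge, and a
# sponge layer (standing in for PML) damps only the diffracted part u - ud.
nx, nt = 400, 900
dx, dt, c = 10.0, 1e-3, 3000.0          # illustrative grid spacing, step, velocity
r = (c * dt / dx) ** 2                  # squared Courant number (0.09, stable)

def incident(x, t):
    """Right-going Ricker-like pulse, 20 Hz centre frequency."""
    tau = t - x / c - 0.05
    return (1.0 - 2.0 * (np.pi * 20.0 * tau) ** 2) * np.exp(-(np.pi * 20.0 * tau) ** 2)

x = np.arange(nx) * dx
sponge = np.ones(nx)
sponge[-40:] = np.exp(-0.015 * np.arange(40) ** 2)  # damping taper at the far edge

u_prev, u_curr = incident(x, -dt), incident(x, 0.0)
peak = 0.0
for it in range(1, nt):
    t = it * dt
    u_next = u_curr.copy()
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = incident(0.0, t)          # inject the incident field at the box edge
    ud = incident(x, t)
    u_next = ud + sponge * (u_next - ud)  # absorb only the diffracted field u - ud
    u_prev, u_curr = u_curr, u_next
    peak = max(peak, abs(u_curr[nx // 2]))
```

    In a homogeneous medium the FD field tracks the incident field, so the pulse passes the centre of the grid with amplitude close to one; heterogeneity inside the box would generate a nonzero diffracted field, which the sponge then removes at the boundary.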

  9. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]; Letang, J.-M. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)


    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (the form factor and incoherent scattering function approximations, for Rayleigh and Compton scattering respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results as the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, on a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results.
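    A stripped-down sketch of the two stages, under heavy assumptions: an invented 2D slab geometry, a single attenuation coefficient, and an isotropic phase function in place of the form-factor and Klein-Nishina models used in the actual method.

```python
import numpy as np

# Two-stage sketch with an invented 2D geometry: a uniform slab, one energy,
# one attenuation coefficient, and an isotropic phase function standing in
# for the physical scattering models of the real code.
rng = np.random.default_rng(0)

mu = 0.02                              # slab attenuation coefficient (1/mm)
half = 20.0                            # slab occupies |x|, |y| <= 20 mm
det_y = 100.0                          # detector line at y = 100 mm
pixels = np.linspace(-50.0, 50.0, 64)  # pixel centres (mm)
pix_w = pixels[1] - pixels[0]

# Stage 1 (Monte Carlo): sample scattering sites inside the slab.
sites = rng.uniform(-half, half, size=(2000, 2))

# Stage 2 (forced detection): every stored site contributes deterministically
# to every pixel, weighted by exit attenuation and the pixel's aperture.
image = np.zeros_like(pixels)
for sx, sy in sites:
    dx, dy = pixels - sx, det_y - sy
    dist = np.hypot(dx, dy)
    cos_t = dy / dist
    exit_len = np.clip((half - sy) * dist / dy, 0.0, 2.0 * half)  # path left in slab
    aperture = pix_w * cos_t / (2.0 * np.pi * dist)  # 2D analogue of a solid angle
    image += np.exp(-mu * exit_len) * aperture / len(sites)
```

    The deterministic second stage is what makes every sampled event contribute to every pixel at once, which is why far fewer Monte Carlo histories are needed than in a pure analogue simulation.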

  10. Surrogate model approach for improving the performance of reactive transport simulations (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris


    Reactive transport models serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for a reactive transport simulation to consist of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are a well-established technology and can be very efficient; when hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem involving 1D calcite transport, a published benchmark (Kolditz, 2012) for simulation models. We tried a number of statistical models available through the caret and DiceEval packages for R as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output using the part of the sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines
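    The train-and-validate workflow described above can be sketched with a stand-in for the geochemical solver; the function, sampling range and polynomial surrogate below are all invented for illustration (the study itself used MARS and other models from R's caret and DiceEval packages).

```python
import numpy as np

rng = np.random.default_rng(1)

def geochem_solver(c_in):
    """Invented stand-in for the costly geochemical step of the coupled model."""
    return np.tanh(3.0 * c_in) + 0.1 * c_in ** 2

# Sample the "simulator", then split into training and held-out validation sets.
c = rng.uniform(0.0, 1.0, 500)
y = geochem_solver(c)
c_train, y_train = c[:400], y[:400]
c_test, y_test = c[400:], y[400:]

# Fit a cheap surrogate (a degree-5 polynomial here, in place of MARS).
surrogate = np.poly1d(np.polyfit(c_train, y_train, deg=5))

# Validate on the held-out samples the surrogate has never seen.
rmse = np.sqrt(np.mean((surrogate(c_test) - y_test) ** 2))
```

    Once the held-out error is acceptable, the surrogate replaces the expensive solver inside the transport loop, which is where the speed-up for million-element grids comes from.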

  11. An Open Source-based Approach to the Development of Research Reactor Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Sung Moon; Suh, Yong Suk; Park, Cheol Park [KAERI, Daejeon (Korea, Republic of)


    In reactor design, operator training, safety analysis, or research using a reactor, it is essential to simulate time-dependent reactor behaviors such as neutron population, fluid flow, and heat transfer. Furthermore, in order to use the simulator to train and educate operators, a mockup of the reactor user interface is required. There are commercial software tools available for reactor simulator development. However, it is costly to use those commercial software tools. Especially for research reactors, it is difficult to justify the high cost, as regulations on research reactor simulators are not as strict as those for commercial Nuclear Power Plants (NPPs). An open source-based simulator for a research reactor is configured as a distributed control system based on the EPICS framework. To demonstrate the use of the simulation framework proposed in this work, we consider a toy example. This example approximates a 1-second impulse reactivity insertion in a reactor, which represents the instantaneous removal and reinsertion of a control rod. The change in reactivity results in a slightly delayed change in power and corresponding increases in temperatures throughout the system. We proposed an approach for developing a research reactor simulator using open source software tools, and showed preliminary results. The results demonstrate that the approach presented in this work can provide an economical and viable way of developing research reactor simulators.
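    The toy example described above can be sketched with one-group point kinetics; the kinetic constants below are illustrative, not the paper's values.

```python
# One-group point-kinetics sketch of the toy example: a 1-second impulse
# reactivity insertion (control rod out, then back in). All constants are
# illustrative, not the values used in the paper.
beta = 0.0065      # delayed-neutron fraction
lam = 0.08         # precursor decay constant (1/s)
gen = 1.0e-4       # neutron generation time (s)
rho0 = 0.003       # inserted reactivity (< beta, so no prompt criticality)
dt, t_end = 1.0e-5, 2.0

p = 1.0                        # relative power
c = beta * p / (lam * gen)     # precursor concentration at equilibrium
peak = p
for i in range(int(t_end / dt)):
    t = i * dt
    rho = rho0 if t < 1.0 else 0.0          # the 1-s impulse
    dp = ((rho - beta) / gen) * p + lam * c
    dc = (beta / gen) * p - lam * c
    p += dt * dp
    c += dt * dc
    peak = max(peak, p)
final = p
```

    The power exhibits the prompt jump during the insertion and, after the rod is reinserted, drops back to a level slightly above the initial one, sustained by the precursors accumulated during the transient, which is the "slightly delayed change in power" the abstract describes.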

  12. An Open Source-based Approach to the Development of Research Reactor Simulator

    International Nuclear Information System (INIS)

    Joo, Sung Moon; Suh, Yong Suk; Park, Cheol Park


    In reactor design, operator training, safety analysis, or research using a reactor, it is essential to simulate time-dependent reactor behaviors such as neutron population, fluid flow, and heat transfer. Furthermore, in order to use the simulator to train and educate operators, a mockup of the reactor user interface is required. There are commercial software tools available for reactor simulator development. However, it is costly to use those commercial software tools. Especially for research reactors, it is difficult to justify the high cost, as regulations on research reactor simulators are not as strict as those for commercial Nuclear Power Plants (NPPs). An open source-based simulator for a research reactor is configured as a distributed control system based on the EPICS framework. To demonstrate the use of the simulation framework proposed in this work, we consider a toy example. This example approximates a 1-second impulse reactivity insertion in a reactor, which represents the instantaneous removal and reinsertion of a control rod. The change in reactivity results in a slightly delayed change in power and corresponding increases in temperatures throughout the system. We proposed an approach for developing a research reactor simulator using open source software tools, and showed preliminary results. The results demonstrate that the approach presented in this work can provide an economical and viable way of developing research reactor simulators.

  13. Evaluation of spurious results in the infrared measurement of CO2 isotope ratios due to spectral effects: a computer simulation study

    International Nuclear Information System (INIS)

    Mansfield, C.D.; Rutt, H.N.


    The application of infrared spectroscopy to the measurement of carbon isotope ratio breath tests is a promising alternative to conventional techniques, offering relative simplicity and lower costs. However, when designing such an instrument one should be conscious of several spectral effects that may be misinterpreted as changes in the isotope concentration and which therefore lead to spurious results. Through a series of computer simulations which model the behaviour of the CO2 absorption spectrum, the risk these effects pose to reliable measurement of 13CO2/12CO2 ratios and the measures required to eliminate them are evaluated. The computer model provides a flexible high-resolution spectrum of the four main isotopomer fundamental transitions and fifteen of their most significant hotband transitions. It is demonstrated that the infrared source, infrared windows and breath sample itself all exhibit strong temperature-induced errors, but pressure effects do not produce significant errors. We conclude that for reliable measurement of 13CO2/12CO2 ratios using infrared spectroscopy no pressure controls are required, window effects are eliminated using windows wedged at a minimum angle of 0.8-2.2 mrad, depending on the material, and the temperature sensitivity of the source and gas cells necessitates stabilization to an accuracy of at least 0.2 K. (author)

  14. Simulation-based comparison of two approaches frequently used for dynamic contrast-enhanced MRI

    International Nuclear Information System (INIS)

    Zwick, Stefan; Brix, Gunnar; Tofts, Paul S.; Strecker, Ralph; Kopp-Schneider, Annette; Laue, Hendrik; Semmler, Wolfhard; Kiessling, Fabian


    The purpose was to compare two approaches for the acquisition and analysis of dynamic contrast-enhanced MRI data with respect to differences in the modelling of the arterial input function (AIF), the dependency of the model parameters on physiological parameters, and their numerical stability. Eight hundred tissue concentration curves were simulated for different combinations of perfusion, permeability, interstitial volume and plasma volume based on two measured AIFs and analysed according to the two commonly used approaches. The transfer constants Ktrans (Approach 1) and kep (Approach 2) were correlated with all tissue parameters: Ktrans showed a stronger dependency on perfusion, and kep on permeability. The volume parameters ve (Approach 1) and A (Approach 2) were mainly influenced by the interstitial and plasma volume. Both approaches allow only a rough characterisation of tissue microcirculation and microvasculature. Approach 2 seems to be somewhat more robust than Approach 1, mainly due to the different methods of CA administration. (orig.)
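    Tissue concentration curves of the kind simulated here are commonly generated with the (extended) Tofts model; a minimal sketch with an invented biexponential AIF and illustrative parameter values:

```python
import numpy as np

# Extended Tofts model: C_t(t) = v_p*C_p(t) + Ktrans * int C_p(u) exp(-kep*(t-u)) du.
# The AIF and the parameter values are invented for the sketch.
t = np.arange(0.0, 5.0, 1.0 / 60.0)                       # minutes, 1-s sampling
aif = 5.0 * (np.exp(-1.0 * t) + 0.3 * np.exp(-0.1 * t))   # toy biexponential AIF (mM)

ktrans, kep, vp = 0.25, 0.5, 0.03                         # 1/min, 1/min, fraction
dt = t[1] - t[0]
kernel = np.exp(-kep * t)
# Discrete convolution approximates the integral term of the model.
ct = vp * aif + ktrans * dt * np.convolve(aif, kernel)[: t.size]
```

    Sweeping ktrans, kep and vp over a grid of physiological values is how a set of several hundred synthetic curves like the study's can be generated before fitting them back with each analysis approach.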

  15. Mononuclear Pd(II) complex as a new therapeutic agent: Synthesis, characterization, biological activity, spectral and DNA binding approaches (United States)

    Saeidifar, Maryam; Mirzaei, Hamidreza; Ahmadi Nasab, Navid; Mansouri-Torshizi, Hassan


    The binding ability between a new water-soluble palladium(II) complex [Pd(bpy)(bez-dtc)]Cl (where bpy is 2,2′-bipyridine and bez-dtc is benzyl dithiocarbamate), as an antitumor agent, and calf thymus DNA was evaluated using various physicochemical methods, such as UV-Vis absorption, competitive fluorescence studies, viscosity measurement, zeta potential and circular dichroism (CD) spectroscopy. The Pd(II) complex was synthesized and characterized using elemental analysis, molar conductivity measurements, FT-IR, 1H NMR, 13C NMR and electronic spectra studies. Its anticancer activity against HeLa cell lines demonstrated lower cytotoxicity than cisplatin. The binding constants and the thermodynamic parameters were determined at different temperatures (300 K, 310 K and 320 K) and showed that the complex can bind to DNA via electrostatic forces. Furthermore, this result was confirmed by the viscosity and zeta potential measurements. The CD spectral results demonstrated that the binding of the Pd(II) complex to DNA induced conformational changes in DNA. We hope that these results will provide a basis for further studies and practical clinical use of anticancer drugs.

  16. Extraction and prediction of indices for monsoon intraseasonal oscillations: an approach based on nonlinear Laplacian spectral analysis (United States)

    Sabeerali, C. T.; Ajayamohan, R. S.; Giannakis, Dimitrios; Majda, Andrew J.


    An improved index for real-time monitoring and forecast verification of monsoon intraseasonal oscillations (MISOs) is introduced using the recently developed nonlinear Laplacian spectral analysis (NLSA) technique. Using NLSA, a hierarchy of Laplace-Beltrami (LB) eigenfunctions is extracted from unfiltered daily rainfall data from the Global Precipitation Climatology Project over the south Asian monsoon region. Two modes representing the full life cycle of the northeastward-propagating boreal summer MISO are identified from the hierarchy of LB eigenfunctions. These modes have a number of advantages over MISO modes extracted via extended empirical orthogonal function analysis, including higher memory and predictability; stronger amplitude and higher fractional explained variance over the western Pacific, Western Ghats, and adjoining Arabian Sea regions; and a more realistic representation of the regional heat sources over the Indian and Pacific Oceans. Real-time prediction of NLSA-derived MISO indices is demonstrated via extended-range hindcasts based on NCEP Coupled Forecast System version 2 operational output. It is shown that in these hindcasts the NLSA MISO indices remain predictable out to ~3 weeks.
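    The first step shared by NLSA and extended EOF analysis, time-lagged embedding followed by an eigen-decomposition, can be sketched on a synthetic noisy 45-day oscillation standing in for MISO rainfall (plain SVD is used below instead of NLSA's Laplace-Beltrami eigenfunctions):

```python
import numpy as np

# Time-lagged (Hankel) embedding of a noisy 45-day oscillation, followed by
# SVD. The leading pair of modes captures the oscillation in quadrature, the
# same role the two MISO modes play in the hierarchy of eigenfunctions.
rng = np.random.default_rng(2)
days = np.arange(1000)
signal = np.sin(2 * np.pi * days / 45.0) + 0.3 * rng.standard_normal(days.size)

q = 60                                    # embedding window (days)
hankel = np.stack([signal[i: i + days.size - q] for i in range(q)])
u, s, vt = np.linalg.svd(hankel, full_matrices=False)

# Fraction of variance explained by the leading quadrature pair.
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

    Because the oscillation occupies exactly two singular vectors while the noise spreads over all of them, the leading pair dominates the spectrum; NLSA replaces the linear SVD with eigenfunctions of a graph Laplacian on the embedded data, which is what improves the memory and predictability of the resulting indices.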

  17. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth


    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution: the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks

  18. An Approach for the Simulation of Ground and Honed Technical Surfaces for Training Classifiers

    Directory of Open Access Journals (Sweden)

    Sebastian Rief


    Full Text Available Training of neural networks requires large amounts of data. Simulated data sets can be helpful if the data required for the training is not available. However, the applicability of simulated data sets for training neural networks depends on the quality of the simulation model used. A simple and fast approach for the simulation of ground and honed surfaces with predefined properties is presented. The approach is used to generate a diverse data set, which is then applied to train a convolutional neural network for surface type recognition. The resulting classifier is validated on a series of real measurement data and a classification rate of >85% is achieved. A possible field of application of the presented procedure is the support of measurement technicians in the standard-compliant evaluation of measurement data by suggesting specific data processing steps, depending on the recognized type of manufacturing process.
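    One simple way to obtain a ground-like anisotropic texture, sketched here under the assumption that smearing white noise along a hypothetical grinding direction is an acceptable stand-in for the paper's surface model:

```python
import numpy as np

# Toy surface simulation: white noise smeared along the (hypothetical)
# grinding direction produces the groove-like anisotropic texture that a
# classifier can learn to distinguish from other machining marks.
rng = np.random.default_rng(3)
noise = rng.standard_normal((256, 256))

k = 25                                     # groove length scale in pixels
kernel = np.ones(k) / k
ground = np.apply_along_axis(lambda row: np.convolve(row, kernel, "same"), 1, noise)

# Anisotropy check: height differences along the grooves (axis 1) should be
# much smaller than across them (axis 0).
var_along = np.diff(ground, axis=1).var()
var_across = np.diff(ground, axis=0).var()
```

    Varying the kernel length and adding a cross-hatch of two smearing directions would mimic honing; labelled images generated this way form the kind of synthetic training set the paper uses.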

  19. Simulation-Based Approach to Operating Costs Analysis of Freight Trucking

    Directory of Open Access Journals (Sweden)

    Ozernova Natalja


    Full Text Available The article is devoted to the problem of cost uncertainty in road freight transportation services. The article introduces a statistical approach, based on Monte Carlo simulation on spreadsheets, to the analysis of operating costs. The developed model makes it possible to estimate freight trucking operating costs under different configurations of cost factors. After running simulations, important conclusions can be drawn regarding sensitivity to different factors, optimal decisions, and the variability of operating costs.
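    The spreadsheet-style Monte Carlo idea can be sketched directly; the cost factors, distributions and magnitudes below are invented for illustration:

```python
import numpy as np

# Monte Carlo operating-cost model: each uncertain cost factor is a random
# draw, and the per-km cost distribution is summarised by its mean and
# percentiles. All distributions and magnitudes are invented.
rng = np.random.default_rng(4)
n = 100_000

fuel_price = rng.normal(1.40, 0.15, n)           # EUR per litre
consumption = rng.normal(32.0, 3.0, n) / 100.0   # litres per km
maintenance = rng.uniform(0.05, 0.12, n)         # EUR per km
driver_wage = rng.normal(0.35, 0.04, n)          # EUR per km
tolls = rng.uniform(0.00, 0.10, n)               # EUR per km

cost_per_km = fuel_price * consumption + maintenance + driver_wage + tolls

mean = cost_per_km.mean()
p5, p95 = np.percentile(cost_per_km, [5, 95])
```

    Sensitivity follows by perturbing one factor's distribution at a time and re-running, exactly as one would with a recalculating spreadsheet.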

  20. A Simulation-Based Optimization Approach for Integrated Port Resource Allocation Problem

    Directory of Open Access Journals (Sweden)

    Gholamreza Ilati


    Full Text Available Today, due to the rapid increase in shipping volumes, container terminals are faced with the challenge of coping with increasing demand. To handle this challenge, it is crucial to use a flexible and efficient optimization approach in order to decrease operating costs. In this paper, a simulation-based optimization approach is proposed to construct a near-optimal berth allocation plan integrated with a plan for tug assignment and for resolution of the quay crane re-allocation problem. The research challenges involve dealing with uncertainty in the arrival times of vessels as well as tidal variations. The effectiveness of the proposed evolutionary algorithm is tested on RAJAEE Port as a real case. According to the simulation results, it can be concluded that the objective function value is affected significantly by arrival disruptions. The results also demonstrate the effectiveness of the proposed simulation-based optimization approach.

  1. A Simulation-and-Regression Approach for Stochastic Dynamic Programs with Endogenous State Variables

    DEFF Research Database (Denmark)

    Denault, Michel; Simonato, Jean-Guy; Stentoft, Lars


    We investigate the optimal control of a stochastic system in the presence of both exogenous (control-independent) stochastic state variables and endogenous (control-dependent) state variables. Our solution approach relies on simulations and regressions with respect to the state variables, but also grafts the endogenous state variable into the simulation paths. That is, unlike most other simulation approaches found in the literature, no discretization of the endogenous variable is required. The approach is meant to handle several stochastic variables, offers a high level of flexibility in their modeling, and should be at its best in non-time-homogeneous cases, when the optimal policy structure changes with time. We provide numerical results for a dam-based hydropower application, where the exogenous variable is the stochastic spot price of power, and the endogenous variable is the water level...
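    The simulation-and-regression idea is easiest to sketch in its best-known setting, a Bermudan put priced by Longstaff-Schwartz least-squares Monte Carlo; the paper's hydropower case additionally grafts an endogenous state (the water level) onto the paths:

```python
import numpy as np

# Longstaff-Schwartz least-squares Monte Carlo for a Bermudan put: simulate
# paths of the exogenous state, then regress continuation values backwards in
# time. Parameters are the usual textbook illustration, not the paper's.
rng = np.random.default_rng(5)
s0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 50, 20_000
dt = T / steps

z = rng.standard_normal((steps, paths))
s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=0))

cash = np.maximum(strike - s[-1], 0.0)          # payoff if held to maturity
for i in range(steps - 2, -1, -1):
    cash *= np.exp(-r * dt)                     # discount one step back
    itm = strike - s[i] > 0                     # regress on in-the-money paths only
    if itm.sum() > 10:
        coef = np.polyfit(s[i, itm], cash[itm], 2)   # quadratic regression basis
        continuation = np.polyval(coef, s[i, itm])
        exercise = strike - s[i, itm]
        cash[itm] = np.where(exercise > continuation, exercise, cash[itm])
price = np.exp(-r * dt) * cash.mean()
```

    In the paper's setting the regression is taken with respect to both the spot price and the grafted water level, so no grid over the endogenous variable is ever built.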

  2. Sustainable Strategies for Transportation Development in Emerging Cities in China: A Simulation Approach

    Directory of Open Access Journals (Sweden)

    Liyin Shen


    Full Text Available With the rapid development of emerging cities in China, policy makers are faced with the challenges involved in devising strategies for providing transportation systems to keep pace with development. These challenges are associated with the interactive effects among a number of sophisticated factors involved in transportation systems. This paper presents a system dynamics simulation approach to analyze and select transportation development strategies in order to achieve good sustainability performance once they are implemented. The simulation approach consists of three modules: a socio-economic module, a demand module, and a supply module. The approach is validated through applying empirical data collected from the Shenzhen statistical bulletins. Three types of transport development strategies are selected for the city and examined for their applicability and effects through simulation. The strategies are helpful for reducing decision-making mistakes and achieving the goal of sustainable urban development in most emerging cities.

  3. A dental public health approach based on computational mathematics: Monte Carlo simulation of childhood dental decay. (United States)

    Tennant, Marc; Kruger, Estie


    This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, known population-level outcome probabilities are seeded randomly to drive the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (SiC10), the runs did not differ from each other by more than 2%, and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely socio-economic status and Indigenous heritage. Testing the simulated population found a DMFT of 2.3 for all cases where DMFT > 0 (n = 128,609) and a DMFT of 1.9 for Indigenous cases only (n = 13,749). In the simulated population the SiC25 was 3.3 (n = 68,750). Monte Carlo simulations were created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated. © 2013 FDI World Dental Federation.
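    Seeding individual-level outcomes from population-level probabilities can be sketched as follows; the stratum shares and mean DMFT values are invented, not the study's:

```python
import numpy as np

# Each simulated child is assigned a stratum (e.g. by socio-economic status
# and Indigenous heritage) and draws an individual DMFT count from a Poisson
# whose mean depends on that stratum. Shares and means are hypothetical.
rng = np.random.default_rng(6)
n = 275_000                                    # 12-year-olds, as in the abstract

strata = rng.choice(3, size=n, p=[0.5, 0.45, 0.05])
means = np.array([0.6, 1.2, 2.0])              # hypothetical mean DMFT per stratum
dmft = rng.poisson(means[strata])

overall_mean = dmft.mean()
sic10 = np.sort(dmft)[-n // 10:].mean()        # mean DMFT of the worst-affected 10%
```

    Repeating the draw several times and comparing run-to-run variation in the mean DMFT and SiC-style indices mirrors the study's five-run stability check.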

  4. A multi-sensor lidar, multi-spectral and multi-angular approach for mapping canopy height in boreal forest regions (United States)

    Selkowitz, David J.; Green, Gordon; Peterson, Birgit E.; Wylie, Bruce


    Spatially explicit representations of vegetation canopy height over large regions are necessary for a wide variety of inventory, monitoring, and modeling activities. Although airborne lidar data has been successfully used to develop vegetation canopy height maps in many regions, for vast, sparsely populated regions such as the boreal forest biome, airborne lidar is not widely available. An alternative approach to canopy height mapping in areas where airborne lidar data is limited is to use spaceborne lidar measurements in combination with multi-angular and multi-spectral remote sensing data to produce comprehensive canopy height maps for the entire region. This study uses spaceborne lidar data from the Geosciences Laser Altimeter System (GLAS) as training data for regression tree models that incorporate multi-angular and multi-spectral data from the Multi-Angle Imaging Spectroradiometer (MISR) and the Moderate Resolution Imaging SpectroRadiometer (MODIS) to map vegetation canopy height across a 1,300,000 km2 swath of boreal forest in Interior Alaska. Results are compared to in situ height measurements as well as airborne lidar data. Although many of the GLAS-derived canopy height estimates are inaccurate, applying a series of filters incorporating both data associated with the GLAS shots as well as ancillary data such as land cover can identify the majority of height estimates with significant errors, resulting in a filtered dataset with much higher accuracy. Results from the regression tree models indicate that late winter MISR imagery acquired under snow-covered conditions is effective for mapping canopy heights ranging from 5 to 15 m, which includes the vast majority of forests in the region. It appears that neither MISR nor MODIS imagery acquired during the growing season is effective for canopy height mapping, although including summer multi-spectral MODIS data along with winter MISR imagery does appear to provide a slight increase in the accuracy of

  5. Microscopic approach of the spectral property of 1+ and high-spin states in 124Te nucleus

    International Nuclear Information System (INIS)

    Shi Zhuyi; Ni Shaoyong; Tong Hong; Zhao Xingzhi


    Using a microscopic sdIBM-2 plus two-quasiparticle (sdIBM-2+2q.p.) approach, the spectra of the low-spin and partial high-spin states in the 124Te nucleus are calculated with reasonable success. In particular, the 1_1^+, 1_2^+, 3_1^+, 3_2^+ and 5_1^+ states are successfully reproduced, and the energy relationship resulting from this approach identifies the 6_1^+, 8_1^+ and 10_1^+ states as aligned states of two protons. This can explain the recent experimental results that collective structures may coexist with single-particle states. The approach thus becomes a powerful tool for describing the spectra of general nuclei without clear symmetry and of isotopes located in transitional regions. Finally, the aligned-state structure and the broken-pair energy of the two-quasiparticle states are discussed.

  6. Exploring a New Simulation Approach to Improve Clinical Reasoning Teaching and Assessment: Randomized Trial Protocol. (United States)

    Pennaforte, Thomas; Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude


    Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education for assessing clinical reasoning skills has rarely been reported. More specifically, it is unclear if clinical reasoning is better acquired when the instructor's input occurs entirely after the scenario or is integrated during it. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions, without providing feedback. The aim of this study is to evaluate the effectiveness of simulation with iterative discussions versus the classical approach to simulation in developing the reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. This will be a prospective exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, QC, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one SID or classical simulation session: a 30-minute, audio/video-recorded, complex high-fidelity simulation covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. This study is in its preliminary stages and the results are expected to be made available by April, 2016. This will be the first study to explore a new

  7. Investigation of spectral distribution and variation of irradiance with the passage time of CSI lamps which constitute a solar simulator; Solar simulator ni shiyosuru CSI lamp no supekutoru bunpu, hosha shodo no keiji henka ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    Sugiyama, T.; Yamada, T.; Noguchi, T. [Japan Quality Assurance Organization, Tokyo (Japan)


    A study was made of the time variation of the performance of CSI lamps for solar simulators. In order to accurately evaluate the standard heat collection performance of solar systems indoors, MITI installed an artificial solar light source in the Solar Techno-Center of the Japan Quality Assurance Organization for trial use and evaluation. The CSI lamp is highly durable and can simulate daytime sunlight. The light source is composed of 72 metal halide lamps of 1 kW arranged in a plane of 3.5 m × 3.5 m. The study of the time variation of the spectral distribution and irradiance under intermittent switching of the lamps showed a sufficient durability of 2000 h. To ensure sufficient accuracy of the solar heat collector measurement system, periodic calibration is carried out using reference standards. To ensure the reliability and stability of the switching system, periodic maintenance of the power source, stabilizer and electrical system is also carried out in addition to that of the CSI lamps. Stable irradiance and accuracy are maintained by such maintenance and periodic replacement of lamps. 6 figs., 4 tabs.

  8. Approaches to the simulation of unconfined flow and perched groundwater flow in MODFLOW (United States)

    Bedekar, Vivek; Niswonger, Richard G.; Kipp, Kenneth; Panday, Sorab; Tonkin, Matthew


    Various approaches have been proposed to manage the nonlinearities associated with the unconfined flow equation and to simulate perched groundwater conditions using the MODFLOW family of codes. The approaches comprise a variety of numerical techniques to prevent dry cells from becoming inactive and to achieve a stable solution focused on formulations of the unconfined, partially-saturated, groundwater flow equation. Keeping dry cells active avoids a discontinuous head solution which in turn improves the effectiveness of parameter estimation software that relies on continuous derivatives. Most approaches implement an upstream weighting of intercell conductance and Newton-Raphson linearization to obtain robust convergence. In this study, several published approaches were implemented in a stepwise manner into MODFLOW for comparative analysis. First, a comparative analysis of the methods is presented using synthetic examples that create convergence issues or difficulty in handling perched conditions with the more common dry-cell simulation capabilities of MODFLOW. Next, a field-scale three-dimensional simulation is presented to examine the stability and performance of the discussed approaches in larger, practical, simulation settings.
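    The core nonlinearity, transmissivity depending on the unknown head, can be sketched for steady 1D unconfined (Dupuit) flow with upstream-weighted intercell transmissivity and Picard iteration, then checked against the analytic solution; all values are illustrative:

```python
import numpy as np

# Steady 1D unconfined flow: transmissivity K*h depends on the unknown head h,
# so the linear system is re-assembled each Picard iteration. Intercell
# transmissivity takes the upstream (larger-head) value, as in upstream
# weighting. Checked against h(x) = sqrt(h0^2 + (hL^2 - h0^2) * x / L).
n, L, K = 101, 1000.0, 1e-4
x = np.linspace(0.0, L, n)
h0, hL = 20.0, 10.0
h = np.linspace(h0, hL, n)                       # initial guess

for _ in range(200):                             # Picard loop
    t_cell = K * h
    t_face = np.maximum(t_cell[:-1], t_cell[1:])  # upstream-weighted faces
    a, b = np.zeros((n, n)), np.zeros(n)
    a[0, 0] = a[-1, -1] = 1.0
    b[0], b[-1] = h0, hL                         # fixed-head boundaries
    for i in range(1, n - 1):
        a[i, i - 1], a[i, i + 1] = t_face[i - 1], t_face[i]
        a[i, i] = -(t_face[i - 1] + t_face[i])
    h_new = np.linalg.solve(a, b)
    if np.max(np.abs(h_new - h)) < 1e-8:
        h = h_new
        break
    h = h_new

exact = np.sqrt(h0**2 + (hL**2 - h0**2) * x / L)
err = np.max(np.abs(h - exact))
```

    The upstream weighting introduces a small first-order bias relative to the analytic Dupuit profile, but it keeps cells active and the solution continuous as heads approach the cell bottom, which is the property the compared MODFLOW formulations are after.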

  9. Simulation-Based Approach for Studying the Balancing of Local Smart Grids with Electric Vehicle Batteries

    Directory of Open Access Journals (Sweden)

    Juhani Latvakoski


    Full Text Available Modern society is facing great challenges due to pollution and increased carbon dioxide (CO2 emissions. As part of solving these challenges, the use of renewable energy sources and electric vehicles (EVs is rapidly increasing. However, increased dynamics have triggered problems in balancing energy supply and consumption demand in the power systems. The resulting uncertainty and unpredictability of energy production, consumption, and management of peak loads has caused an increase in costs for energy market actors. Therefore, the means for studying the balancing of local smart grids with EVs is a starting point for this paper. The main contribution is a simulation-based approach which was developed to enable the study of the balancing of local distribution grids with EV batteries in a cost-efficient manner. The simulation-based approach is applied to enable the execution of a distributed system with the simulation of a local distribution grid, including a number of charging stations and EVs. A simulation system has been constructed to support the simulation-based approach. The evaluation has been carried out by executing the scenario related to balancing local distribution grids with EV batteries in a step-by-step manner. The evaluation results indicate that the simulation-based approach is able to facilitate the evaluation of smart grid– and EV-related communication protocols, control algorithms for charging, and functionalities of local distribution grids as part of a complex, critical cyber-physical system. In addition, the simulation system is able to incorporate advanced methods for monitoring, controlling, tracking, and modeling behavior. The simulation model of the local distribution grid can be executed with the smart control of charging and discharging powers of the EVs according to the load situation in the local distribution grid. The resulting simulation system can be applied to the study of balancing local smart grids with EV

  10. Improving simulation of soil moisture in China using a multiple meteorological forcing ensemble approach

    Directory of Open Access Journals (Sweden)

    J.-G. Liu


    Full Text Available The quality of soil-moisture simulation using land surface models depends largely on the accuracy of the meteorological forcing data. We investigated how to reduce the uncertainty arising from meteorological forcings in a simulation by adopting a multiple meteorological forcing ensemble approach. Simulations by the Community Land Model version 3.5 (CLM3.5 over mainland China were conducted using four different meteorological forcings, and the four sets of soil-moisture data related to the simulations were then merged using simple arithmetical averaging and Bayesian model averaging (BMA ensemble approaches. BMA is a statistical post-processing procedure for producing calibrated and sharp predictive probability density functions (PDFs, which is a weighted average of PDFs centered on the bias-corrected forecasts from a set of individual ensemble members based on their probabilistic likelihood measures. Compared to in situ observations, the four simulations captured the spatial and seasonal variations of soil moisture in most cases with some mean bias. They performed differently when simulating the seasonal phases in the annual cycle, the interannual variation and the magnitude of observed soil moisture over different subregions of mainland China, but no individual meteorological forcing performed best for all subregions. The simple arithmetical average ensemble product outperformed most, but not all, individual members over most of the subregions. The BMA ensemble product performed better than simple arithmetical averaging, and performed best for all fields over most of the subregions. The BMA ensemble approach applied to the ensemble simulation reproduced anomalies and seasonal variations in observed soil-moisture values, and simulated the mean soil moisture. It is presented here as a promising way for reproducing long-term, high-resolution spatial and temporal soil-moisture data.
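
    The Bayesian model averaging step described above can be illustrated with a minimal sketch: each bias-corrected ensemble member is weighted by its Gaussian likelihood against observations. This is a simplification of the EM-based calibration normally used for BMA, and all data below are synthetic:

    ```python
    import numpy as np

    # Toy BMA for an ensemble of soil-moisture simulations: four "members"
    # with invented biases and noise levels, merged against "observations".
    rng = np.random.default_rng(0)
    obs = rng.uniform(0.1, 0.4, size=50)             # synthetic observations
    members = [obs + rng.normal(b, s, size=50)        # biased/noisy simulations
               for b, s in [(0.02, 0.02), (-0.05, 0.05),
                            (0.01, 0.08), (0.08, 0.03)]]

    def bma_weights(members, obs):
        # bias-correct each member, then weight by its Gaussian log-likelihood
        corrected = [m - np.mean(m - obs) for m in members]
        sig = [np.std(m - obs) for m in corrected]
        loglik = np.array([np.sum(-0.5 * ((obs - m) / s) ** 2 - np.log(s))
                           for m, s in zip(corrected, sig)])
        w = np.exp(loglik - loglik.max())             # stable normalization
        return corrected, w / w.sum()

    corrected, w = bma_weights(members, obs)
    bma_mean = sum(wi * m for wi, m in zip(w, corrected))
    simple_mean = np.mean(members, axis=0)
    print(np.mean((bma_mean - obs) ** 2), np.mean((simple_mean - obs) ** 2))
    ```

    As in the record, the likelihood-weighted merge outperforms the simple arithmetical average because poor members receive small weights.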

  11. Learning nursing through simulation: A case study approach towards an expansive model of learning. (United States)

    Berragan, Liz


    This study explores the impact of simulation upon learning for undergraduate nursing students. The study objectives were (a) to explore the experiences of participating in simulation education for a small group of student nurses; and (b) to explore learning through simulation from the perspectives of the nursing students, the nurse educators and the nurse mentors. Conducted as a small-scale narrative case study, it tells the unique stories of a small number of undergraduate nursing students, nurse mentors and nurse educators and explores their experiences of learning through simulation. Data analysis through progressive focusing revealed that the nurse educators viewed simulation as a means of helping students to learn to be nurses, whilst the nurse mentors suggested that simulation helped them to determine nursing potential. The students' narratives showed that they approached simulation learning in different ways, resulting in a range of outcomes: those who were successfully becoming nurses, those who were struggling or working hard to become nurses, and those who were not becoming nurses. Theories of professional practice learning and activity theory present an opportunity to articulate and theorise the learning inherent in simulation activities. They recognise the links between learning and the environment of work and highlight the possibilities for learning to inspire change and innovation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Improving discrimination of savanna tree species through a multiple endmember spectral-angle-mapper (SAM) approach: canopy level analysis

    CSIR Research Space (South Africa)

    Cho, Moses A


    Full Text Available showed the lowest performance for discriminating rainforest species compared to linear discriminant analysis and maximum likelihood classifiers [10]. Our research hypothesis therefore centres on the fact that a multiple-endmember approach, involving many...
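
    The spectral-angle-mapper (SAM) measure underlying the multiple-endmember approach in the title can be sketched directly: with several endmembers per class, a pixel is assigned to the class of its closest endmember. The spectra and class names below are invented for illustration:

    ```python
    import numpy as np

    # SAM: the angle between a pixel spectrum and a reference (endmember)
    # spectrum; scale-invariant, so it is insensitive to illumination changes.
    def spectral_angle(pixel, ref):
        cos = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def classify(pixel, endmembers_by_class):
        # multiple-endmember SAM: minimum angle over each class's endmembers
        angles = {c: min(spectral_angle(pixel, e) for e in ems)
                  for c, ems in endmembers_by_class.items()}
        return min(angles, key=angles.get)

    # invented canopy reflectance spectra (4 bands) for two species
    endmembers = {
        "species_A": [np.array([0.10, 0.30, 0.50, 0.40]),
                      np.array([0.12, 0.28, 0.55, 0.42])],
        "species_B": [np.array([0.30, 0.20, 0.20, 0.60])],
    }
    pixel = np.array([0.11, 0.29, 0.52, 0.41])
    print(classify(pixel, endmembers))  # → species_A
    ```

    Using several endmembers per class captures within-species spectral variability, which is the motivation for the multiple-endmember extension.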

  13. Towards oscillations-based simulation of social systems: a neurodynamic approach (United States)

    Plikynas, Darius; Basinskas, Gytis; Laukaitis, Algirdas


    This multidisciplinary work presents a synopsis of theories in the search for common field-like fundamental principles of self-organisation and communication existing at the quantum, cellular, and even social levels. Based on these fundamental principles, we formulate a conceptually novel social neuroscience paradigm (OSIMAS), which envisages social systems emerging from the coherent neurodynamical processes taking place in the individual mind-fields. In this way, societies are understood as global processes emerging from the superposition of the conscious and subconscious mind-fields of individual members of society. For the experimental validation of the biologically inspired OSIMAS paradigm, we have designed a framework of EEG-based experiments. Initial baseline individual tests of spectral cross-correlations of EEG-recorded brainwave patterns for some mental states are provided in this paper. Preliminary experimental results do not refute the main OSIMAS postulates. This paper also provides some insights for the construction of OSIMAS-based simulation models.

  14. An Interval-Valued Approach to Business Process Simulation Based on Genetic Algorithms and the BPMN

    Directory of Open Access Journals (Sweden)

    Mario G.C.A. Cimino


    Full Text Available Simulating organizational processes characterized by interacting human activities, resources, business rules and constraints, is a challenging task, because of the inherent uncertainty, inaccuracy, variability and dynamicity. With regard to this problem, currently available business process simulation (BPS methods and tools are unable to efficiently capture the process behavior along its lifecycle. In this paper, a novel approach of BPS is presented. To build and manage simulation models according to the proposed approach, a simulation system is designed, developed and tested on pilot scenarios, as well as on real-world processes. The proposed approach exploits interval-valued data to represent model parameters, in place of conventional single-valued or probability-valued parameters. Indeed, an interval-valued parameter is comprehensive; it is the easiest to understand and express and the simplest to process, among multi-valued representations. In order to compute the interval-valued output of the system, a genetic algorithm is used. The resulting process model allows forming mappings at different levels of detail and, therefore, at different model resolutions. The system has been developed as an extension of a publicly available simulation engine, based on the Business Process Model and Notation (BPMN standard.
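
    The idea of propagating interval-valued parameters through a process model with a genetic algorithm can be sketched as follows. The process function, intervals, and GA settings are all invented, and a production BPS engine would evaluate a full BPMN model rather than a one-line formula:

    ```python
    import random

    # Each model parameter is an interval; a simple GA searches for parameter
    # combinations that push a process metric (here, a toy cycle time) to its
    # extremes, yielding an interval-valued output.
    random.seed(1)
    intervals = [(2.0, 5.0), (1.0, 3.0), (0.5, 2.0)]   # task durations (min)

    def cycle_time(x):
        # toy process: two tasks in sequence, the third partly overlapped
        return x[0] + x[1] + 0.5 * x[2]

    def ga_extremum(f, intervals, maximize, gens=60, pop=30):
        def rand_ind():
            return [random.uniform(lo, hi) for lo, hi in intervals]
        population = [rand_ind() for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=f, reverse=maximize)
            elite = population[: pop // 2]             # keep best half
            children = []
            while len(children) < pop - len(elite):
                a, b = random.sample(elite, 2)
                child = [random.choice(g) for g in zip(a, b)]   # crossover
                j = random.randrange(len(child))                # mutation
                lo, hi = intervals[j]
                child[j] = min(hi, max(lo, child[j] + random.gauss(0, 0.2)))
                children.append(child)
            population = elite + children
        best = max(population, key=f) if maximize else min(population, key=f)
        return f(best)

    lo = ga_extremum(cycle_time, intervals, maximize=False)
    hi = ga_extremum(cycle_time, intervals, maximize=True)
    print((round(lo, 2), round(hi, 2)))  # interval-valued output, near (3.25, 9.0)
    ```

    The two GA runs bracket the output, which is what makes the interval-valued representation cheap to communicate compared with a full probability distribution.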

  15. Spectral clustering for water body spectral types analysis (United States)

    Huang, Leping; Li, Shijin; Wang, Lingli; Chen, Deqing


    In order to study the spectral types of water bodies across the country, the key issue in reservoir research is to obtain and analyze information on the water bodies in reservoirs quantitatively and accurately. A new type of weight matrix is constructed by comprehensively utilizing the spectral and spatial features of the spectra from GF-1 remote sensing images. An improved spectral clustering algorithm based on this weight matrix is then proposed to cluster representative reservoirs in China. According to an internal clustering validity measure, the Davies-Bouldin (DB) index, the best number of clusters is found to be 7. Compared with two other clustering algorithms, a spectral clustering algorithm based only on spectral features and a K-means algorithm based on spectral and spatial features, simulation results demonstrate that the proposed spectral clustering algorithm based on both spectral and spatial features has higher clustering accuracy and better reflects the spatial clustering characteristics of representative reservoirs in various provinces of China: similar spectral properties and adjacent geographical locations.
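
    A minimal sketch of spectral clustering with a weight matrix combining spectral and spatial similarity follows. The toy data (two groups of "reservoirs" with distinct spectra and locations) are invented, and the record's actual weight-matrix construction from GF-1 imagery is not reproduced:

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(2)
    spectra = np.vstack([rng.normal(0.2, 0.02, (10, 4)),   # group 1 reflectances
                         rng.normal(0.6, 0.02, (10, 4))])  # group 2 reflectances
    coords = np.vstack([rng.normal(0, 1, (10, 2)),         # group 1 locations
                        rng.normal(5, 1, (10, 2))])        # group 2 locations

    def combined_affinity(spectra, coords, s_spec=0.1, s_spat=2.0):
        # product of Gaussian kernels on spectral and spatial distances
        d_spec = np.linalg.norm(spectra[:, None] - spectra[None, :], axis=-1)
        d_spat = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        return (np.exp(-d_spec**2 / (2 * s_spec**2))
                * np.exp(-d_spat**2 / (2 * s_spat**2)))

    def spectral_cluster(W, k):
        d = W.sum(axis=1)
        L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian
        vals, vecs = np.linalg.eigh(L)
        U = vecs[:, :k]                                    # k smallest eigenvectors
        U = U / np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize
        _, labels = kmeans2(U, k, minit="++", seed=3)      # cluster the embedding
        return labels

    labels = spectral_cluster(combined_affinity(spectra, coords), k=2)
    print(labels)
    ```

    Multiplying the two kernels means reservoirs cluster together only when they are both spectrally similar and geographically adjacent, mirroring the result reported in the record.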

  16. Toward a Common Structure in Demographic Educational Modeling and Simulation: A Complex Systems Approach (United States)

    Guevara, Porfirio


    This article identifies elements and connections that seem to be relevant to explain persistent aggregate behavioral patterns in educational systems when using complex dynamical systems modeling and simulation approaches. Several studies have shown what factors are at play in educational fields, but confusion still remains about the underlying…

  17. Teaching Business Process Management with Simulation in Graduate Business Programs: An Integrative Approach (United States)

    Saraswat, Satya Prakash; Anderson, Dennis M.; Chircu, Alina M.


    This paper describes the development and evaluation of a graduate level Business Process Management (BPM) course with process modeling and simulation as its integral component, being offered at an accredited business university in the Northeastern U.S. Our approach is similar to that found in other Information Systems (IS) education papers, and…

  18. A new approach in the numerical simulation for the blood flow in large vessels

    Directory of Open Access Journals (Sweden)

    Balazs ALBERT


    Full Text Available In this paper we propose a new approach to the numerical simulation of blood flow in large vessels. The initial conditions are set to be compatible with the non-Newtonian model used. Numerical experiments in a stenosed artery and in an artery with an aneurysm (using COMSOL 3.3) are presented.

  19. Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach. (United States)

    Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Jonathan Dushoff; Liu, James H


    Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, could often lead to unrealistic models. Among others, model approaches based on differential equations entangle kinetics and causality, failing when complexity increases, separating knowledge from models, and assuming that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS: a rule-based framework to produce multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales, ranging from gene regulation up to game theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli, highlighting the continuous model improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model for the prisoner's dilemma; a common approach from the social sciences describing complex interactions involving trust within human populations. As a whole, these models demonstrate the capabilities of PISKaS, providing fertile scenarios in which to explore the dynamics of complex systems. Copyright © 2017. Published by Elsevier Inc.
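
    The Stochastic Simulation Algorithm underlying rule-based engines such as PISKaS can be sketched with Gillespie's direct method on a toy gene-expression network (transcription, translation, and degradation). The rate constants are invented and this is not the paper's E. coli model:

    ```python
    import random

    # Gillespie SSA: draw the time to the next reaction from an exponential
    # with rate equal to the total propensity, then pick which reaction fires
    # with probability proportional to its propensity.
    random.seed(4)

    def gillespie(t_end):
        t, mrna, prot = 0.0, 0, 0
        k_tx, k_tl, d_m, d_p = 0.5, 2.0, 0.1, 0.05   # invented rate constants
        while True:
            rates = [k_tx, k_tl * mrna, d_m * mrna, d_p * prot]
            total = sum(rates)
            dt = random.expovariate(total)            # time to next event
            if t + dt > t_end:
                break
            t += dt
            r = random.uniform(0, total)              # choose the reaction
            if r < rates[0]:
                mrna += 1                             # transcription
            elif r < rates[0] + rates[1]:
                prot += 1                             # translation
            elif r < rates[0] + rates[1] + rates[2]:
                mrna -= 1                             # mRNA degradation
            else:
                prot -= 1                             # protein degradation
        return mrna, prot

    mrna, prot = gillespie(500.0)
    print(mrna, prot)   # fluctuates around the deterministic means (5, 200)
    ```

    Unlike an ODE model, each run is an individual stochastic trajectory, which is exactly the per-individual variability the record argues differential equations average away.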

  20. BlueSky ATC Simulator Project : An Open Data and Open Source Approach

    NARCIS (Netherlands)

    Hoekstra, J.M.; Ellerbroek, J.


    To advance ATM research as a science, ATM research results should be made more comparable. A possible way to do this is to share tools and data. This paper presents a project that investigates the feasibility of a fully open-source and open-data approach to air traffic simulation. Here, the first of

  1. Estimating a planetary magnetic field with time-dependent global MHD simulations using an adjoint approach

    Directory of Open Access Journals (Sweden)

    C. Nabert


    Full Text Available The interaction of the solar wind with a planetary magnetic field causes electrical currents that modify the magnetic field distribution around the planet. We present an approach to estimating the planetary magnetic field from in situ spacecraft data using a magnetohydrodynamic (MHD simulation approach. The method is developed with respect to the upcoming BepiColombo mission to planet Mercury aimed at determining the planet's magnetic field and its interior electrical conductivity distribution. In contrast to the widely used empirical models, global MHD simulations allow the calculation of the strongly time-dependent interaction process of the solar wind with the planet. As a first approach, we use a simple MHD simulation code that includes time-dependent solar wind and magnetic field parameters. The planetary parameters are estimated by minimizing the misfit of spacecraft data and simulation results with a gradient-based optimization. As the calculation of gradients with respect to many parameters is usually very time-consuming, we investigate the application of an adjoint MHD model. This adjoint MHD model is generated by an automatic differentiation tool to compute the gradients efficiently. The computational cost for determining the gradient with an adjoint approach is nearly independent of the number of parameters. Our method is validated by application to THEMIS (Time History of Events and Macroscale Interactions during Substorms magnetosheath data to estimate Earth's dipole moment.
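
    The key property exploited above, that an adjoint gradient costs roughly one extra model evaluation regardless of the number of parameters, can be illustrated with a linear stand-in for the forward model. G and d are invented; a real MHD code would require the automatically generated adjoint described in the record:

    ```python
    import numpy as np

    # For a linear forward model y = G p and misfit J = ||y - d||^2, the
    # adjoint formula grad J = 2 G^T (y - d) needs one forward and one adjoint
    # apply however many parameters p has, while finite differences need
    # extra forward runs per parameter.
    rng = np.random.default_rng(7)
    G = rng.normal(size=(50, 8))     # stand-in forward operator: 8 params -> 50 data
    d = rng.normal(size=50)          # stand-in spacecraft data
    p = rng.normal(size=8)

    def misfit(p):
        r = G @ p - d
        return r @ r

    grad_adjoint = 2 * G.T @ (G @ p - d)     # one forward + one adjoint apply

    eps = 1e-6                               # 2n forward runs, for comparison
    grad_fd = np.array([(misfit(p + eps * e) - misfit(p - eps * e)) / (2 * eps)
                        for e in np.eye(len(p))])
    print(np.max(np.abs(grad_adjoint - grad_fd)))  # the two gradients agree
    ```

    With a nonlinear MHD model the adjoint must be derived (or generated by automatic differentiation, as in the record), but the cost argument is the same.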

  2. Analysis of opioid consumption in clinical trials: a simulation based analysis of power of four approaches

    DEFF Research Database (Denmark)

    Juul, Rasmus Vestergaard; Nyberg, Joakim; Kreilgaard, Mads


    Inconsistent trial design and analysis is a key reason that few advances in postoperative pain management have been made from clinical trials analyzing opioid consumption data. This study aimed to compare four different approaches to analyze opioid consumption data. A repeated time-to-event (RTTE......) model in NONMEM was used to simulate clinical trials of morphine consumption with and without a hypothetical adjuvant analgesic in doses equivalent to 15-62% reduction in morphine consumption. Trials were simulated with duration of 24-96 h. Monte Carlo simulation and re-estimation were performed...... of potency was obtained with a RTTE model accounting for both morphine effects and time-varying covariates on opioid consumption. An RTTE analysis approach proved better suited for demonstrating efficacy of opioid sparing analgesics than traditional statistical tests as a lower sample size was required due...

  3. Large eddy simulation of atmospheric boundary layer over wind farms using a prescribed boundary layer approach

    DEFF Research Database (Denmark)

    Chivaee, Hamid Sarlak; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming


    Large eddy simulation (LES) of flow in a wind farm is studied in neutral as well as thermally stratified atmospheric boundary layer (ABL). An approach has been practiced to simulate the flow in a fully developed wind farm boundary layer. The approach is based on the Immersed Boundary Method (IBM......) and involves implementation of an arbitrary prescribed initial boundary layer (See [1]). A prescribed initial boundary layer profile is enforced through the computational domain using body forces to maintain a desired flow field. The body forces are then stored and applied on the domain through the simulation...... and the boundary layer shape will be modified due to the interaction of the turbine wakes and buoyancy contributions. The implemented method is capable of capturing the most important features of wakes of wind farms [1] while having the advantage of resolving the wall layer with a coarser grid than typically...

  4. A new approach to modeling of selected human respiratory system diseases, directed to computer simulations. (United States)

    Redlarski, Grzegorz; Jaworski, Jacek


    This paper presents a new versatile approach to model severe human respiratory diseases via computer simulation. The proposed approach enables one to predict the time histories of various diseases via information accessible in medical publications. This knowledge is useful to bioengineers involved in the design and construction of medical devices that are employed for monitoring of respiratory condition. The approach provides the data that are crucial for testing diagnostic systems. This can be achieved without the necessity of probing the physiological details of the respiratory system as well as without identification of parameters that are based on measurement data. © 2013 Elsevier Ltd. All rights reserved.

  5. A Participatory Design Approach to Develop an Interactive Sound Environment Simulator. (United States)

    Hanssen, Geir K; Dahl, Yngve


    Our purpose is to provide insight into the added value of applying a participatory design approach in the design of an interactive sound environment simulator to facilitate communication and understanding between patients and audiologists in consultation situations. We have applied a qualitative approach, presenting results and discussion in the form of a story, following 3 consecutive steps: problem investigation, design, and evaluation. We provide an overview of lessons learned, emphasizing how patients and audiologists took roles and responsibilities in the design process and the effects of this involvement. Our results suggest that participatory design is a viable and practical approach to address multifaceted problems directly affecting patients and practitioners.

  6. Spectral Pollution


    Davies, E B; Plum, M


    We discuss the problems arising when computing eigenvalues of self-adjoint operators which lie in a gap between two parts of the essential spectrum. Spectral pollution, i.e. the apparent existence of eigenvalues in numerical computations, when no such eigenvalues actually exist, is commonplace in problems arising in applied mathematics. We describe a geometrically inspired method which avoids this difficulty, and show that it yields the same results as an algorithm of Zimmermann and Mertins.

  7. A Hybrid Approach to Simulate X-Ray Imaging Techniques, Combining Monte Carlo and Deterministic Algorithms (United States)

    Freud, N.; Letang, J.-M.; Babot, D.


    In this paper, we propose a hybrid approach to simulate multiple scattering of photons in objects under X-ray inspection, without recourse to parallel computing and without any approximation sacrificing accuracy. Photon scattering is considered from two points of view: it contributes to X-ray imaging and to the dose absorbed by the patient. The proposed hybrid approach consists of a Monte Carlo stage followed by a deterministic phase, thus taking advantage of the complementarity between these two methods. In the first stage, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Then this set of scattering events is used to compute the energy imparted to the detector, with a deterministic algorithm based on a "forced detection" scheme. Regarding dose evaluation, we propose to assess separately the energy deposited by direct radiation (using a deterministic algorithm) and by scattered radiation (using our hybrid approach). The results obtained in a test case are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the required detector resolution and statistics). It is possible to simulate radiographic images virtually free from photon noise. In the case of dose evaluation, the hybrid approach appears particularly suitable to calculate the dose absorbed by regions of interest (rather than the entire irradiated organ) with computation time and statistical fluctuations considerably reduced in comparison with conventional Monte Carlo simulation.

  8. Simulation in Quality Management – An Approach to Improve Inspection Planning

    Directory of Open Access Journals (Sweden)

    H.-A. Crostack


    Full Text Available Production is a multi-step process involving many different articles produced in different jobs by various machining stations. Quality inspection has to be integrated into the production sequence in order to ensure the conformance of the products. The interactions between manufacturing processes and inspections are very complex, since three aspects (quality, cost, and time) should all be considered at the same time while determining the suitable inspection strategy. Therefore, a simulation approach was introduced to solve this problem. The simulator called QUINTE [the QUINTE simulator has been developed at the University of Dortmund in the course of two research projects funded by the German Federal Ministry of Economics and Labour (BMWA: Bundesministerium für Wirtschaft und Arbeit), the Arbeitsgemeinschaft industrieller Forschungsvereinigungen (AiF, Cologne/Germany) and the Forschungsgemeinschaft Qualität (Frankfurt a.M./Germany)] was developed to simulate the machining as well as the inspection. It can be used to investigate and evaluate inspection strategies in manufacturing processes. The investigation into the application of the QUINTE simulator in industry was carried out at two pilot companies. The results show the validity of this simulator. An attempt to run QUINTE in a user-friendly environment, i.e., the commercial simulation software Arena®, is also described in this paper. NOTATION: QUINTE: Qualität in der Teilefertigung (Quality in the manufacturing process)

  9. Colonoscopy procedure simulation: virtual reality training based on a real time computational approach. (United States)

    Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang


    Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic, real-time interactive simulator for training in the colonoscopy procedure is presented, which can also include polypectomy simulation. Our approach models the colonoscope as a thick flexible elastic rod with different resolutions that adapt dynamically to the curvature of the colon. Further material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. In addition, we propose a set of key aspects of our simulator that give fast, high-fidelity feedback to trainees. We also conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.

  10. Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach (United States)

    Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan


    This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space

  11. A simulation approach for scheduling patients in the department of radiation oncology. (United States)

    Ogulata, S Noyan; Cetik, M Oya; Koyuncu, Esra; Koyuncu, Melik


    Physical therapy, hemodialysis and radiation oncology departments, in which patients go through lengthy and periodic treatments, need to utilize their limited and expensive equipment and human resources efficiently. In such departments, it is an important task to continue to treat current patients without interruption while admitting incoming patients. In this study, a patient scheduling approach for a university radiation oncology department is introduced to minimize delays in treatment due to potential prolongations of the treatments of current patients and to maintain efficient use of the daily treatment capacity. A simulation analysis of the scheduling approach is also conducted to assess its efficiency under different environmental conditions and to determine appropriate scheduling policy parameter values. The simulation analysis thus enables the determination of appropriate scheduling parameters under given circumstances, so the system can perform more efficiently.


    Directory of Open Access Journals (Sweden)



    Full Text Available Under highly uncertain and risky environments, the future estimates related to project proposals cannot be certain, truly materialized values. It is inevitable that there exists a deviation or gap between forecasted values and actual values. Thus, the project risk level of the proposal should be analyzed in the assessment phase. Simulation-based project evaluation approaches enable more reliable investment decisions, since they permit including future uncertainty and risk in the analysis process. In addition, project proposals are often evaluated with more than one conflicting criterion. The aim of this paper is to present a new approach that accounts for multiple objectives when evaluating risky investment projects and determining project risk levels. With the proposed simulation-based optimization approach, the values of project parameters necessary to reach the expected profitability of the investment with the minimum initial investment cost are determined. An illustrative example is also given in this study as an application of the proposed approach.
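
    The simulation-based evaluation described above can be sketched as a Monte Carlo net-present-value analysis: uncertain cash flows are sampled, and the project's risk level is read off as the probability of a negative NPV. All figures below are invented:

    ```python
    import random
    import statistics

    random.seed(5)

    def npv(cash_flows, rate):
        # discounted sum; cash_flows[0] is the (negative) initial investment
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def simulate_project(n=10_000, rate=0.08):
        npvs = []
        for _ in range(n):
            invest = -random.triangular(90, 130, 100)          # initial cost
            inflows = [random.triangular(20, 45, 30) for _ in range(5)]
            npvs.append(npv([invest] + inflows, rate))
        return npvs

    npvs = simulate_project()
    risk = sum(v < 0 for v in npvs) / len(npvs)   # P(NPV < 0): the risk level
    print(round(statistics.mean(npvs), 1), round(risk, 3))
    ```

    An optimization layer, as in the proposed approach, would then search over controllable parameters (e.g. the initial investment) for the cheapest configuration that still meets the expected profitability target.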

  13. Flight management research utilizing an oculometer. [pilot scanning behavior during simulated approach and landing (United States)

    Spady, A. A., Jr.; Kurbjun, M. C.


    This paper presents an overview of the flight management work being conducted using NASA Langley's oculometer system. Tests have been conducted in a Boeing 737 simulator to investigate pilot scan behavior during approach and landing for simulated IFR, VFR, motion versus no motion, standard versus advanced displays, and as a function of various runway patterns and symbology. Results of each of these studies are discussed. For example, results indicate that for the IFR approaches a difference in pilot scan strategy was noted for the manual versus coupled (autopilot) conditions. Also, during the final part of the approach when the pilot looks out-of-the-window he fixates on his aim or impact point on the runway and holds this point until flare initiation.

  14. The simulation of two-dimensional migration patterns - a novel approach

    International Nuclear Information System (INIS)

    Villar, Heldio Pereira


    A novel approach to the problem of simulation of two-dimensional migration of solutes in saturated soils is presented. In this approach, the two-dimensional advection-dispersion equation is solved by finite-differences in a stepwise fashion, by employing the one-dimensional solution first in the direction of flow and then perpendicularly, using the same time increment in both cases. As the results of this numerical model were to be verified against experimental results obtained by radioactive tracer experiments, an attenuation factor, to account for the contribution of the gamma rays emitted by the whole plume of tracer to the readings of the adopted radiation detectors, was introduced into the model. The comparison between experimental and simulated concentration contours showed good agreement, thus establishing the feasibility of the approach proposed herein. (author)
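The stepwise one-dimensional treatment described above is a form of operator splitting: one sweep along the flow direction, then one perpendicular sweep, both with the same time increment. A minimal NumPy sketch with assumed grid and transport parameters, periodic boundaries, and the gamma-attenuation factor omitted:

```python
import numpy as np

def step_1d(c, u, D, dx, dt, axis):
    """One explicit sweep along `axis`: upwind advection (u >= 0)
    plus central-difference dispersion."""
    adv = -u * dt / dx * (c - np.roll(c, 1, axis=axis))
    dif = D * dt / dx**2 * (np.roll(c, -1, axis=axis)
                            - 2 * c + np.roll(c, 1, axis=axis))
    return c + adv + dif

# Illustrative grid and parameters (assumed, not from the paper).
nx = ny = 64
dx, dt = 1.0, 0.2
ux, Dx = 1.0, 0.5        # advection + dispersion along the flow (x)
uy, Dy = 0.0, 0.1        # transverse direction: dispersion only

c = np.zeros((ny, nx))
c[ny // 2, nx // 4] = 1.0            # point injection of tracer

for _ in range(50):
    c = step_1d(c, ux, Dx, dx, dt, axis=1)   # sweep in flow direction
    c = step_1d(c, uy, Dy, dx, dt, axis=0)   # then perpendicular sweep

total_mass = c.sum()                 # conserved by both sweeps
```

With these parameters the explicit scheme satisfies the usual CFL and diffusion stability limits, and the plume advects downstream while spreading in both directions.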

  15. Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction

    Energy Technology Data Exchange (ETDEWEB)

    Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.; /Chicago U., EFI; Alvarez Gonzalez, B.; /Cantabria Inst. of Phys.; Amerio, S.; /INFN, Padua; Amidei, D.; /Michigan U.; Anastassov, A.; /Northwestern U.; Annovi, A.; /Frascati; Antos, J.; /Comenius U.; Apollinari, G.; /Fermilab; Appel, J.; /Fermilab; Apresyan, A.; /Purdue U. /Waseda U.


    The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias, and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper we present an analytic method for bias correction without using simulation, thereby removing any uncertainty arising from differences between data and simulation. This method is presented in the form of a measurement of the lifetime of the B{sup -} using the mode B{sup -} {yields} D{sup 0}{pi}{sup -}. The B{sup -} lifetime is measured as {tau}{sub B{sup -}} = 1.663 {+-} 0.023 {+-} 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.
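The essence of a simulation-free bias correction is that each event's likelihood is normalized over that event's own acceptance region, so the trigger bias cancels analytically. A toy version for an exponential lifetime with an event-specific trigger turn-on time (all rates and the trigger model are illustrative assumptions, not CDF's actual selection) might be:

```python
import math, random

def fit_lifetime(times, accept_start, tau_grid):
    """Grid-scan MLE with per-event normalization:
    p(t | accepted) = exp(-t/tau) / (tau * exp(-s_i/tau))  for t >= s_i,
    where s_i is event i's acceptance turn-on time."""
    best_tau, best_ll = None, -math.inf
    for tau in tau_grid:
        ll = 0.0
        for t, s in zip(times, accept_start):
            # log-density normalized over the event's acceptance [s, inf)
            ll += -t / tau - math.log(tau) + s / tau
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

rng = random.Random(7)
true_tau = 1.66                       # ps, roughly the B- lifetime scale
events = []
for _ in range(5000):
    t = rng.expovariate(1.0 / true_tau)
    s = rng.uniform(0.0, 1.0)         # event-specific trigger turn-on time
    if t >= s:                        # "trigger" accepts only t >= s
        events.append((t, s))

times = [t for t, _ in events]
starts = [s for _, s in events]
grid = [0.5 + 0.02 * i for i in range(150)]   # tau candidates 0.5 .. 3.48
tau_hat = fit_lifetime(times, starts, grid)
```

Because the normalization is computed per event from the known acceptance, no Monte Carlo model of the trigger is needed, which is the point of the measurement's approach.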

  16. Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach (United States)

    Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.


    The objective of this study is to investigate the use of the second generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performances of the bi-orthogonal second generation wavelet transform and the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments are compared in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256(exp 3) and 512(exp 3) DNS of forced homogeneous turbulence (Re(sub lambda) = 168) and 256(exp 3) and 512(exp 3) DNS of decaying homogeneous turbulence (Re(sub lambda) = 55). It is found that bi-orthogonal second generation wavelets can be used for coherent vortex extraction. The results of a priori tests indicate that second generation wavelets have better compression and the residual field is closer to Gaussian. However, it was found that the use of second generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.
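The coherent/incoherent splitting underlying CVS can be illustrated with a one-level orthonormal Haar transform and a universal threshold on the detail coefficients. This is a deliberately simplified 1-D stand-in for the 3-D vector-field wavelet decomposition used in the study:

```python
import numpy as np

def haar_decompose(x):
    """One-level orthonormal 1-D Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_reconstruct(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(0)
n = 1024
signal = np.sin(2 * np.pi * np.arange(n) / 64)     # "coherent" structure
field = signal + 0.1 * rng.standard_normal(n)      # plus incoherent noise

a, d = haar_decompose(field)
# Donoho-type universal threshold (illustrative choice of estimator)
thresh = np.sqrt(2 * np.var(d) * np.log(n))
d_coh = np.where(np.abs(d) > thresh, d, 0.0)

coherent = haar_reconstruct(a, d_coh)
incoherent = field - coherent
```

The large (above-threshold) coefficients carry the coherent part; the residual should look closer to Gaussian noise, which is exactly the property the a priori tests compare between wavelet families.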

  17. An Electric taxi fleet charging system using second life electric car batteries simulation and economical approach


    Canals Casals, Lluc; Amante García, Beatriz


    The industrial car manufacturers see in the high battery price an important obstacle for electric vehicle mass selling, and thus mass production. Therefore, in order to find some cost relief and better selling opportunities, they look and push forward to find profitable second battery uses. This study presents a simulation and an economical approach for an electric taxi fleet charging system, using these “old” electric car batteries, implemented in the city of Barcelona. The simulation w...

  18. A unified approach to building accelerator simulation software for the SSC

    International Nuclear Information System (INIS)

    Paxson, V.; Aragon, C.; Peggs, S.; Saltmarsh, C.; Schachinger, L.


    To adequately simulate the physics and control of a complex accelerator requires a substantial number of programs which must present a uniform interface to both the user and the internal representation of the accelerator. If these programs are to be truly modular, so that their use can be orchestrated as needed, the specification of both their graphical and data interfaces must be carefully designed. We describe the state of such SSC simulation software, with emphasis on addressing these uniform interface needs by using a standardized data set format and object-oriented approaches to graphics and modeling. 12 refs

  19. DSMC Simulation of Entry Vehicle Flowfields Using a Collision-Based Chemical Kinetics Approach (United States)

    Wilmoth, R. G.; VanGilder, D. B.; Papp, J. L.


    A study of high-altitude, nonequilibrium flows about an Orion Command Module (CM) is conducted using the collision-based chemical kinetics approach introduced by Bird in 2008. DSMC simulations are performed for Earth entry flow conditions and show significant differences in molecular dissociation in the shock layer from those obtained using traditional temperature-based procedures with an attendant reduction in the surface heat flux. Reaction rates derived from equilibrium simulations are also presented for selected reactions relevant to entry flow kinetics, and comparisons to various experimental and theoretical results are presented.

  20. Feasibility of non-linear simulation for Field II using an angular spectrum approach

    DEFF Research Database (Denmark)

    Du, Yigang; Jensen, Jørgen Arendt


    Simulation of non-linear fields is most often restricted to single element, circularly symmetric sources, which is not used in clinical scanning. To obtain a general and valuable simulation, array transducers of any geometry with any excitation, focusing, and apodization should be modeled. Field II...... to the transducer surface. This calculation is performed using Field II and, thus, includes modeling array transducers of any geometry with any excitation, focusing, and apodization. The propagation in the linear or non-linear medium is then performed using the angular spectrum approach. The first step in deriving...
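For the linear, monochromatic case, the angular spectrum approach amounts to multiplying the 2-D spatial Fourier transform of the source plane by a propagation transfer function. A minimal sketch (frequency, sound speed, and grid are assumed values, not taken from the paper):

```python
import numpy as np

def asa_propagate(p0, dx, z, f0, c0=1540.0):
    """Propagate a monochromatic 2-D source plane p0 a distance z
    using the (linear) angular spectrum approach."""
    k = 2 * np.pi * f0 / c0                     # medium wavenumber
    P = np.fft.fft2(p0)                         # angular spectrum at z = 0
    kx = 2 * np.pi * np.fft.fftfreq(p0.shape[1], d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(p0.shape[0], d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    # transfer function: phase advance for propagating components,
    # evanescent components dropped for simplicity
    H = np.where(kz2 >= 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(P * H)

# Illustrative check: a uniform plane wave acquires the phase e^{ikz}.
f0, c0, dx = 5e6, 1540.0, 1e-4
p0 = np.ones((64, 64), dtype=complex)
z = 0.01
p1 = asa_propagate(p0, dx, z, f0, c0)
```

In the paper's scheme, the linear field near the transducer comes from Field II and this spectral step carries it through the medium; the non-linear extension replaces the simple transfer function with a solution of the non-linear wave equation.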

  1. A kinematic approach for efficient and robust simulation of the cardiac beating motion.

    Directory of Open Access Journals (Sweden)

    Takashi Ijiri

    Full Text Available Computer simulation techniques for cardiac beating motions potentially have many applications and a broad audience. However, most existing methods require enormous computational costs and often show unstable behavior for extreme parameter sets, which interrupts smooth simulation studies and makes it difficult to apply them to interactive applications. To address this issue, we present an efficient and robust framework for simulating the cardiac beating motion. The global cardiac motion is generated by the accumulation of local myocardial fiber contractions. We compute such local-to-global deformations using a kinematic approach; we divide a heart mesh model into overlapping local regions, contract them independently according to fiber orientation, and compute a global shape that satisfies the contracted shapes of all local regions as closely as possible. A comparison between our method and a physics-based method showed that our method can generate motion very close to that of a physics-based simulation. Our kinematic method has high controllability; the simulated ventricle-wall-contraction speed can be easily adjusted to that of a real heart by controlling local contraction timing. We demonstrate that our method achieves a highly realistic beating motion of a whole heart in real time on a consumer-level computer. Our method provides an important step to bridge the gap between cardiac simulations and interactive applications.

  2. An applied artificial intelligence approach towards assessing building performance simulation tools

    Energy Technology Data Exchange (ETDEWEB)

    Yezioro, Abraham [Faculty of Architecture and Town Planning, Technion IIT (Israel); Dong, Bing [Center for Building Performance and Diagnostics, School of Architecture, Carnegie Mellon University (United States); Leite, Fernanda [Department of Civil and Environmental Engineering, Carnegie Mellon University (United States)


    With the development of modern computer technology, a large number of building energy simulation tools are available in the market. When choosing which simulation tool to use in a project, the user must consider the tool's accuracy and reliability, taking into account the building information they have at hand, which will serve as input for the tool. This paper presents an approach for assessing building performance simulation results against actual measurements, using artificial neural networks (ANN) for predicting building energy performance. Training and testing of the ANN were carried out with energy consumption data acquired for 1 week in the case building, called the Solar House. The predicted results show a good fit, with a mean absolute error of 0.9%. Moreover, four building simulation tools were selected in this study in order to compare their results with the ANN-predicted energy consumption: Energy-10, the Green Building Studio web tool, eQuest and EnergyPlus. The results showed that the more detailed simulation tools have the best simulation performance in terms of heating and cooling electricity consumption, within 3% mean absolute error. (author)

  3. Influence of rainfall spatial variability on rainfall-runoff modelling: Benefit of a simulation approach? (United States)

    Emmanuel, I.; Andrieu, H.; Leblois, E.; Janey, N.; Payrastre, O.


    No consensus has yet been reached regarding the influence of rainfall spatial variability on runoff modelling at catchment outlets. To eliminate modelling and measurement errors, in addition to controlling rainfall variability and both the characteristics and hydrological behaviour of catchments, we propose to proceed by simulation. We have developed a simulation chain that combines a stream network model, a rainfall simulator and a distributed hydrological model (with four production functions and a distributed transfer function). Our objective here is to use this simulation chain as a simplified test bed in order to better understand the impact of the spatial variability of rainfall forcing. We applied the chain to contrasted situations involving catchments ranging from a few tens to several hundred square kilometres, corresponding to urban and peri-urban catchments for which surface runoff constitutes the dominant process. The results obtained confirm that the proposed simulation approach is helpful to better understand the influence of rainfall spatial variability on the catchment response. We have shown that significant dispersion exists not only between the various simulation scenarios (defined by a rainfall configuration and a catchment configuration), but also within each simulation scenario. These results show that the organisation of rainfall during the study event over the study catchment plays an important role, leading us to examine rainfall variability indexes capable of summarising the influence of rainfall spatial organisation on the catchment response. Thanks to the simulation chain, we have tested the variability indexes of Zoccatelli et al. (2010) and improved them by proposing two other indexes.

  4. Teaching communication and therapeutic relationship skills to baccalaureate nursing students: a peer mentorship simulation approach. (United States)

    Miles, Leslie W; Mabey, Linda; Leggett, Sarah; Stansfield, Katie


    The literature on techniques for improving student competency in therapeutic communication and interpersonal skills is limited. A simulation approach to enhance the learning of communication skills was developed to address these issues. Second-semester and senior nursing students participated in videorecorded standardized patient simulations, with senior students portraying the patient. Following simulated interactions, senior students provided feedback to junior students on their use of communication skills and other therapeutic factors. To integrate the learning experience, junior students completed a written assignment, in which they identified effective and noneffective communication; personal strengths and weaknesses; and use of genuineness, empathy, and positive regard. A videorecording of each student interaction gave faculty the opportunity to provide formative feedback to students. Student evaluations have been positive. Themes identified in student evaluations include the impact of seeing oneself, significance of practicing, getting below the surface in communication, and moving from insight to goal setting. Copyright 2014, SLACK Incorporated.

  5. Fast simulation of non-linear pulsed ultrasound fields using an angular spectrum approach

    DEFF Research Database (Denmark)

    Du, Yigang; Jensen, Jørgen Arendt


    A fast non-linear pulsed ultrasound field simulation is presented. It is implemented based on an angular spectrum approach (ASA), which analytically solves the non-linear wave equation. The ASA solution to the Westervelt equation is derived in detail. The calculation speed is significantly...... increased compared to a numerical solution using an operator splitting method (OSM). The ASA has been modified and extended to pulsed non-linear ultrasound fields in combination with Field II, where any array transducer with arbitrary geometry, excitation, focusing and apodization can be simulated...... with a center frequency of 5 MHz. The speed is increased approximately by a factor of 140 and the calculation time is 12 min with a standard PC, when simulating the second harmonic pulse at the focal point. For the second harmonic point spread function the full width error is 1.5% at 6 dB and 6.4% at 12 d...

  6. Least squares approach for initial data recovery in dynamic data-driven applications simulations

    KAUST Repository

    Douglas, C.


    In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.

  7. Study on numerical analysis and experiment simulation approaches for radiation effects of typical optoelectronic devices

    International Nuclear Information System (INIS)

    Tang Benqi; Zhang Yong; Xiao Zhigang; Huang Fang; Wang Zujun; Huang Shaoyan; Mao Yongze; Wang Feng


    The numerical analysis and experimental simulation approaches were studied for radiation effects on typical optoelectronic devices, such as Si solar cells and CCDs. First, the damage mechanisms of ionization and displacement effects on solar cells and CCDs were analyzed. Second, the output characteristics of a Si solar cell under 1 MeV electron radiation were calculated with the two-dimensional device simulation software MEDICI, including the short-circuit current I sc , the open-circuit voltage V oc and the maximum power P max . The simulation results are in good agreement with the experimental values over a certain range of electron fluence. Meanwhile, an ionization radiation experiment was carried out on a commercial linear CCD with a 60 Co γ source using our self-designed test system, and valuable results on the variation of dark voltage and saturation voltage with total dose were obtained for the TCD132D. (author)

  8. Incorporating extrinsic noise into the stochastic simulation of biochemical reactions: A comparison of approaches (United States)

    Thanh, Vo Hong; Marchetti, Luca; Reali, Federico; Priami, Corrado


    The stochastic simulation algorithm (SSA) has been widely used for simulating biochemical reaction networks. SSA is able to capture the inherently intrinsic noise of the biological system, which is due to the discreteness of species populations and to the randomness of their reciprocal interactions. However, SSA does not consider other sources of heterogeneity in biochemical reaction systems, which are referred to as extrinsic noise. Here, we extend two simulation approaches, namely, the integration-based method and the rejection-based method, to take extrinsic noise into account by allowing the reaction propensities to vary in a time- and state-dependent manner. For both methods, new efficient implementations are introduced and their efficiency and applicability to biological models are investigated. Our numerical results suggest that the rejection-based method performs better than the integration-based method when the extrinsic noise is considered.
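The rejection-based idea can be sketched for a single degradation reaction whose rate is modulated by extrinsic noise: the propensity is bounded from above, candidate firing times are drawn at the bounding rate, and each candidate is accepted with probability a/a_max (a thinning step). The reaction, rates, and sinusoidal modulation below are illustrative assumptions, not the paper's benchmark models:

```python
import math, random

def ssa_rejection(x0, t_end, rng):
    """Degradation X -> 0 with extrinsically modulated rate
    a(t, x) = k(t) * x, where k(t) = k0 * (1 + 0.5*sin(w*t)) <= 1.5*k0.
    Candidate events fire at the bounding rate a_max and are accepted
    with probability a/a_max (thinning)."""
    k0, w = 1.0, 2.0
    t, x = 0.0, x0
    while x > 0 and t < t_end:
        a_max = 1.5 * k0 * x              # valid bound until x changes
        t += rng.expovariate(a_max)       # candidate firing time
        a = k0 * (1 + 0.5 * math.sin(w * t)) * x
        if rng.random() < a / a_max:      # rejection / acceptance step
            x -= 1
    return x

rng = random.Random(42)
final = [ssa_rejection(100, 1.0, rng) for _ in range(200)]
mean_final = sum(final) / len(final)
```

Because the bound holds for the current state, no propensity needs to be re-integrated between events, which is the efficiency argument for the rejection-based method.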

  9. A mathematical simulation approach to testing innovative models of dental education. (United States)

    Tennant, Marc; Kruger, Estie


    A combination of the increasing costs associated with providing a complex clinical program and an ever-reducing education-based income finds dental schools throughout Australia continuing to face serious financial risk. Even more important is the growing workforce crisis in academic staffing faced in almost all dental schools as the impact of the widening gap between private practice incomes and academic remuneration takes effect. This study developed a model of core variables and their relationship that was then transformed into a mathematical simulation tool that can be applied to test various scenarios and variable changes. The simulation model was tested against a theoretical dental education arrangement and found that this arrangement was a commercially viable pathway for new providers to enter the dental education market. This type of mathematical simulation approach is an important technique for analysis of the complex financial and operational management of modern dental schools.

  10. Simulations

    CERN Document Server

    Ngada, Narcisse


    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  11. A pseudo-spectral method for the simulation of poro-elastic seismic wave propagation in 2D polar coordinates using domain decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Sidler, Rolf, E-mail: [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)


    We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
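The radial discretization rests on Chebyshev collocation. A standard construction of the Chebyshev differentiation matrix on Gauss–Lobatto points (Trefethen's formulation, shown here only as generic background, not the authors' code) is:

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev collocation differentiation matrix on [-1, 1]
    over the n+1 Gauss-Lobatto points x_j = cos(pi*j/n)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    # off-diagonal entries: (c_i / c_j) / (x_i - x_j)
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    # diagonal: negative row sums (exact for differentiation matrices)
    D -= np.diag(D.sum(axis=1))
    return D, x

D, x = cheb_diff_matrix(16)
# spectral-accuracy check on a smooth test function
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
err = np.max(np.abs(D @ u - du_exact))
```

Applying such a matrix per ring, combined with FFT-based azimuthal derivatives, gives the spatial operator that the Runge–Kutta scheme then advances in time.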

  12. Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach

    Directory of Open Access Journals (Sweden)

    Sungwook Kim


    Full Text Available A mobile ad hoc network represents a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in a hostile, dynamic real-world network situation. Therefore, the proposed routing scheme is a powerful method for finding an effective solution to the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, over the existing schemes.
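A generic simulated-annealing loop of the kind employed here can be sketched on a toy relay-routing problem. The graph, cooling schedule, and move set below are illustrative assumptions, not the paper's protocol:

```python
import math, random

def route_cost(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def anneal_route(dist, src, dst, rng, steps=20_000):
    """Simulated-annealing search for a cheap src -> dst route that
    visits every relay node once (toy stand-in for MANET routing)."""
    relays = [v for v in range(len(dist)) if v not in (src, dst)]
    rng.shuffle(relays)
    route = [src] + relays + [dst]
    cost = route_cost(route, dist)
    temp = 1.0
    for _ in range(steps):
        i, j = rng.sample(range(1, len(route) - 1), 2)  # swap two relays
        cand = route[:]
        cand[i], cand[j] = cand[j], cand[i]
        c = route_cost(cand, dist)
        # downhill moves always accepted; uphill with Boltzmann probability,
        # which is what lets annealing escape local minima
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            route, cost = cand, c
        temp *= 0.9995                                  # geometric cooling
    return route, cost

rng = random.Random(5)
n = 8
pts = [(rng.random(), rng.random()) for _ in range(n)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best_route, best_cost = anneal_route(dist, 0, n - 1, rng)
```

In the paper the "cost" is a multi-metric energy combining residual energy, throughput, and load; the annealing skeleton is the same.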

  13. Adaptive life simulator: A novel approach to modeling the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Kangas, L.J.; Keller, P.E.; Hashem, S. [and others]


    In this paper, an adaptive life simulator (ALS) is introduced. The ALS models a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. These models are developed for use in applications that require simulations of cardiovascular systems, such as medical mannequins, and in medical diagnostic systems. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the actual variables of an individual can subsequently be used for diagnosis. This approach also exploits sensor fusion applied to biomedical sensors. Sensor fusion optimizes the utilization of the sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  14. Simulation and prediction of protein production in fed-batch E. coli cultures: An engineering approach. (United States)

    Calleja, Daniel; Kavanagh, John; de Mas, Carles; López-Santín, Josep


    An overall model describing the dynamic behavior of fed-batch E. coli processes for protein production has been built, calibrated and validated. Using a macroscopic approach, the model consists of three interconnected blocks allowing simulation of biomass, inducer and protein concentration profiles with time. The model incorporates calculation of the extra and intracellular inducer concentration, as well as repressor-inducer dynamics leading to a successful prediction of the product concentration. The parameters of the model were estimated using experimental data of a rhamnulose-1-phosphate aldolase-producer strain, grown under a wide range of experimental conditions. After validation, the model has successfully predicted the behavior of different strains producing two different proteins: fructose-6-phosphate aldolase and ω-transaminase. In summary, the presented approach represents a powerful tool for E. coli production process simulation and control. © 2015 Wiley Periodicals, Inc.

  15. Adaptive MANET multipath routing algorithm based on the simulated annealing approach. (United States)

    Kim, Sungwook


    A mobile ad hoc network represents a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in a hostile, dynamic real-world network situation. Therefore, the proposed routing scheme is a powerful method for finding an effective solution to the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, over the existing schemes.

  16. Rescaled Local Interaction Simulation Approach for Shear Wave Propagation Modelling in Magnetic Resonance Elastography

    Directory of Open Access Journals (Sweden)

    Z. Hashemiyan


    Full Text Available Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort.

  17. Pilot workload during approaches: comparison of simulated standard and noise-abatement profiles. (United States)

    Elmenhorst, Eva-Maria; Vejvoda, Martin; Maass, Hartmut; Wenzel, Jürgen; Plath, Gernot; Schubert, Ekkehart; Basner, Mathias


    A new noise-reduced landing approach, the Segmented Continuous Descent Approach (SCDA), was tested with regard to the resulting pilot workload. The workload of 40 pilots was measured using physiological (heart rate, blood pressure, blink frequency, saliva cortisol concentration) and psychological (fatigue, sleepiness, tension, and task load) parameters. Approaches were conducted in A320 and A330 full-flight simulators during a night shift. The SCDA was compared to the standard Low Drag Low Power (LDLP) procedure as reference. Mean heart rate and blood pressure during the SCDA were not elevated, but were in part significantly reduced (on average by 5 bpm and 4 mmHg for the flying captain). Cortisol levels did not change significantly, with mean values of 0.9 to 1.2 ng ml(-1). Landing was the most demanding segment of both approaches, as indicated by significant increases in heart rate and decreases in blink frequency. Subjective task load was low. Both approach procedures caused a similar workload level. In interpreting the results, methodological limitations have to be considered, e.g., the artificial and controlled airspace situation in the flight simulator. Nevertheless, it can be concluded that under these ideal conditions, the SCDA is operable without a higher workload for pilots compared to the common LDLP.

  18. Microstructural and magnetic properties of thin obliquely deposited films: A simulation approach

    Energy Technology Data Exchange (ETDEWEB)

    Solovev, P.N., E-mail: [Kirensky Institute of Physics, Siberian Branch of the Russian Academy of Sciences, 50/38, Akademgorodok, Krasnoyarsk 660036 (Russian Federation); Siberian Federal University, 79, pr. Svobodnyi, Krasnoyarsk 660041 (Russian Federation); Izotov, A.V. [Kirensky Institute of Physics, Siberian Branch of the Russian Academy of Sciences, 50/38, Akademgorodok, Krasnoyarsk 660036 (Russian Federation); Siberian Federal University, 79, pr. Svobodnyi, Krasnoyarsk 660041 (Russian Federation); Belyaev, B.A. [Kirensky Institute of Physics, Siberian Branch of the Russian Academy of Sciences, 50/38, Akademgorodok, Krasnoyarsk 660036 (Russian Federation); Siberian Federal University, 79, pr. Svobodnyi, Krasnoyarsk 660041 (Russian Federation); Reshetnev Siberian State Aerospace University, 31, pr. Imeni Gazety “Krasnoyarskii Rabochii”, Krasnoyarsk 660014 (Russian Federation)


    The relation between the microstructural and magnetic properties of thin obliquely deposited films has been studied by means of numerical techniques. Using our simulation code, based on a ballistic deposition model and a Fourier-space approach, we have investigated the dependences of the magnetometric tensor components and magnetic anisotropy parameters on the deposition angle of the films. A modified Netzelmann approach has been employed to study the structural and magnetic parameters of an isolated column in samples with a tilted columnar microstructure. The reliability and validity of the numerical methods used are confirmed by the good agreement of the calculation results with each other, as well as with our experimental data obtained by ferromagnetic resonance measurements of obliquely deposited thin Ni{sub 80}Fe{sub 20} films. The combination of these numerical methods can be used to design a magnetic film with a desirable value of uniaxial magnetic anisotropy and to extract the structure of an obliquely deposited film from magnetic measurements alone. - Highlights: • We present a simulation approach to study the relation between the structural and magnetic properties of oblique films. • The calculated dependence of magnetic anisotropy on the deposition angle accords well with experiment. • A modified Netzelmann approach is proposed. • It allows for the computation of the magnetic and structural parameters of an isolated column. • The proposed approach can be used for theoretical studies and for the characterization of oblique films.
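The ballistic deposition model can be illustrated in its classic 1+1-D, normal-incidence form: each particle falls in a random column and sticks at first contact, either on top of its own column or on the flank of a taller neighbor. The paper's code is a full oblique-incidence simulator, so this is only a structural sketch:

```python
import random

def ballistic_deposition(width, n_particles, rng):
    """Classic 1+1-D ballistic deposition with periodic boundaries:
    a particle dropped at a random column sticks at the first site where
    it touches the aggregate (its own column top or a taller neighbor)."""
    h = [0] * width
    for _ in range(n_particles):
        x = rng.randrange(width)
        h[x] = max(h[x] + 1, h[(x - 1) % width], h[(x + 1) % width])
    return h

rng = random.Random(11)
w, n = 200, 20_000
h = ballistic_deposition(w, n, rng)
mean_h = sum(h) / w
roughness = (sum((v - mean_h) ** 2 for v in h) / w) ** 0.5
```

Lateral sticking creates overhangs and porosity, so the film grows faster than a dense packing of the same particle count; at oblique incidence this mechanism, combined with shadowing, produces the tilted columns the paper analyzes.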

  19. A Proxy Outcome Approach for Causal Effect in Observational Studies: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Wenbin Liang


    Full Text Available Background. Known and unknown/unmeasured risk factors are the main sources of confounding effects in observational studies and can lead to false observations of elevated protective or hazardous effects. In this study, we investigate an alternative analysis approach that operates on field-specific knowledge rather than pure statistical assumptions. Method. The proposed approach introduces a proxy outcome into the estimation system. A proxy outcome possesses the following characteristics: (i) the exposure of interest is not a cause of the proxy outcome; (ii) the causes of the proxy outcome and the study outcome are subsets of a collection of correlated variables. Based on these two conditions, the confounding-effect-driven association between the exposure and the proxy outcome can be measured and used as a proxy estimate for the effects of unknown/unmeasured confounders on the outcome of interest. The performance of this approach is tested in a simulation study, whereby 500 different scenarios are generated, with the causal factors of the proxy outcome and the study outcome partly overlapping under low-to-moderate correlations. Results. The simulation results demonstrate that the conventional approach led to a correct conclusion in only 21% of the 500 scenarios, as compared to 72.2% for the alternative approach. Conclusion. The proposed method can be applied in observational studies in social science and health research that evaluate the health impact of behaviour and mental health problems.
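
    The proxy-outcome logic above lends itself to a small numerical sketch. The following is an illustrative construction under a simple linear model (the coefficients and the null true effect are assumptions for illustration, not taken from the paper): the exposure-proxy association estimates the confounding bias, which is then subtracted from the naive estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured confounder U drives the exposure X, the study outcome Y,
# and the proxy outcome P. The true causal effect of X on Y is zero.
u = rng.normal(size=n)
x = 0.8 * u + rng.normal(size=n)   # exposure
y = 1.0 * u + rng.normal(size=n)   # study outcome (not caused by X)
p = 1.0 * u + rng.normal(size=n)   # proxy outcome (not caused by X)

def slope(a, b):
    """OLS slope of b regressed on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

naive = slope(x, y)        # confounded estimate of the X -> Y effect
bias = slope(x, p)         # confounding-driven X -> P association
corrected = naive - bias   # proxy-adjusted estimate, close to the true 0

print(round(naive, 2), round(corrected, 2))
```

    The naive slope is clearly positive despite the null true effect, while the proxy-corrected estimate sits near zero, mirroring the mechanism the abstract describes.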

  20. An open, object-based modeling approach for simulating subsurface heterogeneity (United States)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.


    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.
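
    As a rough illustration of the object-based idea (not the HYVR API; the grid size, facies codes, object counts and ellipsoid geometries here are invented for the sketch), one can rasterize hierarchically ordered ellipsoidal objects onto a background grid, letting younger, smaller units erode older ones:

```python
import numpy as np

rng = np.random.default_rng(4)

# Background facies 0 on a small 3D grid; ellipsoidal "lens" objects of
# facies 1 (large) and facies 2 (small) are dropped in hierarchical order,
# with younger, smaller objects eroding older ones.
nx, ny, nz = 40, 40, 20
facies = np.zeros((nx, ny, nz), dtype=int)
X, Y, Z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")

for fac, n_obj, (a, b, c) in [(1, 5, (12.0, 8.0, 3.0)), (2, 15, (5.0, 4.0, 1.5))]:
    for _ in range(n_obj):
        cx, cy, cz = rng.uniform(0, nx), rng.uniform(0, ny), rng.uniform(0, nz)
        inside = ((X - cx) / a) ** 2 + ((Y - cy) / b) ** 2 + ((Z - cz) / c) ** 2 <= 1.0
        facies[inside] = fac   # overwrite older facies where objects overlap

print(np.unique(facies))
```

    A real implementation would additionally assign distributed hydraulic parameters and anisotropy within each object; this sketch only shows the geometric object-placement step.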

  1. Multi-scale approach in numerical reservoir simulation; Uma abordagem multiescala na simulacao numerica de reservatorios

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Solange da Silva


    Advances in petroleum reservoir description have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is the use of pseudo relative permeabilities, and the most widely used is the Kyte and Berry method (1975). In this work, a multi-scale computational model for multiphase flow is proposed that implicitly treats the upscaling without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale, it is possible to achieve results on a coarser grid without the expensive computations of a fine-grid model. The main advantage of this new procedure is that it treats the upscaling step implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach with those of the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)
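
    Upscaling in its simplest single-phase form can be sketched as follows (a generic illustration of why coarse-grid properties are non-trivial, not the pseudo-function or multi-scale method of the record above): for a layered medium, the effective permeability depends on flow direction and is bounded by the arithmetic and harmonic means of the fine-scale values.

```python
import numpy as np

# Fine-scale permeability field on a small 2D grid (a layered medium,
# values in arbitrary units chosen for illustration).
k_fine = np.array([[100.0, 100.0, 100.0],
                   [ 10.0,  10.0,  10.0],
                   [100.0, 100.0, 100.0]])

# Arithmetic mean: exact effective value for flow parallel to the layers.
k_parallel = k_fine.mean()

# Harmonic mean: exact effective value for flow perpendicular to the layers.
k_perpendicular = k_fine.size / np.sum(1.0 / k_fine)

print(k_parallel, round(k_perpendicular, 2))
```

    The large gap between the two means (70 vs 25 here) is exactly what makes single coarse-grid values, and hence pseudo functions or implicit multi-scale treatments, necessary.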

  2. Fast simulation approaches for power fluctuation model of wind farm based on frequency domain

    DEFF Research Database (Denmark)

    Lin, Jin; Gao, Wen-zhong; Sun, Yuan-zhang


    This paper discusses a model developed by Risø DTU, which is capable of simulating the power fluctuation of large wind farms in the frequency domain. In the original design, the “frequency-time” transformations are time-consuming and might limit the computation speed for a wind farm of large size.... The speed-up is more than 300 times if all these approaches are adopted, in any of the low, medium and high wind speed test scenarios....

  3. A Hierarchical FEM approach for Simulation of Geometrical and Material induced Instability of Composite Structures

    DEFF Research Database (Denmark)

    Hansen, Anders L.; Lund, Erik; Pinho, Silvestre T.


    In this paper a hierarchical FE approach is utilized to simulate delamination in a composite plate loaded in uni-axial compression. Progressive delamination is modelled by cohesive interface elements that are automatically embedded. The non-linear problem is solved quasi-statically, in which the interaction between material degradation and structural instability is resolved iteratively. The effect of fibre bridging is studied numerically, and in-plane failure is predicted using physically based failure criteria....

  4. Comparison by Simulation of Different Approaches to the Urban Traffic Control

    Czech Academy of Sciences Publication Activity Database

    Přikryl, Jan; Tichý, T.; Bělinová, Z.; Kapitán, J.


    Roč. 5, č. 4 (2012), s. 26-30 ISSN 1899-8208 R&D Projects: GA TA ČR TA01030603 Institutional support: RVO:67985556 Keywords: traffic * ITS * telematics * urban traffic control Subject RIV: BC - Control Systems Theory

  5. A novel approach to simulate gene-environment interactions in complex diseases

    Directory of Open Access Journals (Sweden)

    Nicodemi Mario


    Full Text Available Abstract Background Complex diseases are multifactorial traits caused by both genetic and environmental factors. They represent the major part of human diseases and include those with the largest prevalence and mortality (cancer, heart disease, obesity, etc.). Despite the large amount of information that has been collected about both genetic and environmental risk factors, there are few examples of studies on their interactions in the epidemiological literature. One reason may be incomplete knowledge of the power of the statistical methods designed to search for risk factors and their interactions in these data sets. An improvement in this direction would lead to a better understanding and description of gene-environment interactions. To this aim, a possible strategy is to challenge the different statistical methods against data sets where the underlying phenomenon is completely known and fully controllable, for example simulated ones. Results We present a mathematical approach that models gene-environment interactions. By this method it is possible to generate simulated populations having gene-environment interactions of any form, involving any number of genetic and environmental factors and also allowing non-linear interactions such as epistasis. In particular, we implemented a simple version of this model in a Gene-Environment iNteraction Simulator (GENS), a tool designed to simulate case-control data sets where a one gene-one environment interaction influences the disease risk. The main aim has been to allow the input of population characteristics by using standard epidemiological measures and to implement constraints to make the simulator behaviour biologically meaningful. Conclusions By the multi-logistic model implemented in GENS it is possible to simulate case-control samples of complex diseases where gene-environment interactions influence the disease risk. The user has full control of the main characteristics of the simulated population and a Monte
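
    A minimal sketch of the kind of multi-logistic case-control generator described above might look like this (the coefficients, prevalences and sample size are hypothetical, and the sketch is far simpler than GENS itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

g = rng.binomial(1, 0.3, n)   # carrier of a risk genotype (assumed prevalence 0.3)
e = rng.binomial(1, 0.4, n)   # exposed to an environmental factor (assumed 0.4)

# Logistic risk model with a gene-environment interaction term.
b0, b_g, b_e, b_ge = -3.0, 0.4, 0.5, 1.0
risk = 1.0 / (1.0 + np.exp(-(b0 + b_g * g + b_e * e + b_ge * g * e)))
disease = rng.binomial(1, risk)

cases = disease == 1
controls = ~cases

def odds_ratio(mask):
    """Case-control odds ratio for a stratum versus everyone else."""
    a = np.sum(cases & mask); b = np.sum(controls & mask)
    c = np.sum(cases & ~mask); d = np.sum(controls & ~mask)
    return (a * d) / (b * c)

# The carrier-and-exposed stratum shows a super-multiplicative joint effect
# driven by the interaction coefficient b_ge.
or_ge = odds_ratio((g == 1) & (e == 1))
print(round(float(or_ge), 2))
```

    Varying `b_ge` while holding the marginal coefficients fixed is the kind of controlled experiment such simulators make possible when benchmarking statistical methods.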

  6. On Mechanism, Process and Polity: An Agent-Based Modeling and Simulation Approach

    Directory of Open Access Journals (Sweden)

    Camelia Florela Voinea


    Full Text Available The present approach provides a theoretical account of political culture-based modeling of political change phenomena. Our approach is an agent-based simulation model inspired by a social-psychological account of the relation between the individual agents (citizens and the polity. It includes political culture as a fundamental modeling dimension. On this background, we reconsider the operational definitions of agent, mechanism, process, and polity so as to specify the role they play in the modeling of political change phenomena. We evaluate our previous experimental simulation experience in corruption emergence and political attitude change. The paper approaches the artificial polity as a political culture-based model of a body politic. It involves political culture concepts to account for the complexity of domestic political phenomena, going from political attitude change at the individual level up to major political change at the societal level. Architecture, structure, unit of interaction, generative mechanisms and processes are described. Both conceptual and experimental issues are described so as to highlight the differences between the simulation models of society and polity.

  8. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    International Nuclear Information System (INIS)

    Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter


    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies, in a systematic manner, optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  9. Fast geometric sensitivity analysis in hemodynamic simulations using a machine learning approach (United States)

    Sankaran, Sethuraman; Grady, Leo; Taylor, Charles


    In the cardiovascular system, blood flow rates, velocities and blood pressure are governed by the Navier-Stokes equations. Inputs to the system such as (a) the geometry of the arterial tree, (b) clinically measured blood pressure and viscosity, and (c) boundary resistances, among others, are typically uncertain. Due to the large number of such parameters, there is a need to efficiently quantify uncertainty in solution fields over this multi-parameter space. We use a machine learning approach to approximate the simulation-based solution. Using an offline database of pre-computed solutions, we compute a map (rule) from the features to the solution fields. This is coupled to an adaptive stochastic collocation method to quantify uncertainties in input parameters. We achieve a significant speed-up (~1000-fold) by approximating the simulation-based solution using a machine learning predictor. A bagged decision tree was found to be the best predictor among many candidate regressors (correlation coefficient ~0.92). The sensitivities obtained using the machine learning approach have a correlation coefficient of 0.91 with those obtained using finite element simulations. We also calculated and ranked the impact of different inputs such as problem geometry and clinical parameters, and observed that the impact of geometry supersedes that of the other variables. Mostly, segments with significant disease in the larger arteries had the highest sensitivities. We were able to localize sensitive regions in long segments with a focal disease using a multi-resolution approach.
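
    A surrogate of this flavor can be sketched with scikit-learn's `BaggingRegressor` over decision trees (the data below are a synthetic stand-in for the pre-computed solution database, not hemodynamic results; the target function is invented for illustration):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

# Synthetic stand-in for the offline database: input features (e.g. geometric
# and clinical parameters) mapped to a scalar "simulation" output.
X = rng.uniform(-1.0, 1.0, size=(2000, 4))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2]

# Bagged decision trees learn the feature -> solution map from 1500 samples.
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)
model.fit(X[:1500], y[:1500])

# Correlation between the surrogate and the "simulator" on held-out points:
# evaluating the surrogate is what makes the large speed-up possible.
pred = model.predict(X[1500:])
corr = np.corrcoef(pred, y[1500:])[0, 1]
print(round(corr, 3))
```

    Once trained, the surrogate can be evaluated at the thousands of collocation points a stochastic method requires, at negligible cost compared to a finite element solve.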

  10. A Coupled Multiphysics Approach for Simulating Induced Seismicity, Ground Acceleration and Structural Damage (United States)

    Podgorney, Robert; Coleman, Justin; Wilkins, Andrew; Huang, Hai; Veeraraghavan, Swetha; Xia, Yidong; Permann, Cody


    Numerical modeling has played an important role in understanding the behavior of coupled subsurface thermal-hydro-mechanical (THM) processes associated with a number of energy and environmental applications since as early as the 1970s. While the ability to rigorously describe all key tightly coupled controlling physics still remains a challenge, there have been significant advances in recent decades. These advances are related primarily to the exponential growth of computational power, the development of more accurate equations of state, improvements in the ability to represent heterogeneity and reservoir geometry, and more robust nonlinear solution schemes. The work described in this paper documents the development and linkage of several fully-coupled and fully-implicit modeling tools. These tools simulate: (1) the dynamics of fluid flow, heat transport, and quasi-static rock mechanics; (2) seismic wave propagation from the sources of energy release through heterogeneous material; and (3) the soil-structural damage resulting from ground acceleration. These tools are developed in Idaho National Laboratory's parallel Multiphysics Object Oriented Simulation Environment, and are integrated together using a global implicit approach. The governing equations are presented, the numerical approach for simultaneously solving and coupling the three physics tools is discussed, and the data input and output methodology is outlined. An example is presented to demonstrate the capabilities of the coupled multiphysics approach. The example involves simulating a system conceptually similar to the geothermal development in Basel, Switzerland, and the resultant induced seismicity, ground motion and structural damage are predicted.

  11. Numerical and experimental approaches to simulate soil clogging in porous media (United States)

    Kanarska, Yuliya; LLNL Team


    Failure of a dam by erosion ranks among the most serious accidents in civil engineering. The best way to prevent internal erosion is to use adequate granular filters in the transition areas where large hydraulic gradients can appear. In case of cracking and erosion, if the filter is capable of retaining the eroded particles, the crack will seal and the dam's safety will be ensured. A finite element numerical solution of the Navier-Stokes equations for fluid flow, together with a Lagrange multiplier technique for solid particles, was applied to the simulation of soil filtration. The numerical approach was validated through comparison of numerical simulations with the experimental results of base soil particle clogging in the filter layers performed at ERDC. The numerical simulation correctly predicted flow and pressure decay due to particle clogging. The base soil particle distribution was almost identical to that measured in the laboratory experiment. To gain a more precise understanding of soil transport in granular filters, we investigated the sensitivity of particle clogging mechanisms to various factors such as particle size ratio, the amplitude of the hydraulic gradient, particle concentration and contact properties. By averaging the results derived from the grain-scale simulations, we investigated how those factors affect the semi-empirical multiphase model parameters in the large-scale simulation tool. The Department of Homeland Security Science and Technology Directorate provided funding for this research.

  12. System-of-Systems Approach for Integrated Energy Systems Modeling and Simulation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Saurabh; Ruth, Mark; Pratt, Annabelle; Lunacek, Monte; Krishnamurthy, Dheepak; Jones, Wesley


    Today’s electricity grid is the most complex system ever built—and the future grid is likely to be even more complex because it will incorporate distributed energy resources (DERs) such as wind, solar, and various other sources of generation and energy storage. The complexity is further augmented by the possible evolution to new retail market structures that provide incentives to owners of DERs to support the grid. To understand and test new retail market structures and technologies such as DERs, demand-response equipment, and energy management systems while providing reliable electricity to all customers, an Integrated Energy System Model (IESM) is being developed at NREL. The IESM is composed of a power flow simulator (GridLAB-D), home energy management systems implemented using GAMS/Pyomo, a market layer, and hardware-in-the-loop simulation (testing appliances such as HVAC, dishwasher, etc.). The IESM is a system-of-systems (SoS) simulator wherein the constituent systems are brought together in a virtual testbed. We will describe an SoS approach for developing a distributed simulation environment. We will elaborate on the methodology and the control mechanisms used in the co-simulation illustrated by a case study.

  13. Quantitative spectral comparison by weighted spectral difference for protein higher order structure confirmation. (United States)

    Dinh, Nikita N; Winn, Bradley C; Arthur, Kelly K; Gabrielson, John P


    Previously, different approaches to spectral comparison were evaluated, and the spectral difference (SD) method was shown to be valuable for its linearity with spectral changes and its independence of data spacing (Anal. Biochem. 434 (2013) 153-165). In this note, we present an enhancement of the SD calculation, referred to as the "weighted spectral difference" (WSD), which implements a weighting function based on relative signal magnitude. While maintaining the advantages of the SD method, WSD improves the method's sensitivity to spectral changes and its tolerance for baseline inclusion. Furthermore, a generalized formula is presented to unify further development of approaches to quantifying spectral difference. Copyright © 2014 Elsevier Inc. All rights reserved.
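
    One plausible reading of the SD/WSD pair can be sketched as follows (the exact published formulas should be checked against the note itself; weighting by the relative magnitude of the reference spectrum is the assumed form, and the spectra below are invented):

```python
import numpy as np

def spectral_difference(ref, test):
    """Unweighted SD: root-mean-square difference of two spectra on a common grid."""
    d = np.asarray(test, float) - np.asarray(ref, float)
    return np.sqrt(np.mean(d ** 2))

def weighted_spectral_difference(ref, test):
    """WSD sketch: each point is weighted by the relative magnitude of the
    reference signal, so low-signal baseline regions contribute little."""
    ref = np.asarray(ref, float)
    test = np.asarray(test, float)
    w = np.abs(ref) / np.sum(np.abs(ref))
    return np.sqrt(np.sum(w * (test - ref) ** 2))

# The same absolute change moves WSD more on a peak than on the baseline,
# while the unweighted SD cannot tell the two cases apart.
ref = np.array([0.1, 0.2, 5.0, 0.2, 0.1])
on_peak = ref + np.array([0.0, 0.0, 0.5, 0.0, 0.0])
on_base = ref + np.array([0.5, 0.0, 0.0, 0.0, 0.0])
print(weighted_spectral_difference(ref, on_peak) >
      weighted_spectral_difference(ref, on_base))
```

    This magnitude weighting is what gives the method its tolerance for including baseline regions in the compared range.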

  14. Monte Carlo Simulations of PAC-Spectra as a General Approach to Dynamic Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Danielsen, Eva; Jorgensen, Lars Elkjaer; Sestoft, Peter [Royal Veterinary and Agricultural University, Department of Mathematics and Physics (Denmark)


    Time-Dependent Perturbed Angular Correlations of {gamma}-rays (PAC) can be used to study hyperfine interactions of a dynamic nature. However, the exact effect of a dynamic interaction on the PAC spectrum is sometimes difficult to derive analytically. A new approach based on Monte Carlo simulations is therefore suggested, here implemented as a Fortran 90 program for simulating PAC spectra of dynamic electric field gradients of any origin. The program is designed for the most common experimental condition, where the intermediate level has spin 5/2, but the approach can equally well be used for other spin states. Codes for four different situations have been developed: (1) rotational diffusion by jumps, used as a test case; (2) jumps between two states with different electric field gradients, different lifetimes and different orientations of the electric field gradient principal axes; (3) relaxation of one state to another; (4) molecules adhering to a surface with random rotational jumps around the axis perpendicular to the surface. To illustrate how this approach can be used to improve data interpretation, previously published data on {sup 111m}Cd-plastocyanin and {sup 111}Ag-plastocyanin are reconsidered. The strength of this novel approach is its simplicity and generality, so that other dynamic processes can easily be included by adding new program units describing the random process behind the dynamics. The program is hereby made publicly available.
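
    The spirit of the approach, averaging many stochastic histories when the analytic perturbation function is out of reach, can be sketched for case (2), jumps between two states. This toy model tracks only a scalar precession phase, not the full spin-5/2 PAC formalism, and all rates and frequencies are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of case (2): the precession frequency jumps at random between
# two states (say, two configurations of the electric field gradient).
w = [1.0, 2.0]           # angular frequency in each state (arbitrary units)
rate = 0.2               # jump rate per unit time, equal in both directions
dt, n_steps, n_nuclei = 0.05, 300, 1000

signal = np.zeros(n_steps)
for _ in range(n_nuclei):
    state = rng.integers(2)   # random initial state
    phase = 0.0
    for i in range(n_steps):
        signal[i] += np.cos(phase)
        phase += w[state] * dt
        if rng.random() < rate * dt:   # stochastic jump to the other state
            state = 1 - state
signal /= n_nuclei

# The ensemble average starts at 1 and is damped by the random jumps.
print(round(float(signal[0]), 3), round(float(np.max(np.abs(signal[-50:]))), 3))
```

    Adding a different dynamic process amounts to swapping in a different random-jump rule, which is the modularity the abstract emphasizes.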

  15. A Multi-Agent Approach to the Simulation of Robotized Manufacturing Systems (United States)

    Foit, K.; Gwiazda, A.; Banaś, W.


    The recent years of eventful industry development have brought many competing products addressed to the same market segment. Shortening the development cycle has become a necessity for a company that wants to stay competitive. With the switch to the Intelligent Manufacturing model, industry is searching for new scheduling algorithms, since the traditional ones do not meet current requirements. The agent-based approach has been considered by many researchers as an important direction of evolution for modern manufacturing systems. Due to the properties of multi-agent systems, this methodology is very helpful in creating a model of a production system, allowing both the processing and the informational part to be depicted. The complexity of such an approach makes analysis impossible without computer assistance. Computer simulation still uses a mathematical model to recreate a real situation, but nowadays 2D or 3D virtual environments, or even virtual reality, are used for realistic illustration of the considered systems. This paper focuses on robotized manufacturing systems and presents one possible approach to the simulation of such systems. The selection of the multi-agent approach is motivated by the flexibility of this solution, which offers modularity, robustness and autonomy.

  16. CATHARE Approach Recommended by EDF/SEPTEN for Training (or other) Simulators

    International Nuclear Information System (INIS)

    Pentori, B.; Iffeneckeft, F.; Poizat, F.


    This paper describes EDF's approach to NSSS thermal-hydraulics - this is the crucial module in a real-time simulator (this constraint relaxes requirements in respect of neutronics) because it determines the simulator's scope of application. The approach has involved several stages: (1) Existing full-scalers (1980-85 design), equipped with a five-equation primary model (about 40 nodes), coupled with a three-equation axial model of the SG secondary side (plus a very simple model for refilling/venting and draining), which can simulate only a small, 2-inch LOCA and up to 15 bar primary-system pressure; (2) SIPA(CT) and the new full-scalers at Fessenheim and Bugey (1990-95 design). These tools feature Cathare-Simu, an outgrowth of CATHARE 1 (six primary-system equations, four secondary-side equations, at least 187 nodes - extended to the steam header, implicit digital processing, possible parallelisation): this model permits simulation of breaks of up to 12 inches and at very low primary-system pressure; (3) SCAR (1995-2000 design) will be adapted from the CATHARE 2 design code (six equations everywhere, non condensables, 2D and 3D modules), and will allow simulator processing of all operating conditions (except for a severe accident, in the strict sense of core melt), including scenarios based on 481 broken primary piping, at atmospheric pressure. Only the fine-modelling capabilities of CATHARE make it possible to add genuine echographies to the traditional Man Machine Interface. (author)

  17. Towards socio-material approaches in simulation-based education: lessons from complexity theory. (United States)

    Fenwick, Tara; Dahlgren, Madeleine Abrandt


    Review studies of simulation-based education (SBE) consistently point out that theory-driven research is lacking. The literature to date is dominated by discourses of fidelity and authenticity - creating the 'real' - with a strong focus on developing clinical procedural skills. Little of this writing incorporates the theory and research proliferating in professional studies more broadly, which show how professional learning is embodied, relational and situated in socio-material relations. A key concern for medical educators is how to better prepare students for the unpredictable and dynamic ambiguity of professional practice; this has stimulated the movement towards socio-material theories in education that address precisely this question. Among the various socio-material theories that are informing new developments in professional education, complexity theory has been of particular importance for medical educators interested in updating current practices. This paper outlines key elements of complexity theory, illustrated with examples from empirical study, to argue its particular relevance for improving SBE. Complexity theory can make visible important material dynamics, and their problematic consequences, that often go unnoticed in simulated experiences in medical training. It also offers conceptual tools that can be put to practical use. This paper focuses on concepts of emergence, attunement, disturbance and experimentation. These suggest useful new approaches for designing simulated settings and scenarios, and for effective pedagogies before, during and following simulation sessions. Socio-material approaches such as complexity theory are spreading through research and practice in many aspects of professional education across disciplines. Here, we argue for the transformative potential of complexity theory in medical education, using simulation as our focus. Complexity tools open questions about the socio-material contradictions inherent in

  18. Multibunch and multiparticle simulation code with an alternative approach to wakefield effects

    Directory of Open Access Journals (Sweden)

    M. Migliorati


    Full Text Available The simulation of beam dynamics in the presence of collective effects requires a strong computational effort to take into account, in a self-consistent way, the wakefield acting on a given charge and produced by all the others. Generally this is done by means of a convolution integral or sum. Moreover, if the electromagnetic fields consist of resonant modes with high quality factors, responsible, for example, for coupled-bunch instabilities, a charge is also affected by itself in previous turns, and a very long record of the wakefield must be properly taken into account. In this paper we present a new simulation code for the longitudinal beam dynamics in a circular accelerator, which exploits an alternative approach to the currently used convolution sum, reducing the computing time and avoiding the issues related to the length of the wakefield for coupled-bunch instabilities. With this approach it is possible to simulate, without the need for large computing power, the single-bunch and multibunch beam dynamics simultaneously, including intrabunch motion. Moreover, for a given machine, generally both the coupling impedance and the wake potential of a short Gaussian bunch are known. However, a classical simulation code needs as input the so-called “Green” function, that is, the wakefield produced by a point charge, making some manipulation necessary to use the wake potential instead of the Green function. The method that we propose does not need the wakefield as input, but rather a particular fit of the coupling impedance based on the resonator impedance model, thus avoiding issues related to the knowledge of the Green function. The same approach can also be applied to the transverse case and to linear accelerators as well.
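
    For a resonator wake, the alternative to the O(n²) convolution sum can be sketched as a recursive complex-phasor update, a standard trick for exponentially damped oscillatory kernels. This is an illustration of the general idea, not the code described in the record; all numbers are invented:

```python
import numpy as np

tau, omega, dt, n = 50.0, 0.7, 1.0, 200   # resonator decay time, frequency, spacing
q = np.ones(n)                            # unit charges, one passage per time step

# (a) Direct convolution sum over the full wake history, O(n^2):
# toy resonator wake W(t) = exp(-t/tau) * cos(omega*t) for t > 0.
t = np.arange(1, n) * dt
wake = np.exp(-t / tau) * np.cos(omega * t)
v_conv = np.zeros(n)
for i in range(1, n):
    v_conv[i] = np.sum(q[:i][::-1] * wake[:i])

# (b) Recursive phasor update, O(n): for a resonator, the entire wake
# history folds into a single complex number that is decayed and rotated
# once per step, so no long wakefield record needs to be stored.
a = np.exp((1j * omega - 1.0 / tau) * dt)
s = 0.0 + 0.0j
v_rec = np.zeros(n)
for i in range(1, n):
    s = a * (s + q[i - 1])
    v_rec[i] = s.real

print(np.allclose(v_conv, v_rec))
```

    Because the recursion needs only the resonator parameters, it pairs naturally with the impedance-fitting step described in the abstract: once the impedance is fitted by resonators, no Green function record is required at all.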

  19. Modeling and analysis of a decentralized electricity market: An integrated simulation/optimization approach

    International Nuclear Information System (INIS)

    Sarıca, Kemal; Kumbaroğlu, Gürkan; Or, Ilhan


    In this study, a model is developed to investigate the implications of an hourly day-ahead competitive power market on generator profits, electricity prices, availability and supply security. An integrated simulation/optimization approach is employed, integrating a multi-agent simulation model with two alternative optimization models. The simulation model represents interactions between power generator, system operator, power user and power transmitter agents, while the network flow optimization model oversees and optimizes the electricity flows and dispatches generators based on two alternative approaches to modeling the underlying transmission network: a linear minimum-cost network flow model and a non-linear alternating-current optimal power flow model. Supply, demand, transmission, capacity and other technological constraints are thereby enforced. The transmission network on which the scenario analyses are carried out includes 30 buses, 41 lines, 9 generators, and 21 power users. The scenarios examined in the analysis cover various settings of transmission line capacities/fees and hourly learning algorithms. Results provide insight into key behavioral and structural aspects of a decentralized electricity market under network constraints and reveal the importance of using an AC network instead of a simplified linear network flow approach. -- Highlights: ► An agent-based simulation model with an AC transmission environment and a day-ahead market. ► Physical network parameters have dramatic effects on price levels and stability. ► Due to the AC nature of the transmission network, adaptive agents have more local market power than under the minimum-cost network flow model. ► The behavior of the generators has a significant effect on market price formation, as reflected in bidding strategies. ► Transmission line capacity and fee policies are found to be very influential in price formation in the market.

  20. The Romulus cosmological simulations: a physical approach to the formation, dynamics and accretion models of SMBHs (United States)

    Tremmel, M.; Karcher, M.; Governato, F.; Volonteri, M.; Quinn, T. R.; Pontzen, A.; Anderson, L.; Bellovary, J.


    We present a novel implementation of supermassive black hole (SMBH) formation, dynamics and accretion in the massively parallel tree+SPH code, ChaNGa. This approach improves the modelling of SMBHs in fully cosmological simulations, allowing for a more detailed analysis of SMBH-galaxy co-evolution throughout cosmic time. Our scheme includes novel, physically motivated models for SMBH formation, dynamics and sinking timescales within galaxies and SMBH accretion of rotationally supported gas. The sub-grid parameters that regulate star formation (SF) and feedback from SMBHs and SNe are optimized against a comprehensive set of z = 0 galaxy scaling relations using a novel, multidimensional parameter search. We have incorporated our new SMBH implementation and parameter optimization into a new set of high-resolution, large-scale cosmological simulations called Romulus. We present initial results from our flagship simulation, Romulus25, showing that our SMBH model results in SF efficiency, SMBH masses and global SF and SMBH accretion histories at high redshift that are consistent with observations. We discuss the importance of SMBH physics in shaping the evolution of massive galaxies and show how SMBH feedback is much more effective at regulating SF compared to SNe feedback in this regime. Further, we show how each aspect of our SMBH model impacts this evolution compared to more common approaches. Finally, we present a science application of this scheme studying the properties and time evolution of an example dual active galactic nucleus system, highlighting how our approach allows simulations to better study galaxy interactions and SMBH mergers in the context of galaxy-BH co-evolution.

  1. Prediction of Osmotic Pressure of Ionic Liquids Inside a Nanoslit by MD Simulation and Continuum Approach (United States)

    Moon, Gi Jong; Yang, Yu Dong; Oh, Jung Min; Kang, In Seok


Osmotic pressure plays an important role in the processes of charging and discharging of lithium batteries. In this work, the osmotic pressure of ionic liquids confined inside a nanoslit is calculated by using both MD simulation and a continuum approach. In the case of MD simulation, an ionic liquid is modeled as singly charged spheres with a short-ranged repulsive Lennard-Jones potential. The radii of the spheres are 0.5 nm, assuming symmetric ion sizes for simplicity. The simulation box size is 11 nm × 11 nm × 7.5 nm with 1050 ion pairs. The concentration of ionic liquid is about 1.922 mol/L, and the total charge on an individual wall varies from ±60e (7.944 μC/cm²) to ±600e (79.44 μC/cm²). In the case of the continuum approach, we classify the problems according to the correlation length and steric factor, and consider four separate cases: 1) zero correlation length and zero steric factor, 2) zero correlation length and non-zero steric factor, 3) non-zero correlation length and zero steric factor, and 4) non-zero correlation length and non-zero steric factor. Better understanding of the osmotic pressure of ionic liquids confined inside a nanoslit can be achieved by comparing the results of the MD simulation and the continuum approach. This research was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP: Ministry of Science, ICT & Future Planning) (No. 2017R1D1A1B05035211).
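One common realization of a "short-ranged repulsive Lennard-Jones potential" is the WCA truncation (the LJ potential cut at its minimum and shifted to zero). The abstract does not specify the exact form used, so this sketch is an assumption:

```python
def wca_potential(r, epsilon=1.0, sigma=1.0):
    """Purely repulsive WCA potential: the Lennard-Jones potential truncated
    at its minimum r_cut = 2^(1/6)*sigma and shifted so U(r_cut) = 0."""
    r_cut = 2.0 ** (1.0 / 6.0) * sigma
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6) + epsilon
```

With sigma of about 1 nm (twice the 0.5 nm ion radius) this reproduces the excluded-volume repulsion while leaving the electrostatics to the Coulomb interactions.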

  2. Tree species mapping in tropical forests using multi-temporal imaging spectroscopy: Wavelength adaptive spectral mixture analysis (United States)

    Somers, B.; Asner, G. P.


The use of imaging spectroscopy for floristic mapping of forests is complicated by the spectral similarity among co-existing species. Here we evaluated an alternative spectral unmixing strategy combining a time series of EO-1 Hyperion images and automated feature selection in Multiple Endmember Spectral Mixture Analysis (MESMA). The temporal analysis provided a way to incorporate species phenology, while feature selection indicated the best phenological time and best spectral feature set to optimize the separability between tree species. Instead of using the same set of spectral bands throughout the image, which is the standard approach in MESMA, our modified Wavelength Adaptive Spectral Mixture Analysis (WASMA) approach allowed the spectral subsets to vary on a per-pixel basis. As such, we were able to optimize the spectral separability between the tree species present in each pixel. The potential of the new approach for floristic mapping of tree species in Hawaiian rainforests was quantitatively assessed using both simulated and actual hyperspectral image time series. With a Cohen's Kappa coefficient of 0.65, WASMA provided a more accurate tree species map compared to conventional MESMA (Kappa = 0.54; p-value < 0.05). The flexible or adaptive use of band sets in WASMA provides an interesting avenue to address spectral similarities in complex vegetation canopies.
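The per-pixel band-subset idea can be sketched as a plain linear least-squares unmixing step. This is a strong simplification of MESMA/WASMA (no endmember-model search, no shade endmember), and the arrays are illustrative:

```python
import numpy as np

def unmix_pixel(pixel, endmembers, band_subset):
    """Estimate endmember fractions for one pixel using only the spectral
    bands in band_subset (the per-pixel adaptive choice in WASMA).

    pixel: (B,) reflectance vector; endmembers: (E, B) spectral library.
    Returns non-negative fractions normalized to sum to one."""
    A = endmembers[:, band_subset].T        # (len(band_subset), E)
    b = pixel[band_subset]
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    fractions = np.clip(fractions, 0.0, None)
    total = fractions.sum()
    return fractions / total if total > 0 else fractions

# Two synthetic endmembers over four bands, mixed 30/70
library = np.array([[1.0, 0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0, 1.0]])
mixed = 0.3 * library[0] + 0.7 * library[1]
fracs = unmix_pixel(mixed, library, [0, 1, 2, 3])
```

Varying `band_subset` from pixel to pixel is the essential difference from standard MESMA, which fixes the band set for the whole image.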

  3. A Companion Model Approach to Modelling and Simulation of Industrial Processes

    International Nuclear Information System (INIS)

    Juslin, K.


Modelling and simulation provide huge possibilities if broadly taken up by engineers as a working method. However, when considering the launching of modelling and simulation tools in an engineering design project, they shall be easy to learn and use. There is no time to write equations, to consult suppliers' experts, or to manually transfer data from one tool to another. The answer seems to be in the integration of easy-to-use and dependable simulation software with engineering tools. Accordingly, the modelling and simulation software shall accept as input such structured design information on industrial unit processes and their connections as is provided by e.g. CAD software and product databases. The software technology, including the required specification and communication standards, is already available. Internet-based service repositories make it possible for equipment manufacturers to supply 'extended products', including such design data as needed by engineers engaged in process and automation integration. There is a market niche evolving for simulation service centres, operating in co-operation with project consultants, equipment manufacturers, process integrators, automation designers, plant operating personnel, and maintenance centres. The companion model approach for the specification and solution of process simulation models, as presented herein, is developed from the above premises. The focus is on how to tackle real-world processes, which from the modelling point of view are heterogeneous, dynamic, very stiff, very nonlinear and only piecewise continuous, without extensive manual interventions by human experts. An additional challenge, to solve the arising equations fast and reliably, is dealt with as well. (orig.)

  4. Development and implementation of a clinical pathway approach to simulation-based training for foregut surgery. (United States)

    Miyasaka, Kiyoyuki W; Buchholz, Joseph; LaMarra, Denise; Karakousis, Giorgos C; Aggarwal, Rajesh


Contemporary demands on resident education call for integration of simulation. We designed and implemented a simulation-based curriculum for Post Graduate Year 1 surgery residents to teach technical and nontechnical skills within a clinical pathway approach for a foregut surgery patient, from outpatient visit through surgery and postoperative follow-up. The 3-day curriculum for groups of 6 residents comprises a combination of standardized patient encounters, didactic sessions, and hands-on training. The curriculum is underpinned by a summative simulation "pathway" repeated on days 1 and 3. The "pathway" is a series of simulated preoperative, intraoperative, and postoperative encounters that follow a single patient through a disease process. The resident sees a standardized patient in the clinic presenting with distal gastric cancer and then enters an operating room to perform a gastrojejunostomy on a porcine tissue model. Finally, the resident engages in a simulated postoperative visit. All encounters are rated by faculty members and the residents themselves, using standardized assessment forms endorsed by the American Board of Surgery. A total of 18 first-year residents underwent this curriculum. Faculty ratings of overall operative performance improved significantly following the 3-day module. Ratings of preoperative and postoperative performance were not significantly changed in 3 days. Resident self-ratings improved significantly for all encounters assessed, as did reported confidence in meeting the defined learning objectives. Conventional surgical simulation training focuses on technical skills in isolation. Our novel "pathway" curriculum targets an important gap in training methodologies by placing both technical and nontechnical skills in their clinical context as part of managing a surgical patient. Results indicate consistent improvements in assessments of performance as well as confidence, and support the curriculum's continued use to educate surgery residents.

  5. A random generation approach to pattern library creation for full chip lithographic simulation (United States)

    Zou, Elain; Hong, Sid; Liu, Limei; Huang, Lucas; Yang, Legender; Kabeel, Aliaa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Du, Chunshan; Hu, Xinyi; Wan, Qijian; Zhang, Recoo


As technology advances, the need to run lithographic (litho) checking for early detection of hotspots before tapeout has become essential. This process is important at all levels, from designing standard cells and small blocks to large intellectual property (IP) blocks and full chip layouts. Litho simulation provides high accuracy for detecting printability issues due to problematic geometries, but it has the disadvantage of slow performance on large designs and blocks [1]. Foundries have found a good compromise for running litho simulation on full chips by filtering out potential candidate hotspot patterns using pattern matching (PM), and then performing simulation on the matched locations. The challenge has always been how to easily create a PM library of candidate patterns that provides both comprehensive coverage of litho problems and fast runtime performance. This paper presents a new strategy for generating candidate real design patterns through a random generation approach using a layout schema generator (LSG) utility. The output patterns from the LSG are simulated, and then classified by a scoring mechanism that categorizes patterns according to the severity of the hotspots, the probability of their presence in the design, and the likelihood of the pattern causing a hotspot. The scoring output helps to filter out yield-problematic patterns that should be removed from any standard cell design, and also to define potentially problematic patterns that must be simulated within a bigger context to decide whether or not they represent an actual hotspot. This flow is demonstrated on SMIC 14nm technology, creating a candidate hotspot pattern library that can be used in full chip simulation with very high coverage and robust performance.
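The paper describes its scoring mechanism only qualitatively; a hypothetical weighted combination of the three stated factors might look like the following (the weights and thresholds are assumptions, not values from the paper):

```python
def pattern_score(severity, presence_prob, hotspot_likelihood,
                  weights=(0.5, 0.25, 0.25)):
    """Combine the three factors named in the text, each normalized to
    [0, 1], into a single score in [0, 1]. Weights are hypothetical."""
    w_sev, w_pres, w_hot = weights
    return w_sev * severity + w_pres * presence_prob + w_hot * hotspot_likelihood

def classify(score, remove_threshold=0.8, simulate_threshold=0.4):
    """Hypothetical triage: high scores are removed from standard-cell
    designs, mid scores are re-simulated in a bigger context, the rest pass."""
    if score >= remove_threshold:
        return "remove"
    if score >= simulate_threshold:
        return "simulate-in-context"
    return "pass"
```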

  6. Numerical simulations of cloud rise phenomena associated with nuclear bursts: compressible and low Mach approaches (United States)

    Kanarska, Y.; Lomov, I.; Antoun, T.


The nuclear cloud rise is a two-stage phenomenon. The initial phase (fireball expansion) of the cloud formation is dominated by compressible flow effects and the propagation of shock waves. At the later stage, shock waves become weak, the Mach number decreases, and the time steps required by an explicit code to model the acoustic waves make simulation of the late-time cloud dynamics with a compressible code very expensive. The buoyant cloud rise at this stage can be efficiently simulated with a low Mach-number approximation. In this approach acoustic waves are removed analytically, compressible effects are included as a non-zero divergence constraint due to background stratification, and the system of equations is solved implicitly using pressure projection methods. Our numerical approach includes fluid mechanical models that are able to simulate compressible, incompressible, and low Mach regimes. Compressible dynamics is simulated with the explicit high-order Eulerian code GEODYN (Lomov et al., 2001). It is based on the second-order Godunov method of Colella and Woodward (1984), which is extended to multiple dimensions using operator splitting. The code includes material interface tracking based on the volume-of-fluid (VOF) approach of Miller and Puckett (1996). The code we use for the low Mach approximation (LMC) is based on the incompressible solver of Bell et al. (2003). An unsplit second-order Godunov method and the MAC projection method (Bell et al., 2003) are used. An algebraic slip multiphase model is implemented to describe the fallout of dust particles. Both codes incorporate adaptive mesh refinement (AMR). Additionally, the codes are explicitly coupled via input/output files. First, we compared solutions for an idealized buoyant bubble rise problem, which is characterized by low Mach numbers, in the GEODYN and LMC codes. While the cloud evolution process is reproduced in both codes, some differences are found in the cloud rise speed and the cloud interface structure.

  7. CFD spray simulations for nuclear reactor safety applications with Lagrangian approach for droplet modelling

    International Nuclear Information System (INIS)

    Babic, M.; Kljenak, I.


The purposes of containment spray system operation during a severe accident in a light water reactor (LWR) nuclear power plant (NPP) are to depressurize the containment by steam condensation on spray droplets, to reduce the risk of hydrogen burning by mixing the containment atmosphere, and to collect radioactive aerosols from the containment atmosphere. While the depressurization may be predicted fairly well using lumped-parameter codes, the prediction of mixing and collection of aerosols requires a local description of transport phenomena. In the present work, modelling of sprays on a local instantaneous scale is presented, and the Design of Experiments (DOE) method is used to assess the influence of boundary conditions on the simulation results. Simulation results are compared to the TOSQAN 101 spray test, which was used for a benchmarking exercise in the European Severe Accident Research Network of Excellence (SARNET). The modelling approach is based on a Lagrangian description of the dispersed liquid phase (droplets), an Eulerian approach for the description of the continuous gas phase, and a two-way interaction between the phases. The simulations are performed using a combination of the computational fluid dynamics (CFD) code CFX4.4, which solves the gas transport equations, and a newly proposed dedicated Lagrangian droplet-tracking code. (author)
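The Lagrangian droplet-tracking step can be illustrated by one droplet velocity relaxing toward the gas velocity under a drag law with an assumed constant relaxation time tau. This is a minimal sketch; a real spray code adds gravity, evaporation, and the two-way coupling described above:

```python
def track_droplet(v0, u_gas, tau, dt, steps):
    """Explicit-Euler integration of dv/dt = (u_gas - v) / tau, the
    velocity-relaxation form of the droplet drag equation."""
    v = v0
    history = [v]
    for _ in range(steps):
        v += dt * (u_gas - v) / tau
        history.append(v)
    return history

# Droplet starting at rest in a 1 m/s gas stream, tau = 0.1 s
velocities = track_droplet(0.0, 1.0, 0.1, 0.01, 200)
```

In two-way coupled simulations the momentum the droplet gains here would be removed from the surrounding gas cell as a source term.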

  8. An evaluation of numerical approaches for S-wave component simulation in rock blasting

    Directory of Open Access Journals (Sweden)

    Qidong Gao


Full Text Available The shear wave (S-wave) component of the total blast vibration always plays an important role in damage to rock or adjacent structures. Numerical approaches have been considered an economical and effective tool for predicting blast vibration. However, the S-wave has not yet attracted enough attention in previous numerical simulations. In this paper, three typical numerical models, i.e. the continuum-based elastic model, the continuum-based damage model, and the coupled smooth particle hydrodynamics (SPH)-finite element method (FEM) model, were first introduced and developed to simulate the blasting of a single cylindrical charge. Then, the numerical results from the different models were evaluated based on a review of the generation mechanisms of the S-wave during blasting. Finally, some suggestions on the selection of numerical approaches for simulating the generation of the blast-induced S-wave were put forward. Results indicate that different numerical models produce different S-wave results. The coupled model performed best, owing to its outstanding capacity to produce the S-wave component. It is suggested that a model that can describe the cracking, sliding or heaving of the rock mass and the movement of fragments near the borehole should be selected preferentially, and priority should be given to a material constitutive law that can capture the nonlinear mechanical behavior of the rock mass near the borehole.

  9. Development of a Numerical Approach to Simulate Compressed Air Energy Storage Subjected to Cyclic Internal Pressure

    Directory of Open Access Journals (Sweden)

    Song-Hun Chong


Full Text Available This paper analyzes the long-term response of unlined energy storage located at shallow depth, to reduce the distance between a wind farm and the storage. The numerical approach follows a hybrid scheme that combines a mechanical constitutive model, used to extract stresses and strains at the first cycle, with polynomial-type strain accumulation functions that track the progressive plastic deformation. In particular, the strain functions include the fundamental features required to simulate the long-term response of geomaterials: volumetric strain (terminal void ratio), shear strain (shakedown and ratcheting), the strain accumulation rate, and stress obliquity. The model is tested with a triaxial strain boundary condition under different stress obliquities. The unlined storage subjected to cyclic internal stress is simulated with different storage geometries and stress amplitudes, which play a crucial role in estimating the long-term mechanical stability of underground storage. The simulations capture the evolution of the ground surface settlement, whose incremental rate decays as the material approaches a terminal void ratio. With regular and smooth displacement fields over a large number of cycles, the inflection point is estimated with a previously published surface settlement model.
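The abstract does not give the accumulation functions themselves; a simple asymptotic form with the "terminal" behavior it describes, purely for illustration, is:

```python
def accumulated_strain(n_cycles, terminal_strain, n_half):
    """Illustrative accumulation law: plastic strain grows with the cycle
    count N and saturates at terminal_strain; n_half is the cycle count at
    which half of the terminal strain has accumulated. This hyperbolic form
    is an assumption, not the paper's calibrated polynomial function."""
    return terminal_strain * n_cycles / (n_cycles + n_half)
```

The key qualitative feature is the decaying incremental rate: each additional cycle adds less strain, so the settlement approaches a terminal value rather than growing without bound.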

  10. Improving advanced cardiovascular life support skills in medical students: simulation-based education approach

    Directory of Open Access Journals (Sweden)

    Hamidreza Reihani


Full Text Available Objective: In this trial, we intend to assess the effect of a simulation-based education approach on advanced cardiovascular life support skills among medical students. Methods: Through a convenience sampling method, 40 interns of Mashhad University of Medical Sciences in their emergency medicine rotation (from September to December 2012) participated in this study. Advanced Cardiovascular Life Support (ACLS) workshops with pretest and post-test exams were performed. Workshops and checklists for the pretest and post-test exams were designed according to the latest American Heart Association (AHA) guidelines. Results: The total score of the students increased significantly after the workshops (from 24.6 out of 100 to 78.6 out of 100), a 53.9% improvement in skills after the simulation-based education (P < 0.001). The mean score of each station also improved significantly (P < 0.001). Conclusion: Pretests showed that interns performed poorly in practical clinical matters, while their scientific knowledge, such as ECG interpretation, was acceptable. The overall results of the study highlight that the simulation-based education approach is highly effective in improving ACLS skills among medical students.

  11. The fuel cell model of abiogenesis: a new approach to origin-of-life simulations. (United States)

    Barge, Laura M; Kee, Terence P; Doloboff, Ivria J; Hampton, Joshua M P; Ismail, Mohammed; Pourkashanian, Mohamed; Zeytounian, John; Baum, Marc M; Moss, John A; Lin, Chung-Kuang; Kidd, Richard D; Kanik, Isik


In this paper, we discuss how prebiotic geo-electrochemical systems can be modeled as a fuel cell and how laboratory simulations of the origin of life in general can benefit from this systems-led approach. As a specific example, the components of what we have termed the "prebiotic fuel cell" (PFC) that operates at a putative Hadean hydrothermal vent are detailed, and we use electrochemical analysis techniques and proton exchange membrane (PEM) fuel cell components to test the properties of this PFC and other geo-electrochemical systems; the results are reported here. The modular nature of fuel cells makes them ideal for creating geo-electrochemical reactors with which to simulate hydrothermal systems on wet rocky planets and characterize the energetic properties of the seafloor/hydrothermal interface. That electrochemical techniques should be applied to simulating the origin of life follows from the recognition of the fuel cell-like properties of prebiotic chemical systems and the earliest metabolisms. Conducting this type of laboratory simulation of the emergence of bioenergetics will not only be informative in the context of the origin of life on Earth but may also help in understanding whether life might emerge in similar environments on other worlds.

  12. An Automatic Approach to the Stabilization Condition in a HIx Distillation Simulation

    International Nuclear Information System (INIS)

    Chang, Ji Woon; Shin, Young Joon; Lee, Ki Young; Kim, Yong Wan; Chang, Jong Hwa; Youn, Cheung


In the Sulfur-Iodine (SI) thermochemical process to produce nuclear hydrogen, the H2O-HI-I2 ternary mixture solution discharged from the Bunsen reaction is primarily concentrated by electro-electrodialysis. The concentrated solution is distilled in the HIx distillation column to generate high-purity HI vapor. The pure HI vapor is obtained at the top of the HIx distillation column and the diluted HIx solution is discharged at the bottom of the column. In order to simulate the steady-state HIx distillation column, a vapor-liquid equilibrium (VLE) model of the H2O-HI-I2 ternary system is required; the subprogram to calculate VLE concentrations was introduced by the KAERI research group in 2006, and the steady-state simulation code for the HIx distillation process was developed in 2007. However, intrinsic features of the VLE data, such as the steep slope of the T-x-y diagram, caused instability in the simulation calculation. In this paper, a computer program to automatically find a stabilization condition in the steady-state simulation of the HIx distillation column is introduced. A graphic user interface (GUI) function to monitor the approach to the stabilization condition was added to this program.

  13. A Green's Function Approach to Simulate DNA Damage by the Indirect Effect (United States)

    Plante, Ianik; Cucinotta, Francis A.


DNA damage is of fundamental importance in understanding the effects of ionizing radiation. DNA is damaged by the direct effect of radiation (e.g. direct ionization) and by the indirect effect (e.g. damage by OH radicals created by the radiolysis of water). Despite years of research, many questions on DNA damage by ionizing radiation remain. In recent years, the Green's functions of the diffusion equation (GFDE) have been used extensively in biochemistry [1], notably to simulate biochemical networks in time and space [2]. In our future work on DNA damage, we wish to use an approach based on the GFDE to refine existing models of the indirect effect of ionizing radiation on DNA. To do so, we will use the code RITRACKS [3], developed at the NASA Johnson Space Center, to simulate the radiation track structure and calculate the positions of radiolytic species after irradiation. We have also recently developed an efficient Monte Carlo sampling algorithm for the GFDE of reversible reactions with an intermediate state [4], which can be modified and adapted to simulate DNA damage by free radicals. To do so, we will use the known reaction rate constants between radicals (OH, eaq, H, ...) and the DNA bases, sugars and phosphates, and use the sampling algorithms to simulate the diffusion of free radicals and their chemical reactions with DNA. These techniques should help in understanding the contribution of the indirect effect to the formation of DNA damage and double-strand breaks.
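The elementary operation in such simulations is propagating each radiolytic species by sampling the free-diffusion Green's function, a Gaussian with variance 2*D*dt per Cartesian coordinate. A minimal sketch of that step (reaction sampling and the intermediate-state algorithm of [4] are omitted):

```python
import math
import random

def diffuse(positions, D, dt, rng=random):
    """One diffusion step: displace every particle by independent Gaussian
    increments with variance 2*D*dt in each coordinate, i.e. draw from the
    free-diffusion Green's function."""
    s = math.sqrt(2.0 * D * dt)
    return [(x + rng.gauss(0.0, s), y + rng.gauss(0.0, s), z + rng.gauss(0.0, s))
            for (x, y, z) in positions]
```

After each step, a reaction-aware simulation would test particle pairs (e.g. a radical and a DNA base) for reaction using the rate constants mentioned above.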

  14. An Integrated Model for Simulating Regional Water Resources Based on Total Evapotranspiration Control Approach

    Directory of Open Access Journals (Sweden)

    Jianhua Wang


Full Text Available Total evapotranspiration and water consumption (ET) control is considered an efficient method for water management. In this study, we developed a water allocation and simulation (WAS) model, which can simulate the water cycle and output different ET values for natural and artificial water use, such as crop evapotranspiration, grass evapotranspiration, forest evapotranspiration, living water consumption, and industrial water consumption. In the calibration and validation periods, a "piece-by-piece" approach was used to evaluate the model from runoff to ET data, including remote sensing ET data and regional measured ET data, which differ from the data used in the traditional hydrological method. We applied the model to Tianjin City, China. The Nash-Sutcliffe efficiency (Ens) of the runoff simulation was 0.82, and its regression coefficient R2 was 0.92. The Ens of the regional total ET simulation was 0.93, and its regression coefficient R2 was 0.98. These results demonstrate that the ET of irrigated land is the dominant component, accounting for 53% of the total ET, and is therefore a priority in ET control for water management.
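The Nash-Sutcliffe efficiency values quoted above follow the standard definition, Ens = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))²; a short sketch:

```python
def nash_sutcliffe(observed, simulated):
    """Ens = 1 for a perfect match, 0 when the model is no better than
    predicting the observed mean, and negative when it is worse."""
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var
```

Applied to either runoff or total-ET series, values such as 0.82 and 0.93 indicate the simulation explains most of the observed variance.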

  15. Events simulation production for the BaBar experiment using the grid approach content

    International Nuclear Information System (INIS)

    Fella, A.; Andreotti, D.; Luppi, E.


The BaBar experiment has been taking data since 1999, investigating the violation of charge and parity (CP) symmetry in the field of High Energy Physics. Event simulation is an intensive computing task, due to the complexity of the Monte Carlo algorithms implemented using the GEANT engine. The data needed as input for the simulation, stored in the ROOT format, are classified into two categories: conditions data, describing the detector status when data are recorded, and background trigger data, providing the noise signal necessary to obtain a realistic simulation. To satisfy these requirements, in the traditional BaBar computing model events are distributed over several sites involved in the collaboration, where each site manager centrally manages a private farm dedicated to simulation production. The new grid approach applied to the BaBar production framework is discussed, along with the schema adopted for data deployment via Xrootd servers, including data management using grid middleware on distributed storage facilities spread over the INFN-GRID network. A comparison between the two models is provided, describing also the custom application developed for performing the whole production task on the grid and showing the results achieved. (Author)

  16. Simulating Controlled Radical Polymerizations with mcPolymer—A Monte Carlo Approach

    Directory of Open Access Journals (Sweden)

    Georg Drache


Full Text Available Utilizing model calculations may lead to a better understanding of the complex kinetics of controlled radical polymerization. We developed a universal simulation tool (mcPolymer), which is based on the widely used Monte Carlo simulation technique. This article focuses on the software architecture of the program, including its data management and optimization approaches. We were able to simulate polymer chains as individual objects, allowing us to gain more detailed microstructural information on the polymeric products. For all given examples of controlled radical polymerization (nitroxide-mediated radical polymerization (NMRP) homo- and copolymerization, atom transfer radical polymerization (ATRP), reversible addition fragmentation chain transfer polymerization (RAFT)), we present detailed performance analyses demonstrating the influence of the system size, the concentrations of reactants, and the peculiarities of the data. Different possibilities are illustrated by example for finding an adequate balance between precision, memory consumption, and computation time of the simulation. Due to its flexible software architecture, the application of mcPolymer is not limited to controlled radical polymerization, but can be adjusted in a straightforward manner to further polymerization models.
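The "polymer chains as individual objects" idea can be sketched with a toy Monte Carlo of ideal living chain growth. mcPolymer itself resolves full reaction kinetics (initiation, exchange, termination); this only illustrates the per-chain bookkeeping that makes microstructural quantities such as dispersity directly available:

```python
import random

def grow_chains(n_chains, n_additions, rng):
    """Each chain is an individual object (here just its length); at every
    step a uniformly chosen chain adds one monomer unit."""
    chains = [0] * n_chains
    for _ in range(n_additions):
        chains[rng.randrange(n_chains)] += 1
    return chains

def dispersity(chains):
    """Mw/Mn computed directly from the individual chain lengths."""
    number_avg = sum(chains) / len(chains)
    weight_avg = sum(length * length for length in chains) / sum(chains)
    return weight_avg / number_avg
```

For ideal living growth the length distribution is close to Poisson, so the dispersity stays near 1 + 1/(mean length), the narrow-distribution hallmark of controlled polymerization.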

  17. On the generalization of the hazard rate twisting-based simulation approach

    KAUST Repository

    Rached, Nadhir B.


Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Naive Monte Carlo simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. An alternative approach is the use of variance reduction techniques, known for their efficiency in requiring fewer computations to achieve the same accuracy. Most of these methods have thus far been proposed to deal with specific settings under which the RVs belong to particular classes of distributions. In this paper, we propose a generalization of the well-known hazard rate twisting Importance Sampling approach that presents the advantage of being logarithmically efficient for arbitrary sums of RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms, whose performances are often only proven under restrictive assumptions. The method also comes with good efficiency, illustrated by selected simulation results comparing its performance with that of some existing techniques.
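For iid Exp(1) variates, hazard rate twisting reduces to classical exponential twisting, which gives a compact self-checking sketch. The mean-matching choice of the twisting parameter below is a textbook heuristic, not the selection rule proposed in the paper:

```python
import math
import random

def twisted_tail_estimate(n, gamma, theta, samples, rng):
    """Importance-sampling estimate of P(X1 + ... + Xn > gamma) for iid
    Exp(1) variates: sample each variate from the twisted density
    Exp(1 - theta) and reweight by the likelihood ratio
    exp(-theta * S) / (1 - theta)**n."""
    total = 0.0
    for _ in range(samples):
        s = sum(rng.expovariate(1.0 - theta) for _ in range(n))
        if s > gamma:
            total += math.exp(-theta * s) / (1.0 - theta) ** n
    return total / samples

# Mean matching: choose theta so the twisted per-variate mean is gamma/n
n, gamma = 5, 20.0
theta = 1.0 - n / gamma          # twisted mean 1/(1 - theta) = gamma/n
estimate = twisted_tail_estimate(n, gamma, theta, 20000, random.Random(7))
```

The twisting pushes the sampling distribution into the rare-event region, so most samples land near the threshold instead of almost never exceeding it.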

  18. A hybrid FEM-DEM approach to the simulation of fluid flow laden with many particles (United States)

    Casagrande, Marcus V. S.; Alves, José L. D.; Silva, Carlos E.; Alves, Fábio T.; Elias, Renato N.; Coutinho, Alvaro L. G. A.


In this work we address a contribution to the study of particle-laden fluid flows at scales smaller than those of two-fluid models (TFM). The hybrid model is based on a Lagrangian-Eulerian approach. A Lagrangian description is used for the particle system, employing the discrete element method (DEM), while a fixed Eulerian mesh is used for the fluid phase, modeled by the finite element method (FEM). The resulting coupled DEM-FEM model is integrated in time with a subcycling scheme. The aforementioned scheme is applied in the simulation of a seabed current to analyze which mechanisms lead to the emergence of bedload transport and sediment suspension, and also to quantify the effective viscosity of the seabed in comparison with the ideal no-slip wall condition. A simulation of a salt plume falling in a fluid column is also performed, comparing the main characteristics of the system with an experiment.

  19. Investigation on pitch system loads by means of an integral multi body simulation approach (United States)

    Berroth, J.; Jacobs, G.; Kroll, T.; Schelenz, R.


In modern horizontal axis wind turbines the rotor blades are adjusted by three individual pitch systems to control power output. The pitch system consists of either a hydraulic or an electrical actuator, the blade bearing, the rotor blade itself and the control. In the case of an electrical drive, a gearbox is used to transmit the high torques that are required for blade pitch angle adjustment. In this contribution a new integral multi-body simulation approach is presented that enables detailed assessment of dynamic pitch system loads. The simulation results presented are compared with and evaluated against measurement data from a 2 MW-class reference wind turbine. The major focus of this contribution is on the assessment of nonlinear tooth contact behaviour, incorporating tooth backlash for the individual gear stages, and its impact on dynamic pitch system loads.

  20. A Unified Simulation Approach for the Fast Outage Capacity Evaluation over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.


    The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. The evaluation of the OC, when equal gain combining (EGC) or maximum ratio combining (MRC) diversity techniques are employed, boils down to computing the cumulative distribution function (CDF) of the sum of channel envelopes (equivalently, amplitudes) for EGC or channel gains (equivalently, squared envelopes/amplitudes) for MRC. Closed-form expressions for the CDF of the sum of many generalized fading variates are generally unknown and constitute open problems. We develop a unified hazard rate twisting Importance Sampling (IS) based approach to efficiently estimate the CDF of the sum of independent arbitrary variates. The proposed IS estimator is shown to achieve an asymptotic optimality criterion, which clearly guarantees its efficiency. Some selected simulation results are also shown to illustrate the substantial computational gain achieved by the proposed IS scheme over crude Monte Carlo simulations.
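
    The quantity being estimated here can be made concrete with a short sketch. The following crude Monte Carlo baseline (our own illustration, not the paper's code; function and parameter names are hypothetical) estimates the CDF of the sum of independent Rayleigh envelopes, i.e. the EGC case; the paper's hazard rate twisting IS estimator targets this same CDF far more efficiently when the threshold is small:

```python
import numpy as np

def egc_cdf_crude_mc(threshold, n_branches=4, scale=1.0,
                     n_samples=200_000, seed=0):
    """Crude Monte Carlo estimate of P(sum of Rayleigh envelopes <= threshold),
    the CDF that determines the outage capacity under EGC."""
    rng = np.random.default_rng(seed)
    # One Rayleigh-distributed channel envelope per diversity branch
    envelopes = rng.rayleigh(scale, size=(n_samples, n_branches))
    return float(np.mean(envelopes.sum(axis=1) <= threshold))
```

    For small thresholds the event is rare and the crude estimator needs enormous sample sizes for a given relative accuracy, which is precisely the regime the IS scheme is designed to handle.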

  1. The Analysis of Rush Orders Risk in Supply Chain: A Simulation Approach (United States)

    Mahfouz, Amr; Arisha, Amr


    Satisfying customers by delivering demands at the agreed time, at competitive prices, and at a satisfactory quality level is a crucial requirement for supply chain survival. The incidence of risks in a supply chain often causes sudden disruptions in its processes and consequently leads customers to lose trust in a company's competence. Rush orders are considered one of the main types of supply chain risk due to their negative impact on overall performance. Using integrated definition modeling approaches (i.e., IDEF0 and IDEF3) and simulation modeling techniques, a comprehensive integrated model has been developed to assess rush order risks and examine two risk mitigation strategies. Detailed function sequences and object flows were conceptually modeled to reflect the macro and micro levels of the studied supply chain. Discrete event simulation models were then developed to assess and investigate the mitigation strategies for rush order risks, with the objective of minimizing order cycle time and cost.
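
    As an illustration of why rush orders disrupt regular-order cycle times, here is a minimal discrete-event sketch of a single processing station where rush orders jump to the head of the waiting line. This is our own simplification, not the authors' IDEF0/IDEF3-based model; the arrival and service distributions and all parameters are hypothetical:

```python
import heapq
import random

def mean_cycle_time(n_orders=2000, rush_prob=0.1, seed=1):
    """Single-server queue where rush orders go to the head of the line.
    Returns (mean cycle time of rush orders, mean cycle time of regular orders)."""
    random.seed(seed)
    t, arrivals = 0.0, []
    for _ in range(n_orders):
        t += random.expovariate(1.0)                    # Poisson arrivals
        arrivals.append((t, random.random() < rush_prob,
                         random.expovariate(1.25)))     # service time
    waiting, server_free, i = [], 0.0, 0
    cycles = {True: [], False: []}
    while i < len(arrivals) or waiting:
        # admit every order that has arrived by the time the server frees up
        while i < len(arrivals) and (not waiting or arrivals[i][0] <= server_free):
            t_a, rush, svc = arrivals[i]
            heapq.heappush(waiting, (0 if rush else 1, t_a, svc, rush))
            i += 1
        prio, t_a, svc, rush = heapq.heappop(waiting)
        start = max(server_free, t_a)
        server_free = start + svc
        cycles[rush].append(server_free - t_a)          # wait + service
    return (sum(cycles[True]) / max(len(cycles[True]), 1),
            sum(cycles[False]) / max(len(cycles[False]), 1))
```

    Under congestion the rush orders' mean cycle time drops while regular orders wait longer; that displacement of regular work is the disruption the mitigation strategies must manage.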

  2. Simulation of the stochastic wave loads using a physical modeling approach

    DEFF Research Database (Denmark)

    Liu, W.F.; Sichani, Mahdi Teimouri; Nielsen, Søren R.K.


    In analyzing stochastic dynamic systems, analysis of the system uncertainty due to randomness in the loads plays a crucial role. Typically, time series of the stochastic loads are simulated using the traditional random phase method. This approach, combined with the fast Fourier transform algorithm, makes … an efficient way of simulating realizations of the stochastic load processes. However it requires many random variables, i.e. on the order of magnitude of 1000, to be included in the load model. Unfortunately, having too many random variables in the problem creates considerable difficulties in analyzing system … reliability or its uncertainty. Moreover, the applicability of the probability density evolution method to engineering problems faces critical difficulties when the system embeds too many random variables. Hence it is useful to devise a method which can make realization of the stochastic load processes with low …

  3. A simulation-based approach for solving assembly line balancing problem (United States)

    Wu, Xiaoyu


    The assembly line balancing problem is directly related to production efficiency. Since the last century, the problem of assembly line balancing has been discussed, and many people are still studying this topic. In this paper, the assembly line problem is studied by establishing a mathematical model and by simulation. First, a model for determining the smallest production beat (cycle time) under a given number of work stations is analyzed. Based on this model, the exponential smoothing approach is applied to improve the algorithm's efficiency. After this groundwork, the balancing problem of a gas Stirling engine assembly line is discussed as a case study. Both algorithms are implemented in the Lingo programming environment, and the simulation results demonstrate the validity of the new methods.
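
    The first step the abstract describes, finding the smallest production beat for a fixed number of stations, can be sketched with a standard bisection-plus-greedy heuristic (tasks packed in a fixed precedence order). This is a textbook SALBP-2-style approximation, not the paper's Lingo model, and the task times below are illustrative:

```python
def min_cycle_time(task_times, n_stations):
    """Smallest production beat (cycle time) that fits the tasks, taken in a
    fixed precedence order, into n_stations: bisection over the cycle time
    with a greedy packing feasibility check."""
    def fits(cycle):
        stations, load = 1, 0.0
        for t in task_times:
            if t > cycle:                 # a single task exceeds the beat
                return False
            if load + t <= cycle:
                load += t                 # keep filling the current station
            else:
                stations += 1             # open a new station
                load = t
        return stations <= n_stations
    lo, hi = max(task_times), float(sum(task_times))
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if fits(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

    The greedy check is only a lower-bounding heuristic when task reordering is allowed, but it conveys the structure of the beat-minimization model.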

  4. A unified simulation approach for the fast outage capacity evaluation over generalized fading channels

    KAUST Repository

    Rached, Nadhir B.


    The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. The evaluation of the OC, when Equal Gain Combining (EGC) or Maximum Ratio Combining (MRC) diversity techniques are employed, boils down to computing the Cumulative Distribution Function (CDF) of the sum of channel envelopes (equivalently, amplitudes) for EGC or channel gains (equivalently, squared envelopes/amplitudes) for MRC. Closed-form expressions for the CDF of the sum of many generalized fading variates are generally unknown and constitute open problems. In this paper, we develop a unified hazard rate twisting Importance Sampling (IS) based approach to efficiently estimate the CDF of the sum of independent arbitrary variates. The proposed IS estimator is shown to achieve an asymptotic optimality criterion, which clearly guarantees its efficiency. Some selected simulation results are also shown to illustrate the substantial computational gain achieved by the proposed IS scheme over crude Monte Carlo simulations.

  5. A combined ADER-DG and PML approach for simulating wave propagation in unbounded domains

    KAUST Repository

    Amler, Thomas


    In this work, we present a numerical approach for simulating wave propagation in unbounded domains which combines discontinuous Galerkin methods with arbitrary high-order time integration (ADER-DG) and a stabilized modification of perfectly matched layers (PML). Here, the ADER-DG method is applied to Bérenger’s formulation of the PML. The instabilities caused by the original PML formulation are treated by a fractional step method that makes it possible to monitor whether waves are damped in the PML region. In grid cells where waves are amplified by the PML, the contribution of the damping terms is neglected and the auxiliary variables are reset. Results of 2D simulations in acoustic media with constant and discontinuous material parameters are presented to illustrate the performance of the method.
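
    The damping-layer idea can be illustrated with a far simpler stand-in: a low-order finite-difference "sponge" layer for the 1D wave equation (this is not the paper's ADER-DG/PML scheme; all parameters are illustrative). Waves entering the absorbing region are attenuated before they can reflect back into the interior:

```python
import numpy as np

def wave_with_sponge(nx=400, nt=3000, c=1.0, dx=1.0, layer=80, sigma_max=1.0):
    """1D wave equation u_tt + 2*sigma(x)*u_t = c^2 * u_xx with a damping
    ('sponge') layer at each boundary -- a crude stand-in for a PML."""
    dt = 0.5 * dx / c                                    # CFL-stable time step
    sigma = np.zeros(nx)
    ramp = np.linspace(0.0, 1.0, layer) ** 2             # smooth damping profile
    sigma[:layer], sigma[-layer:] = sigma_max * ramp[::-1], sigma_max * ramp
    u = np.exp(-0.01 * (np.arange(nx) - nx / 2.0) ** 2)  # Gaussian pulse
    u_prev = u.copy()                                    # zero initial velocity
    C2 = (c * dt / dx) ** 2
    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
        # semi-implicit treatment of the damping term keeps the update stable
        u_next = (2.0 * u - u_prev + C2 * lap + sigma * dt * u_prev) \
                 / (1.0 + sigma * dt)
        u_prev, u = u, u_next
    return u
```

    After the pulse has crossed the domain, almost no energy remains: the sponge has absorbed the outgoing waves, which is the role the stabilized PML plays in the ADER-DG setting.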

  6. A theoretical approach to room acoustic simulations based on a radiative transfer model

    DEFF Research Database (Denmark)

    Ruiz-Navarro, Juan-Miguel; Jacobsen, Finn; Escolano, José


    A theoretical approach to room acoustic simulations based on a radiative transfer model is developed by adapting the classical radiative transfer theory from optics to acoustics. The proposed acoustic radiative transfer model expands classical geometrical room acoustic modeling algorithms...... by incorporating a propagation medium that absorbs and scatters radiation, handling both diffuse and non-diffuse reflections on boundaries and objects in the room. The main scope of this model is to provide a proper foundation for a wide number of room acoustic simulation models, in order to establish and unify...... their principles. It is shown that this room acoustic modeling technique establishes the basis of two recently proposed algorithms, the acoustic diffusion equation and the room acoustic rendering equation. Both methods are derived in detail using an analytical approximation and a simplified integral equation...

  7. Modeling and Simulation Resource Repository (MSRR)(System Engineering/Integrated M&S Management Approach (United States)

    Milroy, Audrey; Hale, Joe


    NASA's Exploration Systems Mission Directorate (ESMD) is implementing a management approach for modeling and simulation (M&S) that will provide decision-makers with information on a model's fidelity, credibility, and quality, including its verification, validation, and accreditation information. The NASA MSRR will be implemented leveraging M&S industry best practices. This presentation will discuss the requirements that will enable NASA to capture and make available the "meta data" or "simulation biography" data associated with a model. The presentation will also describe the requirements that drive how NASA will collect and document relevant information for models or suites of models in order to facilitate use and reuse of relevant models and provide visibility across NASA organizations and the larger M&S community.

  8. FENICIA: a generic plasma simulation code using a flux-independent field-aligned coordinate approach

    International Nuclear Information System (INIS)

    Hariri, Farah


    The primary thrust of this work is the development and implementation of a new approach to the problem of field-aligned coordinates in magnetized plasma turbulence simulations called the FCI approach (Flux-Coordinate Independent). The method exploits the elongated nature of micro-instability driven turbulence which typically has perpendicular scales on the order of a few ion gyro-radii, and parallel scales on the order of the machine size. Mathematically speaking, it relies on local transformations that align a suitable coordinate to the magnetic field to allow efficient computation of the parallel derivative. However, it does not rely on flux coordinates, which permits discretizing any given field on a regular grid in the natural coordinates such as (x, y, z) in the cylindrical limit. The new method has a number of advantages over methods constructed starting from flux coordinates, allowing for more flexible coding in a variety of situations including X-point configurations. In light of these findings, a plasma simulation code FENICIA has been developed based on the FCI approach with the ability to tackle a wide class of physical models. The code has been verified on several 3D test models. The accuracy of the approach is tested in particular with respect to the question of spurious radial transport. Tests on 3D models of the drift wave propagation and of the Ion Temperature Gradient (ITG) instability in cylindrical geometry in the linear regime demonstrate again the high quality of the numerical method. Finally, the FCI approach is shown to be able to deal with an X-point configuration such as one with a magnetic island with good convergence and conservation properties. (author) [fr

  9. A long-term, continuous simulation approach for large-scale flood risk assessments (United States)

    Falter, Daniela; Schröter, Kai; Viet Dung, Nguyen; Vorogushyn, Sergiy; Hundecha, Yeshewatesfa; Kreibich, Heidi; Apel, Heiko; Merz, Bruno


    The Regional Flood Model (RFM) is a process-based model cascade developed for flood risk assessments of large-scale basins. RFM consists of four model parts: the rainfall-runoff model SWIM, a 1D channel routing model, a 2D hinterland inundation model, and the flood loss estimation model for residential buildings FLEMOps+r. The model cascade recently underwent a proof-of-concept study in the Elbe catchment (Germany) to demonstrate that flood risk assessments based on a continuous simulation approach, including rainfall-runoff, hydrodynamic, and damage estimation models, are feasible for large catchments. The results of this study indicated that uncertainties are significant, especially for hydrodynamic simulations, essentially a consequence of low data quality and the disregard of dike breaches. Therefore, RFM was applied with a refined hydraulic model setup for the Elbe tributary Mulde. The study area, the Mulde catchment, comprises about 6,000 km2 and 380 river-km. The inclusion of more reliable information on overbank cross-sections and dikes considerably improved the results. For the application of RFM to flood risk assessments, long-term climate input data are needed to drive the model chain. This model input was provided by a multi-site, multi-variate weather generator that produces sets of synthetic meteorological data reproducing the current climate statistics. The data set comprises 100 realizations of 100 years of meteorological data. With the proposed continuous simulation approach of RFM, we simulated a virtual period of 10,000 years covering the entire flood risk chain, including hydrological, 1D/2D hydrodynamic, and flood damage estimation models. This provided a record of around 2,000 inundation events affecting the study area, with spatially detailed information on inundation depths and damage to residential buildings at a resolution of 100 m. This serves as the basis for a spatially consistent flood risk assessment for the Mulde catchment presented in

  10. Tomographic-spectral approach for dark matter detection in the cross-correlation between cosmic shear and diffuse γ-ray emission

    International Nuclear Information System (INIS)

    Camera, S.; Fornasa, M.; Fornengo, N.; Regis, M.


    We recently proposed to cross-correlate the diffuse extragalactic γ-ray background with the gravitational lensing signal of cosmic shear. This represents a novel and promising strategy to search for annihilating or decaying particle dark matter (DM) candidates. In the present work, we demonstrate the potential of a tomographic-spectral approach: measuring the cross-correlation in separate bins of redshift and energy significantly improves the sensitivity to a DM signal. Indeed, the technique proposed here takes advantage of the different scaling of the astrophysical and DM components with redshift and, simultaneously, of their different energy spectra and different angular extensions. The sensitivity to a particle DM signal is extremely promising even when the DM-induced emission is quite faint. We first quantify the prospects of detecting DM by cross-correlating the Fermi Large Area Telescope (LAT) diffuse γ-ray background with the cosmic shear expected from the Dark Energy Survey. Under the hypothesis of a significant subhalo boost, such a measurement can deliver a 5σ detection of DM if the DM particle is lighter than 300 GeV and has a thermal annihilation rate. We then forecast the capability of the European Space Agency Euclid satellite (whose launch is planned for 2020), in combination with a hypothetical future γ-ray detector with slightly improved specifications compared to current telescopes. We predict that the cross-correlation of their data will allow a measurement of the DM mass with an uncertainty of a factor of 1.5–2, even for moderate subhalo boosts, for DM masses up to a few hundred GeV and thermal annihilation rates

  11. Comparison of pressure reconstruction approaches based on measured and simulated velocity fields

    Directory of Open Access Journals (Sweden)

    Manthey Samuel


    Full Text Available The pressure drop over a pathological vessel section can be used as an important diagnostic indicator. However, it cannot be measured non-invasively. Multiple approaches for pressure reconstruction based on velocity information are available, but since in-vivo data introduce uncertainty, these approaches may not be robust, and validation is therefore required. Within this study, three independent methods to calculate pressure losses from velocity fields were implemented and compared: a three-dimensional and a one-dimensional method based on the Pressure Poisson Equation (PPE), as well as an approach based on the work-energy equation for incompressible fluids (WERP). In order to evaluate the different approaches, phantoms from pure Computational Fluid Dynamics (CFD) simulations and in-vivo PC-MRI measurements were used. The comparison of all three methods reveals good agreement with respect to the CFD pressure solutions for simple geometries. However, for more complex geometries all approaches lose accuracy. Hence, this study demonstrates the need for a careful selection of an appropriate pressure reconstruction algorithm.
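
    A minimal 2D sketch of the PPE route (our own simplification with Jacobi iteration and homogeneous Dirichlet boundaries, not one of the study's implementations): the source term is assembled from the velocity gradients of an incompressible field, and the Poisson equation is then relaxed to convergence.

```python
import numpy as np

def pressure_poisson(u, v, dx, rho=1.0, iters=5000):
    """Solve lap(p) = -rho * (ux^2 + 2*uy*vx + vy^2), the Pressure Poisson
    Equation source for a steady incompressible 2D velocity field (u, v),
    by Jacobi iteration with p = 0 on the boundary."""
    ux = np.gradient(u, dx, axis=1); uy = np.gradient(u, dx, axis=0)
    vx = np.gradient(v, dx, axis=1); vy = np.gradient(v, dx, axis=0)
    rhs = -rho * (ux ** 2 + 2.0 * uy * vx + vy ** 2)
    p = np.zeros_like(u)
    for _ in range(iters):
        # five-point Jacobi sweep on the interior nodes
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, :-2] + p[1:-1, 2:]
                                + p[:-2, 1:-1] + p[2:, 1:-1]
                                - dx * dx * rhs[1:-1, 1:-1])
    return p
```

    The in-vivo complication the study addresses is that measured PC-MRI velocities are noisy, so the differentiated source term, and hence the reconstructed pressure, inherits and amplifies that noise.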

  12. Symmetry breaking in frustrated XY models: Results from new self-consistent fluctuation approach and simulations (United States)

    Behzadi, Azad Esmailov


    The critical behavior of the fully frustrated XY model has remained controversial in spite of almost two decades of related research. In this study, we have developed a new method inspired by Netz and Berker's hard-spin mean-field theory. Our approach for XY models yields results consistent with Monte Carlo simulations as the ratio of antiferromagnetic to ferromagnetic interactions is varied. The method captures two phase transitions clearly separated in temperature for ratios of 0.5, 0.6, and 1.5, with these transitions moving closer together in temperature as the interaction ratio approaches 1.0, the fully frustrated case. From the system's chirality as a function of temperature in the critical region, we calculate the critical exponent β in agreement with an Ising transition for all of the interaction ratios studied, including 1.0. This result provides support for the view that there are two transitions, rather than one transition in a new universality class, occurring in the fully frustrated XY model. Finite size effects in this model can be essentially eliminated by rescaling the local magnetization, the quantity retained self-consistently in our computations. This rescaling scheme also shows excellent results when tested on the two-dimensional Ising model, and the method, as generalized, provides a framework for an analytical approach to complex systems. Monte Carlo simulations of the fully frustrated XY model in a magnetic field provide further evidence of two transitions. The magnetic field breaks the rotational symmetry of the model, but the two-fold chiral degeneracy of the ground state persists in the field. This lower degeneracy with the field present makes Monte Carlo simulations converge more rapidly. The critical exponent δ determined from the sublattice magnetizations as a function of field agrees with the value expected for a Kosterlitz-Thouless transition. Further, the zero-field specific heat obtained by extrapolation from simulations in a

  13. A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators (United States)

    Snyder, David B.; Wolford, David S.


    NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc changes for small source changes on each sub-cell. The resulting matrix, A, is normalized to unit changes in the sources, so that A·Δs = ΔIsc. This matrix can then be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, Δs = A⁻¹·ΔIsc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not repeated each time, since the measurement matrix needs to be only approximate; because an iterative approach is used, the method continues to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
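
    The matrix iteration described above can be sketched as follows. The lamp response function used in the test is a hypothetical stand-in for actual Isc measurements on the sub-cells; only the structure (perturb each source, build the sensitivity matrix, then iterate Δs = A⁻¹·ΔIsc) follows the abstract:

```python
import numpy as np

def adjust_sources(measure_isc, s0, target_isc, steps=8, ds=0.05):
    """Adjust all lamp settings at once: build a sensitivity matrix A from
    small source perturbations, then iterate delta_s = A^-1 * delta_Isc
    until the measured sub-cell currents match the calibrated targets."""
    s = np.asarray(s0, dtype=float)
    n = s.size
    base = measure_isc(s)
    A = np.empty((n, n))
    for j in range(n):                    # perturb one source at a time
        sp = s.copy()
        sp[j] += ds
        A[:, j] = (measure_isc(sp) - base) / ds
    A_inv = np.linalg.inv(A)              # A need only be approximate
    for _ in range(steps):
        s = s + A_inv @ (target_isc - measure_isc(s))
    return s
```

    Because A enters only through the correction step, a slightly stale or approximate matrix still converges, just as the abstract notes for aging lamps.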

  14. A Big Data and Learning Analytics Approach to Process-Level Feedback in Cognitive Simulations. (United States)

    Pecaric, Martin; Boutis, Kathy; Beckstead, Jason; Pusic, Martin


    Collecting and analyzing large amounts of process data for the purposes of education can be considered a big data/learning analytics (BD/LA) approach to improving learning. However, in the education of health care professionals, the application of BD/LA is limited to date. The authors discuss the potential advantages of the BD/LA approach for the process of learning via cognitive simulations. Using the lens of a cognitive model of radiograph interpretation with four phases (orientation, searching/scanning, feature detection, and decision making), they reanalyzed process data from a cognitive simulation of pediatric ankle radiography where 46 practitioners from three expertise levels classified 234 cases online. To illustrate the big data component, they highlight the data available in a digital environment (time-stamped, click-level process data). Learning analytics were illustrated using algorithmic computer-enabled approaches to process-level feedback. For each phase, the authors were able to identify examples of potentially useful BD/LA measures. For orientation, the trackable behavior of re-reviewing the clinical history was associated with increased diagnostic accuracy. For searching/scanning, evidence of skipping views was associated with an increased false-negative rate. For feature detection, heat maps overlaid on the radiograph can provide a metacognitive visualization of common novice errors. For decision making, the measured influence of sequence effects can reflect susceptibility to bias, whereas computer-generated path maps can provide insights into learners' diagnostic strategies. In conclusion, the augmented collection and dynamic analysis of learning process data within a cognitive simulation can improve feedback and prompt more precise reflection on a novice clinician's skill development.

  15. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets (United States)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus


    of simulation path at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence with regard to the grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent from the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant path techniques introduce a bias to the simulations was explored and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, and the spatial trend of the underlying data.

  16. Hybrid spectral CT reconstruction (United States)

    Clark, Darin P.


    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral

  17. Hybrid spectral CT reconstruction.

    Directory of Open Access Journals (Sweden)

    Darin P Clark

    Full Text Available Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with

  18. Full three-dimensional approach to the design and simulation of a radio-frequency quadrupole

    Directory of Open Access Journals (Sweden)

    B. Mustapha


    Full Text Available We have developed a new full 3D approach for the electromagnetic and beam dynamics design and simulation of a radio-frequency quadrupole (RFQ). A detailed full 3D model including vane modulation was simulated, which was made possible by ever-advancing computing capabilities. The electromagnetic (EM) design approach was first validated using experimental measurements on an existing prototype RFQ and more recently on the actual full-size RFQ. Two design options have been studied: the original, with standard sinusoidal modulation over the full length of the RFQ, and a second design in which a trapezoidal modulation was used in the accelerating section of the RFQ to achieve a higher energy gain for the same power and length. A detailed comparison of both options is presented, supporting our decision to select the trapezoidal design. The trapezoidal modulation increased the shunt impedance of the RFQ by 34% and the output energy by 15%, with a similar increase in the peak surface electric field but practically no change in the dynamics of the accelerated beam. The beam dynamics simulations were performed using three different field methods. The first uses the standard eight-term potential to derive the fields, the second uses 3D fields from individual cell-by-cell models, and the third uses the 3D fields for the whole RFQ as a single cavity. A detailed comparison of the results from TRACK shows very good agreement, validating the 3D fields approach used for the beam dynamics studies. The EM simulations were mainly performed using CST Microwave Studio, with the final results verified using other software. Detailed segment-by-segment and full-RFQ frequency calculations were performed and compared to the measured data. The maximum frequency deviation is about 100 kHz. The frequencies of higher-order modes have also been calculated, and finally the effects of the modulation and tuners on both the frequency and the field flatness have been studied. We believe that with

  19. Understanding price discovery in interconnected markets: Generalized Langevin process approach and simulation (United States)

    Schenck, Natalya A.; Horvath, Philip A.; Sinha, Amit K.


    While the literature on the price discovery process and information flow between dominant and satellite markets is extensive, most studies have applied an approach that can be traced back to Hasbrouck (1995) or Gonzalo and Granger (1995). In this paper, however, we propose a Generalized Langevin process with an asymmetric double-well potential function, with co-integrated time series and interconnected diffusion processes, to model the information flow and price discovery process in two interconnected markets, a dominant and a satellite. A simulated illustration of the model is also provided.
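
    A minimal Euler-Maruyama sketch of such a coupled system (our own illustration; the potential parameters and coupling constants are hypothetical and not calibrated to any market): the dominant price diffuses in an asymmetric double-well potential, while the satellite price reverts toward it, producing co-integrated dynamics.

```python
import numpy as np

def simulate_prices(n_steps=20_000, dt=0.01, kappa=0.5, sigma=(0.15, 0.1),
                    a=1.0, b=0.3, seed=42):
    """Euler-Maruyama simulation of two coupled Langevin processes:
    dominant price x in the asymmetric double-well V(x) = a*x^4/4 - x^2/2 + b*x,
    satellite price y pulled toward x at rate kappa (co-integration)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0] = y[0] = 1.0
    sqdt = np.sqrt(dt)
    for t in range(1, n_steps):
        drift_x = -(a * x[t-1] ** 3 - x[t-1] + b)        # -V'(x)
        x[t] = x[t-1] + drift_x * dt + sigma[0] * sqdt * rng.standard_normal()
        y[t] = y[t-1] + kappa * (x[t-1] - y[t-1]) * dt \
               + sigma[1] * sqdt * rng.standard_normal()
    return x, y
```

    The asymmetry term b tilts the two wells, so transitions between "price regimes" of the dominant market occur at unequal rates, and the satellite series follows with a lag set by kappa.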

  20. A Simulation Based Approach for Contingency Planning for Aircraft Turnaround Operation System Activities in Airline Hubs (United States)

    Adeleye, Sanya; Chung, Christopher


    Commercial aircraft undergo a significant number of maintenance and logistical activities during the turnaround operation at the departure gate. By analyzing the sequencing of these activities, more effective turnaround contingency plans may be developed for logistical and maintenance disruptions. Turnaround contingency plans are particularly important as any kind of delay in a hub based system may cascade into further delays with subsequent connections. The contingency sequencing of the maintenance and logistical turnaround activities were analyzed using a combined network and computer simulation modeling approach. Experimental analysis of both current and alternative policies provides a framework to aid in more effective tactical decision making.
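
    The network side of this combined approach can be illustrated with a critical-path computation over a small set of turnaround activities (the activity names and durations below are hypothetical, not the authors' data): the longest precedence-constrained chain determines the minimum turnaround time, and delays on that chain are the ones that cascade.

```python
def critical_path(activities):
    """Longest path through an activity-on-node network (CPM).
    activities: {name: (duration, [predecessor names])}.
    Returns (makespan, list of activities on the critical path)."""
    finish, best_pred = {}, {}
    def ft(name):                               # memoized earliest finish time
        if name not in finish:
            dur, preds = activities[name]
            start = 0.0
            for p in preds:
                if ft(p) > start:
                    start, best_pred[name] = ft(p), p
            finish[name] = start + dur
        return finish[name]
    end = max(activities, key=ft)               # activity finishing last
    path = [end]
    while path[-1] in best_pred:                # walk back along binding preds
        path.append(best_pred[path[-1]])
    return finish[end], path[::-1]
```

    Activities off the critical path carry slack, so contingency plans can re-sequence them without delaying departure; the simulation layer then tests such plans under stochastic durations.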

  1. Pulse fracture simulation in shale rock reservoirs: DEM and FEM-DEM approaches (United States)

    González, José Manuel; Zárate, Francisco; Oñate, Eugenio


    In this paper we analyze the capabilities of two numerical techniques based on DEM and FEM-DEM approaches for the simulation of fracture in shale rock caused by a pulse of pressure. We have studied the evolution of fracture in several fracture scenarios related to the initial stress state in the soil or the pressure pulse peak. Fracture length and type of failure have been taken as reference for validating the models. The results obtained show a good approximation to FEM results from the literature.

  2. Economic and simulation approach for renewable natural resources. Ethanol production in the EEC: a case study

    Energy Technology Data Exchange (ETDEWEB)

    Swinnen, J.F.; Jacobs, P.A.; Uytterhoeven, J.B.; Tollens, E.F.


    A simulation model was constructed in order to evaluate the economic feasibility of a number of well-known or hypothetical conversion processes of biomass to a variety of energy carriers and chemical products. The model was constructed in such a way that it allowed a general approach, with possibilities for testing the impact of different hypotheses and scenarios about market evolutions and technological developments on the overall economics of a process. A case study of the Biostil process for ethanol production in the EC shows that large amounts of subsidies are necessary for profitable production with this process. The price of by-products has a great impact on its profitability.

  3. Spectral modeling of magnetohydrodynamic turbulent flows. (United States)

    Baerenzung, J; Politano, H; Ponty, Y; Pouquet, A


    We present a dynamical spectral model for large-eddy simulation of the incompressible magnetohydrodynamic (MHD) equations based on the eddy damped quasinormal Markovian approximation. This model extends classical spectral large-eddy simulations for the Navier-Stokes equations to incorporate general (non-Kolmogorovian) spectra as well as eddy noise. We derive the model for MHD flows and show that the introduction of an eddy damping time for the dynamics of spectral tensors, in the absence of equipartition between the velocity and magnetic fields, leads to better agreement with direct numerical simulations, an important point for dynamo computations.

  4. A new development of the dynamic procedure in large-eddy simulation based on a Finite Volume integral approach. Application to stratified turbulence (United States)

    Denaro, Filippo Maria; de Stefano, Giuliano


    A Finite Volume-based large-eddy simulation method is proposed along with a suitable extension of the dynamic modelling procedure that accounts for the integral formulation of the governing filtered equations. The misleading interpretation of FV in some of the literature is also discussed. Then, the classical Germano identity is congruently rewritten in such a way that the determination of the modelling parameters does not require any arbitrary averaging procedure and thus retains a fully local character. The numerical modelling of stratified turbulence is the specific problem considered in this study, as an archetype of simple geophysical flows. The original scaling formulation of the dynamic sub-grid scale model proposed by Wong and Lilly (Phys. Fluids 6(6), 1994) is suitably extended to the present integral formulation. This approach is preferred to traditional ones because the eddy coefficients can be computed independently, avoiding the addition of unjustified buoyancy production terms in the constitutive equations. Simple scaling arguments allow us to avoid the equilibrium hypothesis, according to which the dissipation rate should equal the sub-grid scale energy production. A careful a priori analysis of the relevance of the test filter shape as well as the filter-to-grid ratio is reported. Large-eddy simulation results are a posteriori compared with a reference pseudo-spectral direct numerical solution that is suitably post-filtered in order to have a meaningful comparison. In particular, the spectral distribution of kinetic and thermal energy as well as the viscosity and diffusivity sub-grid scale profiles are illustrated. The good performance of the proposed method, in terms of both the evolution of global quantities and statistics, is very promising for the future development and application of the method.

  5. Evolving Playable Content for Cut the Rope through a Simulation-Based Approach

    DEFF Research Database (Denmark)

    Shaker, Mohammad; Shaker, Noor; Togelius, Julian


    In order to automatically generate high-quality game levels, one needs to be able to automatically verify that the levels are playable. The simulation-based approach to playability testing uses an artificial agent to play through the level, but building such an agent is not always an easy task... and such an agent is not always readily available. We discuss this problem in the context of the physics-based puzzle game Cut the Rope, which features continuous time and state space, making several approaches such as exhaustive search and reactive agents inefficient. We show that a deliberative Prolog... in this paper is likely to be useful for a large variety of games with similar characteristics...


    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Sahil; Wettlaufer, John S. [Program in Applied Mathematics, Yale University, New Haven, CT (United States); Sordo, Fabio Del [Department of Astronomy, Yale University, New Haven, CT (United States)


    Owing to technological advances, the number of exoplanets discovered has risen dramatically in the last few years. However, when trying to observe Earth analogs, it is often difficult to test the veracity of detection. We have developed a new approach to the analysis of exoplanetary spectral observations based on temporal multifractality, which identifies timescales that characterize planetary orbital motion around the host star and those that arise from stellar features such as spots. Without fitting stellar models to spectral data, we show how the planetary signal can be robustly detected from noisy data using noise amplitude as a source of information. For observation of transiting planets, combining this method with simple geometry allows us to relate the timescales obtained to the primary and secondary eclipses of the exoplanets. Making use of data obtained with ground-based and space-based observations, we have tested our approach on HD 189733b. Moreover, we have investigated the use of this technique in measuring planetary orbital motion via Doppler shift detection. Finally, we have analyzed synthetic spectra obtained using the SOAP 2.0 tool, which simulates a stellar spectrum and the influence of the presence of a planet or a spot on that spectrum over one orbital period. We have demonstrated that, so long as the signal-to-noise ratio is ≥ 75, our approach reconstructs the planetary orbital period, as well as the rotation period of a spot on the stellar surface.

  7. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean


    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
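The bootstrap step described for the single hydrological model parameter can be sketched generically. The following is a simplified stand-in, not the SHYREG code: the "calibrated parameter" here is simply the mean of a synthetic flood-peak sample, and the fit function is an assumption for illustration.

```python
import numpy as np

def bootstrap_parameter_ci(obs, fit, n_boot=1000, alpha=0.1, seed=42):
    """Nonparametric bootstrap of a calibrated parameter: resample the
    observations with replacement, refit, and return the (1 - alpha)
    percentile confidence interval of the parameter estimates."""
    rng = np.random.default_rng(seed)
    estimates = np.array([fit(rng.choice(obs, size=obs.size, replace=True))
                          for _ in range(n_boot)])
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Toy data: 30 synthetic annual flood peaks drawn from a gamma distribution.
rng = np.random.default_rng(0)
peaks = rng.gamma(shape=4.0, scale=50.0, size=30)
lo, hi = bootstrap_parameter_ci(peaks, fit=np.mean)   # 90% CI on the mean peak
```

The same resampling loop applies when `fit` is a full calibration of a rainfall-runoff model rather than a sample mean; only the cost per bootstrap replicate changes.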

  8. A Comparison of Modeling Approaches in Simulating Chlorinated Ethene Removal in a Constructed Wetland by a Microbial Consortia

    National Research Council Canada - National Science Library

    Campbell, Jason


    ... of the modeling approaches affect simulation results. Concepts like microbial growth in the form of a biofilm and spatially varying contaminant concentrations bring the validity of the CSTR assumption into question...

  9. Initial flight and simulator evaluation of a head up display for standard and noise abatement visual approaches (United States)

    Bourquin, K.; Palmer, E. A.; Cooper, G.; Gerdes, R. M.


    A preliminary assessment was made of the adequacy of a simple head up display (HUD) for providing vertical guidance for flying noise abatement and standard visual approaches in a jet transport. The HUD featured gyro-stabilized approach angle scales which display the angle of declination to any point on the ground and a horizontal flight path bar which aids the pilot in his control of the aircraft flight path angle. Thirty-three standard and noise abatement approaches were flown in a Boeing 747 aircraft equipped with a head up display. The HUD was also simulated in a research simulator. The simulator was used to familiarize the pilots with the display and to determine the most suitable way to use the HUD for making high capture noise abatement approaches. Preliminary flight and simulator data are presented and problem areas that require further investigation are identified.

  10. Evaluation of near-wall solution approaches for large-eddy simulations of flow in a centrifugal pump impeller

    Directory of Open Access Journals (Sweden)

    Zhi-Feng Yao


    The turbulent flow in a centrifugal pump impeller is bounded by complex surfaces, including blades, a hub and a shroud. The primary challenge of the flow simulation arises from the generation of a boundary layer between the surface of the impeller and the moving fluid. The principal objective is to evaluate the near-wall solution approaches that are typically used to deal with the flow in the boundary layer for the large-eddy simulation (LES) of a centrifugal pump impeller. Three near-wall solution approaches – the wall-function approach, the wall-resolved approach and the hybrid Reynolds-averaged Navier–Stokes (RANS) and LES approach – are tested. The simulation results are compared with experimental results obtained through particle imaging velocimetry (PIV) and laser Doppler velocimetry (LDV). It is found that the wall-function approach is more sparing of computational resources, while the other two approaches have the important advantage of providing highly accurate boundary layer flow prediction. The hybrid RANS/LES approach is suitable for predicting steady-flow features, such as time-averaged velocities and hydraulic losses. Although the wall-resolved approach is expensive in terms of computing resources, it exhibits a strong ability to capture small-scale vortices and predict instantaneous velocity in the near-wall region of the impeller. The wall-resolved approach is thus recommended for the transient simulation of flows in centrifugal pump impellers.

  11. Spectral self-imaging effect by time-domain multilevel phase modulation of a periodic pulse train. (United States)

    Caraquitena, José; Beltrán, Marta; Llorente, Roberto; Martí, Javier; Muriel, Miguel A


    We propose and analyze a novel (to our knowledge) approach to implement the spectral self-imaging effect of optical frequency combs. The technique is based on time-domain multilevel phase-only modulation of a periodic optical pulse train. The method admits both infinite- and finite-duration periodic pulse sequences. We show that the fractional spectral self-imaging effect allows one to reduce by an integer factor the comb frequency spacing. Numerical simulation results support our theoretical analysis.
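A minimal numerical check of the effect can be sketched as follows. This is an illustration of the general principle, not the authors' implementation: applying the period-2 Talbot phase sequence φₙ = πn²/2 (phases 0 and π/2 on alternating pulses) to a Gaussian pulse train should halve the comb spacing by creating new spectral lines midway between the original ones.

```python
import numpy as np

def comb_spectrum(phases, n_pulses=64, samples_per_period=32, width=2.0):
    """Power spectrum of a periodic Gaussian pulse train in which pulse n
    carries the extra phase phases[n % len(phases)] (time-domain
    multilevel phase-only modulation)."""
    t = np.arange(n_pulses * samples_per_period, dtype=float)
    field = np.zeros(t.size, dtype=complex)
    for n in range(n_pulses):
        center = (n + 0.5) * samples_per_period
        field += np.exp(1j * phases[n % len(phases)]) * np.exp(-((t - center) / width) ** 2)
    return np.abs(np.fft.fft(field)) ** 2

plain  = comb_spectrum([0.0])                # unmodulated train
talbot = comb_spectrum([0.0, np.pi / 2])     # phi_n = pi*n^2/2, period 2

# The plain comb has lines only at multiples of 64 FFT bins (spacing 1/T);
# the Talbot-phased train adds lines at multiples of 32 bins (spacing 1/2T).
```

Inspecting `plain[32]` versus `talbot[32]` shows the new line appearing with an amplitude comparable to the original lines, i.e. the comb frequency spacing is reduced by the integer factor 2.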

  13. Designing a Virtual Simulation Case for Cultural Competence Using a Community-Based Participatory Research Approach: A Puerto Rican Case. (United States)

    Mathew, Lilly; Brewer, Barbara B; Crist, Janice D; Poedel, Robin J

    In this study, a community-based participatory research approach was used for developing content for a virtual simulation case. The virtual simulation case was designed to develop the cultural competence of prelicensure nursing students in caring for a Puerto Rican patient with diabetes. This article presents the method used to establish a Puerto Rican community advisory board to develop content for a virtual simulation case for cultural competency.

  14. An efficient spectral crystal plasticity solver for GPU architectures (United States)

    Malahe, Michael


    We present a spectral crystal plasticity (CP) solver for graphics processing unit (GPU) architectures that achieves a tenfold increase in efficiency over prior GPU solvers. The approach makes use of a database containing a spectral decomposition of CP simulations performed using a conventional iterative solver over a parameter space of crystal orientations and applied velocity gradients. The key improvements in efficiency come from reducing global memory transactions, exposing more instruction-level parallelism, reducing integer instructions and performing fast range reductions on trigonometric arguments. The scheme also makes more efficient use of memory than prior work, allowing for larger problems to be solved on a single GPU. We illustrate these improvements with a simulation of 390 million crystal grains on a consumer-grade GPU, which executes at a rate of 2.72 s per strain step.

  15. Demonstration of a geostatistical approach to physically consistent downscaling of climate modeling simulations

    KAUST Repository

    Jha, Sanjeev Kumar


    A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential to not only bridge issues of spatial resolution in regional and global climate model simulations but also in feature sharpening in remote sensing applications through image fusion, filling gaps in spatial data, evaluating downscaled variables with available remote sensing images, and aggregating/disaggregating hydrological and groundwater variables for catchment studies.
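The core MPS idea, sampling fine-scale patterns from a training image conditioned on coarse-scale data events, can be sketched with a toy direct-sampling routine. This is an illustrative simplification under assumed 3x3 data events, not the authors' MPS implementation.

```python
import numpy as np

def downscale_direct_sampling(coarse, train_coarse, train_fine, factor):
    """Toy multiple-point-style downscaling: for each coarse cell, find the
    training-image coarse cell whose 3x3 neighbourhood (the 'data event')
    is closest in the L2 sense, then paste the co-located fine-scale block
    from the fine training image."""
    pad = lambda a: np.pad(a, 1, mode="edge")
    pc, ptc = pad(coarse), pad(train_coarse)

    def patch(a, i, j):                       # 3x3 pattern around cell (i, j)
        return a[i:i + 3, j:j + 3].ravel()

    ti, tj = train_coarse.shape
    patterns = np.array([patch(ptc, i, j) for i in range(ti) for j in range(tj)])

    fine = np.zeros((coarse.shape[0] * factor, coarse.shape[1] * factor))
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            k = int(np.argmin(np.sum((patterns - patch(pc, i, j)) ** 2, axis=1)))
            ki, kj = divmod(k, tj)
            fine[i*factor:(i+1)*factor, j*factor:(j+1)*factor] = \
                train_fine[ki*factor:(ki+1)*factor, kj*factor:(kj+1)*factor]
    return fine

# Sanity check: a 20x20 "fine" training field and its 4x4 block-mean coarse
# version; downscaling the training coarse field itself recovers the fine
# field, since every data event matches itself at distance zero.
rng = np.random.default_rng(1)
train_fine = rng.random((20, 20))
train_coarse = train_fine.reshape(4, 5, 4, 5).mean(axis=(1, 3))
out = downscale_direct_sampling(train_coarse, train_coarse, train_fine, factor=5)
```

Running the sampler repeatedly with randomized tie-breaking or candidate lists (as real MPS codes do) would yield the multiple realizations used for the uncertainty estimates mentioned above.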

  16. Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization (United States)

    Xiang, Xingcan; Mutlu, Rahim; Alici, Gursel; Li, Weihua


    Conducting polymer actuators have shown significant potential in articulating micro instruments, manipulation devices, and robotics. However, implementing a feedback control strategy to enhance their positioning ability and accuracy in any application requires a feedback sensor, which is extremely large compared to the actuators themselves. Therefore, this paper proposes a new sensorless control scheme without the use of a position feedback sensor. With the help of the system identification technique and particle swarm optimization, the control scheme, which we call the simulated feedback control system, showed a satisfactory command tracking performance for the conducting polymer actuator’s step and dynamic displacement responses, especially under a disturbance, using a simulated feedback loop instead of a physical one. The primary contribution of this study is to propose and experimentally evaluate the simulated feedback control scheme for a class of conducting polymer actuators known as tri-layer polymer actuators, which can operate both in dry and wet media. This control approach can also be extended to other smart actuators or systems for which feedback control based on external sensing is impractical.

  17. Modeling and simulation of bus assembling process using DES/ABS approach

    Directory of Open Access Journals (Sweden)



    This paper presents the results of a project whose goal is to analyze production process capability after reengineering the assembly process due to the expansion of a bus production plant. Verification of the designed work organization for the new configuration of workstations in the new production hall is necessary. To solve these problems, the authors propose a method that mixes the DES (Discrete Event Simulation) and ABS (Agent Based Simulation) approaches: DES is used to model the main process, the material flow of buses, while ABS is used to model the assembling operations of teams of workers. The first goal is to build a simulation model of the new assembly line in the factory, taking into account the arrangement of workstations and work teams in the new production hall as well as the transport between workstations. The second goal is to present the work organization of the work teams and the division of labor among individual workers (who belong to a particular work team and perform operations on buses at a particular workstation) in order to determine the best allocation of tasks and the optimum size of individual work teams. The proposed solution makes it possible to determine the effect of assembly interferences on the work of particular work teams and on the efficiency of the whole production system, to define the efficiency of the designed assembly lines, and to propose changes aimed at improving the quality of the created conception.
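The material-flow side of such a model can be sketched with a minimal discrete-event-style simulation of buses moving through serial workstations. This is a toy illustration assuming one work team per station and unlimited buffers, not the authors' DES/ABS model.

```python
def simulate_line(n_buses, station_times, arrival_gap):
    """Buses enter the line every `arrival_gap` time units and pass through
    serial workstations; each station is staffed by one work team and serves
    one bus at a time (FIFO). Returns the completion time of each bus."""
    free_at = [0.0] * len(station_times)   # when each station next frees up
    completion = []
    for b in range(n_buses):
        t = b * arrival_gap                # bus arrival at the line
        for s, duration in enumerate(station_times):
            start = max(t, free_at[s])     # wait until the team is free
            t = start + duration
            free_at[s] = t
        completion.append(t)
    return completion

# Three buses, two stations (2 h and 3 h of work), one bus released per hour:
times = simulate_line(3, [2.0, 3.0], 1.0)   # -> [5.0, 8.0, 11.0]
```

The 3-hour station is the bottleneck: completion times grow by 3 h per bus despite the 1-hour release rate, which is exactly the kind of interference effect the full DES/ABS model quantifies with realistic team behaviour.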

  18. A simulation approach for analysis of short-term security of natural gas supply in Colombia

    International Nuclear Information System (INIS)

    Villada, Juan; Olaya, Yris


    Achieving security of gas supply implies diversifying gas sources, while having enough supply, transportation, and storage capacity to meet demand peaks and supply interruptions. Devising a strategy for securing gas supply is not straightforward because gas supply depends on complex interactions of production, demand and infrastructure, and it is exposed to economic, regulatory, political, environmental and technical risks. To address this complexity, we propose a simulation approach that replicates the structure of the gas supply chain, including transportation constraints and demand fluctuations. We build and calibrate a computer model for the Colombian gas sector, and run the model to assess the impact of expanding transportation capacity and increasing market flexibility on the security of supply. Our analysis focuses on the operation and on planned and proposed expansions of the transportation infrastructure, because adequate regulation and development of this infrastructure can contribute to increased security of supply in the gas sector. We find that proposed import facilities, specifically LNG import terminals at Buenaventura, increase the system's security under the current market structure. - Highlights: ► We build a simulation model for analyzing natural gas trade in Colombia. ► The model captures the structure of the gas network and market rules. ► We simulate investment decisions to increase short-term security of supply. ► Securing supply would need LNG imports and expansion of pipeline capacity.

  19. A numerical integration approach suitable for simulating PWR dynamics using a microcomputer system

    International Nuclear Information System (INIS)

    Zhiwei, L.; Kerlin, T.W.


    It is attractive to use microcomputer systems to simulate nuclear power plant dynamics for the purpose of teaching and/or control system design. An analysis and comparison of the feasibility of existing numerical integration methods has been made. The criteria for choosing the integration step with various numerical integration methods, including the matrix exponential method, are derived. In order to speed up the simulation, an approach based on Newton recursion calculus is presented, which avoids convergence limitations in choosing the integration step size; accuracy considerations then dominate the integration step limit. The advantages of this method have been demonstrated through a case study using a CBM model 8032 microcomputer to simulate a reduced-order linear PWR model under various perturbations. It has been proven theoretically and practically that the Runge-Kutta and Adams-Moulton methods are not feasible. The matrix exponential method is good in accuracy and fairly good in speed. The Newton recursion method can save 3/4 to 4/5 of the time compared to the matrix exponential method with reasonable accuracy. This method can be expanded to deal with nonlinear nuclear power plant models and higher-order models as well
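For a linear state-space model ẋ = Ax, the matrix exponential method mentioned above advances the state with the transition matrix Φ = exp(Ah), which is exact for any step size h. The following sketch illustrates the general technique on a made-up two-state system, not the paper's PWR model; the matrix exponential is computed by scaling-and-squaring with a Taylor series to stay dependency-free.

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via scaling and squaring with a Taylor series."""
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(M, np.inf))))))
    A = M / (2 ** s)                       # scale so the series converges fast
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms + 1):
        T = T @ A / k                      # next Taylor term A^k / k!
        E = E + T
    for _ in range(s):                     # undo the scaling by squaring
        E = E @ E
    return E

# Toy linear "plant" x' = A x, stepped with the one-time transition matrix
# Phi = exp(A*h): Phi is computed once and each step is a single mat-vec.
A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])
h = 0.1
Phi = expm_taylor(A * h)
x = np.array([1.0, 1.0])
for _ in range(int(1.0 / h)):              # advance to t = 1
    x = Phi @ x
```

Because Φ captures the dynamics exactly, the step size is limited only by how finely the output must be sampled, in contrast to Runge-Kutta or Adams-Moulton steps, whose size is bounded by stability and truncation error.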

  20. On the choice of the demand and hydraulic modeling approach to WDN real-time simulation (United States)

    Creaco, Enrico; Pezzinga, Giuseppe; Savic, Dragan


    This paper aims to analyze two demand modeling approaches, i.e., top-down deterministic (TDA) and bottom-up stochastic (BUA), with particular reference to their impact on the hydraulic modeling of water distribution networks (WDNs). In the applications, the hydraulic modeling is carried out through the extended period simulation (EPS) and unsteady flow modeling (UFM). Taking as benchmark the modeling conditions that are closest to the WDN's real operation (UFM + BUA), the analysis showed that the traditional use of EPS + TDA produces large pressure head and water discharge errors, which can be attenuated only when large temporal steps (up to 1 h in the case study) are used inside EPS. The use of EPS + BUA always yields better results. Indeed, EPS + BUA already gives a good approximation of the WDN's real operation when intermediate temporal steps (larger than 2 min in the case study) are used for the simulation. The trade-off between consistency of results and computational burden makes EPS + BUA the most suitable tool for real-time WDN simulation, while benefitting from data acquired through smart meters for the parameterization of demand generation models.