WorldWideScience

Sample records for models calculated single-channel

  1. SYN3D: a single-channel, spatial flux synthesis code for diffusion theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Adams, C. H.

    1976-07-01

    This report is a user's manual for SYN3D, a computer code which uses single-channel, spatial flux synthesis to calculate approximate solutions to two- and three-dimensional, finite-difference, multigroup neutron diffusion theory equations. SYN3D is designed to run in conjunction with any one of several one- and two-dimensional, finite-difference codes (required to generate the synthesis expansion functions) currently being used in the fast reactor community. The report describes the theory and equations, the use of the code, and the implementation on the IBM 370/195 and CDC 7600 of the version of SYN3D available through the Argonne Code Center.

  2. SYN3D: a single-channel, spatial flux synthesis code for diffusion theory calculations

    International Nuclear Information System (INIS)

    Adams, C.H.

    1976-07-01

    This report is a user's manual for SYN3D, a computer code which uses single-channel, spatial flux synthesis to calculate approximate solutions to two- and three-dimensional, finite-difference, multigroup neutron diffusion theory equations. SYN3D is designed to run in conjunction with any one of several one- and two-dimensional, finite-difference codes (required to generate the synthesis expansion functions) currently being used in the fast reactor community. The report describes the theory and equations, the use of the code, and the implementation on the IBM 370/195 and CDC 7600 of the version of SYN3D available through the Argonne Code Center

  3. [Comparison of Markov and fractal models using single-channel experimental and simulation data].

    Science.gov (United States)

    Lan, Tonghan; Wu, Hongxiu; Lin, Jiarui

    2006-10-01

    The gating kinetics of ion channels have been modeled as a Markov process. These models assume that the channel protein has a small number of discrete conformational states and that the kinetic rate constants connecting these states are constant; that is, the transition rates among the states are independent both of time and of previous channel activity. Liebovitch's fractal model instead assumes that the channel exists in an infinite number of energy states, so that transitions from one conductance state to another are governed by a continuum of rate constants. In this paper, a statistical comparison of Markov and fractal models of ion channel gating is presented; the analysis is based on single-channel data from voltage-dependent K+ channels of neuronal cells and on simulation data from a three-state Markov model.
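
    A minimal sketch of the Markov view described above (the three-state scheme and rate constants are illustrative assumptions, not the paper's fitted model): simulate a discrete-state gating scheme and collect open dwell times, whose distribution is a sum of exponentials under the Markov hypothesis, in contrast to the scale-free behavior postulated by the fractal model.

      import numpy as np

      # Hypothetical three-state scheme C1 <-> C2 <-> O with assumed rate constants (s^-1).
      rng = np.random.default_rng(0)
      rates = {("C1", "C2"): 50.0, ("C2", "C1"): 200.0,
               ("C2", "O"): 300.0, ("O", "C2"): 100.0}

      def simulate(t_max=50.0, state="C1"):
          t, open_dwells = 0.0, []
          while t < t_max:
              out = {s2: k for (s1, s2), k in rates.items() if s1 == state}
              k_tot = sum(out.values())
              dt = rng.exponential(1.0 / k_tot)        # dwell time in the current state
              if state == "O":
                  open_dwells.append(dt)
              # jump to the next state with probability proportional to its rate constant
              state = rng.choice(list(out.keys()), p=np.array(list(out.values())) / k_tot)
              t += dt
          return np.array(open_dwells)

      dwells = simulate()
      print(f"{dwells.size} openings, mean open time {dwells.mean() * 1e3:.2f} ms")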

  4. Single Channel Quantum Color Image Encryption Algorithm Based on HSI Model and Quantum Fourier Transform

    Science.gov (United States)

    Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong

    2018-01-01

    In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a popular color model, the Hue-Saturation-Intensity (HSI) model is commonly used in image processing. A new single channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, where the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationship of pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
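
    The quantum Fourier transform stage is beyond a short classical sketch, but the two preprocessing steps named above can be illustrated; the image size, logistic-map parameters and XOR-based diffusion below are assumptions, not the published scheme.

      import numpy as np

      # Classical sketch (assumed parameters): HSI intensity extraction and
      # logistic-map diffusion; the QFT encryption stage of the paper is omitted.
      def intensity(rgb):                          # HSI intensity = mean of R, G, B
          return rgb.mean(axis=-1)

      def logistic_keystream(n, x0=0.3456, mu=3.99):
          xs, x = np.empty(n), x0
          for i in range(n):
              x = mu * x * (1.0 - x)               # logistic map iteration
              xs[i] = x
          return np.floor(xs * 256).astype(np.uint8)

      rgb = (np.random.default_rng(1).random((64, 64, 3)) * 255).astype(np.uint8)
      I = intensity(rgb).astype(np.uint8)
      key = logistic_keystream(I.size).reshape(I.shape)
      cipher = np.bitwise_xor(I, key)              # diffuse pixels with the keystream
      assert np.array_equal(np.bitwise_xor(cipher, key), I)   # diffusion is reversible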

  5. Application of the single-channel continuous synthesis method to criticality and power distribution calculations in thermal reactors

    International Nuclear Information System (INIS)

    Medrano Asensio, Gregorio.

    1976-06-01

    A detailed power distribution calculation in a large power reactor requires the solution of the multigroup 3D diffusion equations. Using the finite difference method, this computation is too expensive to be performed for design purposes. This work is devoted to the single-channel continuous synthesis method: the choice of the trial functions and the determination of the mixing functions are discussed in detail; 2D and 3D results are presented. The method is applied to the calculation of the IAEA ''Benchmark'' reactor and the results obtained are compared with a finite element solution and with published results.

  6. Three-dimensional single-channel thermal analysis of fully ceramic microencapsulated fuel via two-temperature homogenized model

    International Nuclear Information System (INIS)

    Lee, Yoonhee; Cho, Nam Zin

    2014-01-01

    Highlights: • Two-temperature homogenized model is applied to thermal analysis of fully ceramic microencapsulated (FCM) fuel. • Based on the results of Monte Carlo calculation, homogenized parameters are obtained. • 2-D FEM/1-D FDM hybrid method for the model is used to obtain 3-D temperature profiles. • The model provides the fuel-kernel and SiC matrix temperatures separately. • Compared to UO2 fuel, the FCM fuel shows ∼560 K lower maximum temperatures in steady and transient states. - Abstract: The fully ceramic microencapsulated (FCM) fuel, one of the accident tolerant fuel (ATF) concepts, consists of TRISO particles randomly dispersed in a SiC matrix. This high heterogeneity in composition makes explicit thermal calculation of such a fuel difficult. For thermal analysis of a fuel element of very high temperature reactors (VHTRs), which has a configuration similar to that of FCM fuel, a two-temperature homogenized model was recently proposed by the authors. The model was developed using a particle transport Monte Carlo method for heat conduction problems. It gives more realistic temperature profiles, and provides the fuel-kernel and graphite temperatures separately. In this paper, we apply the two-temperature homogenized model to three-dimensional single-channel thermal analysis of the FCM fuel element for steady and transient states using a 2-D FEM/1-D FDM hybrid method. In the analyses, we assume that the power distribution is uniform in the radial direction at steady state and that in the axial direction it has the form of a cosine function, for simplicity. As transient scenarios, we consider (i) a coolant inlet temperature transient, (ii) an inlet mass flow rate transient, and (iii) a power transient. The results of the analyses are compared to those of conventional UO2 fuel having the same geometric dimensions and operating conditions.
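
    A schematic, lumped-parameter sketch of the two-temperature idea (all coefficients and the power level below are hypothetical, not the paper's homogenized parameters): the fuel-kernel and matrix regions carry separate temperatures coupled by a heat-exchange term, with only the matrix exchanging heat with the coolant.

      dt, t_end = 0.01, 2000.0          # s
      C_k, C_m = 2.0e5, 8.0e5           # lumped heat capacities, J/K (assumed)
      h_km, h_mc = 1.5e4, 3.0e3         # kernel-matrix and matrix-coolant couplings, W/K (assumed)
      q_kernel = 5.0e4                  # heat deposited in the kernels, W (assumed)
      T_cool = 800.0                    # coolant temperature, K
      T_k = T_m = 800.0
      for _ in range(int(t_end / dt)):  # explicit Euler integration of the two coupled balances
          dTk = (q_kernel - h_km * (T_k - T_m)) / C_k
          dTm = (h_km * (T_k - T_m) - h_mc * (T_m - T_cool)) / C_m
          T_k += dt * dTk
          T_m += dt * dTm
      print(f"after {t_end:.0f} s: kernel T = {T_k:.1f} K, matrix T = {T_m:.1f} K")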

  7. New Results on Single-Channel Speech Separation Using Sinusoidal Modeling

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2011-01-01

    We present new results on single-channel speech separation and suggest a new separation approach to improve the speech quality of separated signals from an observed mixture. The key idea is to derive a mixture estimator based on sinusoidal parameters. The proposed estimator is aimed at finding … the proposed method over other methods are confirmed by employing perceptual evaluation of speech quality (PESQ) as an objective measure and a MUSHRA listening test as a subjective evaluation for both speaker-dependent and gender-dependent scenarios.

  8. An accurate mobility model for the I-V characteristics of n-channel enhancement-mode MOSFETs with single-channel boron implantation

    International Nuclear Information System (INIS)

    Chingyuan Wu; Yeongwen Daih

    1985-01-01

    In this paper, an analytical mobility model is developed for the I-V characteristics of n-channel enhancement-mode MOSFETs, in which the effects of the two-dimensional electric fields in the surface inversion channel and the parasitic resistances due to contact and interconnection are included. Most importantly, the developed mobility model easily takes the device structure and process into consideration. In order to demonstrate the capabilities of the developed model, the structure- and process-oriented parameters in the present mobility model are calculated explicitly for an n-channel enhancement-mode MOSFET with single-channel boron implantation. Moreover, n-channel MOSFETs with different channel lengths fabricated in a production line by using a set of test keys have been characterized and the measured mobilities have been compared to the model. Excellent agreement has been obtained for all ranges of the fabricated channel lengths, which strongly supports the accuracy of the model. (author)

  9. Multiphysics Modeling of a Single Channel in a Nuclear Thermal Propulsion Grooved Ring Fuel Element

    Science.gov (United States)

    Kim, Tony; Emrich, William J., Jr.; Barkett, Laura A.; Mathias, Adam D.; Cassibry, Jason T.

    2013-01-01

    In the past, fuel rods have been used in nuclear propulsion applications. A new fuel element concept that reduces weight and increases efficiency uses a stack of grooved discs. Each fuel element is a flat disc with a hole in the interior and grooves across the top. Many grooved ring fuel elements for use in nuclear thermal propulsion systems have been modeled, and a single flow channel for each design has been analyzed. For increased efficiency, a fuel element with a higher surface-area-to-volume ratio is ideal. When grooves are shallower, i.e., they have a lower surface area, the results show that the exit temperature is higher. By coupling the physics of turbulence with that of heat transfer, the effect of the thermally excited solid on the cooler gas flowing through the grooves can be predicted. Parametric studies were done to show how a pressure drop across the axial length of the channels affects the exit temperatures of the gas. Geometric optimization was done to show the behaviors that result from the manipulation of various parameters. Temperature profiles of the solid and gas showed that more structural optimization is needed to produce the desired results. Keywords: Nuclear Thermal Propulsion, Fuel Element, Heat Transfer, Computational Fluid Dynamics, Coupled Physics Computations, Finite Element Analysis

  10. Low Complexity Bayesian Single Channel Source Separation

    DEFF Research Database (Denmark)

    Beierholm, Thomas; Pedersen, Brian Dam; Winther, Ole

    2004-01-01

    We propose a simple Bayesian model for performing single channel speech separation using factorized source priors in a sliding window linearly transformed domain. Using a one dimensional mixture of Gaussians to model each band source leads to fast tractable inference for the source signals. Simul...
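
    A minimal sketch of the kind of inference the abstract describes (the two-component mixture-of-Gaussians priors and the observed value are hypothetical): with x = s1 + s2 and independent MoG priors, the posterior over component pairs is tractable and yields a closed-form MMSE estimate of one source coefficient.

      import numpy as np

      # Hypothetical two-component MoG priors for each source coefficient.
      w1, m1, v1 = np.array([0.7, 0.3]), np.array([0.0, 2.0]), np.array([0.5, 1.0])
      w2, m2, v2 = np.array([0.6, 0.4]), np.array([0.0, -1.5]), np.array([0.8, 0.3])
      x = 1.2                                      # observed mixture coefficient

      post, est = [], []
      for i in range(2):
          for j in range(2):
              var = v1[i] + v2[j]                  # x | (i, j) ~ N(m1 + m2, v1 + v2)
              lik = np.exp(-0.5 * (x - m1[i] - m2[j]) ** 2 / var) / np.sqrt(2 * np.pi * var)
              post.append(w1[i] * w2[j] * lik)     # unnormalized posterior of the component pair
              est.append(m1[i] + v1[i] / var * (x - m1[i] - m2[j]))  # E[s1 | x, i, j]
      post = np.array(post) / np.sum(post)
      print(f"MMSE estimate of source 1: {float(np.dot(post, est)):.3f}")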

  11. Visualization Techniques for Single Channel DPF Systems

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Heather E.; Maupin, Gary D.; Carlson, Shelley J.; Saenz, Natalio T.; Gallant, Thomas R.

    2007-04-01

    New techniques have been developed to visualize soot deposition in both traditional and new diesel particulate filter (DPF) substrate materials using a modified cyanoacrylate fuming technique. Loading experiments have been conducted on a variety of single channel DPF substrates to develop a deeper understanding of soot penetration, soot deposition characteristics, and to confirm modeling results. Early results indicate that stabilizing the soot layer using a vapor adhesive may allow analysis of the layer with new methods.

  12. Single-channel kinetics of BK (Slo1) channels

    Directory of Open Access Journals (Sweden)

    Yanyan Geng

    2015-01-01

    Full Text Available Single-channel kinetics has proven a powerful tool to reveal information about the gating mechanisms that control the opening and closing of ion channels. This introductory review focuses on the gating of large conductance Ca2+- and voltage-activated K+ (BK or Slo1) channels at the single-channel level. It starts with single-channel current records and progresses to presentation and analysis of single-channel data and the development of gating mechanisms in terms of discrete state Markov (DSM) models. The DSM models are formulated in terms of the tetrameric modular structure of BK channels, consisting of a central transmembrane pore-gate domain (PGD) attached to four surrounding transmembrane voltage sensing domains (VSDs) and a large intracellular cytosolic domain (CTD), also referred to as the gating ring. The modular structure and data analysis show that the Ca2+- and voltage-dependent gating considered separately can each be approximated by 10-state two-tiered models with 5 closed states on the upper tier and 5 open states on the lower tier. The modular structure and joint Ca2+ and voltage dependent gating are consistent with a 50-state two-tiered model with 25 closed states on the upper tier and 25 open states on the lower tier. Adding an additional tier of brief closed (flicker) states to the 10-state or 50-state models improved the description of the gating. For fixed experimental conditions a channel would gate in only a subset of the potential number of states. The detected number of states and the correlations between adjacent interval durations are consistent with the tiered models. The examined models can account for the single-channel kinetics and the bursting behavior of gating. Ca2+ and voltage activate BK channels predominantly by increasing the effective opening rate of the channel with a smaller decrease in the effective closing rate. Ca2+ and depolarization thus activate mainly by destabilizing the closed states.
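
    A sketch of the two-tiered structure described above (the rate expressions are hypothetical placeholders, not fitted BK parameters): five closed states on the upper tier and five open states on the lower tier, assembled into a generator matrix whose stationary distribution gives an open probability.

      import numpy as np

      n = 5                                   # states per tier
      k_act, k_deact = 200.0, 400.0           # per-sensor activation/deactivation, s^-1 (assumed)
      k_open, k_close = 50.0, 500.0           # opening/closing, s^-1 (assumed)
      Q = np.zeros((2 * n, 2 * n))            # states 0..4 closed (upper tier), 5..9 open (lower tier)
      for i in range(n):
          c, o = i, n + i
          if i < n - 1:                       # horizontal: number of activated voltage sensors
              Q[c, c + 1] = Q[o, o + 1] = (n - 1 - i) * k_act
              Q[c + 1, c] = Q[o + 1, o] = (i + 1) * k_deact
          Q[c, o] = k_open * (i + 1)          # vertical: opening favored by activation (assumed form)
          Q[o, c] = k_close / (i + 1)
      np.fill_diagonal(Q, -Q.sum(axis=1))     # rows of a generator matrix sum to zero
      w, v = np.linalg.eig(Q.T)               # stationary distribution = null vector of Q^T
      p = np.real(v[:, np.argmin(np.abs(w))])
      p /= p.sum()
      print(f"equilibrium open probability ~ {p[n:].sum():.3f}")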

  13. Sinusoidal masks for single channel speech separation

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2010-01-01

    In this paper we present a new approach for binary and soft masks used in single-channel speech separation. We present a novel approach called the sinusoidal mask (binary mask and Wiener filter) in a sinusoidal space. Theoretical analysis is presented for the proposed method, and we show that the proposed method is able to minimize the target speech distortion while suppressing the crosstalk to a predetermined threshold. It is observed that compared to the STFT-based masks, the proposed sinusoidal masks improve the separation performance in terms of objective measures (SSNR and PESQ) and are mostly...

  14. Regularity of beating of small clusters of embryonic chick ventricular heart-cells: experiment vs. stochastic single-channel population model.

    Science.gov (United States)

    Krogh-Madsen, Trine; Kold Taylor, Louise; Skriver, Anne D; Schaffer, Peter; Guevara, Michael R

    2017-09-01

    The transmembrane potential is recorded from small isopotential clusters of 2-4 embryonic chick ventricular cells spontaneously generating action potentials. We analyze the cycle-to-cycle fluctuations in the time between successive action potentials (the interbeat interval or IBI). We also convert an existing model of electrical activity in the cluster, which is formulated as a Hodgkin-Huxley-like deterministic system of nonlinear ordinary differential equations describing five individual ionic currents, into a stochastic model consisting of a population of ∼20 000 independently and randomly gating ionic channels, with the randomness being set by a real physical stochastic process (radio static). This stochastic model, implemented using the Clay-DeFelice algorithm, reproduces the fluctuations seen experimentally: e.g., the coefficient of variation (standard deviation/mean) of IBI is 4.3% in the model vs. the 3.9% average value of the 17 clusters studied. The model also replicates all but one of several other quantitative measures of the experimental results, including the power spectrum and correlation integral of the voltage, as well as the histogram, Poincaré plot, serial correlation coefficients, power spectrum, detrended fluctuation analysis, approximate entropy, and sample entropy of IBI. The channel noise from one particular ionic current (I_Ks), which has channel kinetics that are relatively slow compared to that of the other currents, makes the major contribution to the fluctuations in IBI. Reproduction of the experimental coefficient of variation of IBI by adding a Gaussian white noise-current into the deterministic model necessitates using an unrealistically high noise-current amplitude. Indeed, a major implication of the modelling results is that, given the wide range of time-scales over which the various species of channels open and close, only a cell-specific stochastic model that is formulated taking into consideration the widely different…
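
    A small illustration of the fluctuation measure quoted above (the synthetic interbeat intervals are assumptions, not the recorded data): the coefficient of variation is the standard deviation of the interbeat intervals divided by their mean.

      import numpy as np

      rng = np.random.default_rng(7)
      ibi = rng.normal(loc=0.50, scale=0.02, size=500)   # synthetic interbeat intervals, s
      cv = ibi.std(ddof=1) / ibi.mean()
      print(f"coefficient of variation of IBI: {100 * cv:.1f}%")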

  15. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    Science.gov (United States)

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte-Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte-Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as the standard deviations and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Also, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte-Carlo techniques, the uncertainty of dose estimates for single-channel and multichannel algorithms is estimated. The application of the model together with Monte-Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
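
    A hedged sketch of the Monte-Carlo propagation idea (the calibration function, pixel values and noise level are hypothetical, not the study's data): perturb the scanner readings with their noise, push each sample through a calibration curve, and characterize the resulting dose distribution.

      import numpy as np

      rng = np.random.default_rng(42)

      def dose_from_netod(netod, a=10.0, b=35.0, n=2.5):   # hypothetical calibration curve, Gy
          return a * netod + b * netod ** n

      pv_exposed, pv_blank, pv_noise = 28000.0, 42000.0, 150.0   # assumed 16-bit readings
      samples = 20000
      pv_e = rng.normal(pv_exposed, pv_noise, samples)
      pv_b = rng.normal(pv_blank, pv_noise, samples)
      netod = np.log10(pv_b / pv_e)                        # net optical density per sample
      dose = dose_from_netod(netod)
      print(f"dose = {dose.mean():.2f} Gy, k=1 uncertainty = {dose.std(ddof=1):.2f} Gy")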

  16. Ventricular beat detection in single channel electrocardiograms.

    Science.gov (United States)

    Dotsinsky, Ivan A; Stoyanov, Todor V

    2004-01-29

    Detection of QRS complexes and other types of ventricular beats is a basic component of ECG analysis. Many algorithms have been proposed and used because of the diversity of the waves' shapes. Detection in a single channel ECG is important for several applications, such as in defibrillators and specialized monitors. The developed heuristic algorithm for ventricular beat detection includes two main criteria. The first of them is based on the evaluation of steep edges and sharp peaks and classifies normal QRS complexes in real time. The second criterion identifies ectopic beats by the occurrence of a biphasic wave. It is modified to work with a delay of one RR interval in the case of long RR intervals. Other algorithm branches classify already detected QRS complexes as ectopic beats if a set of wave parameters is encountered or the ratio of the latest two RR intervals, RRi-1/RRi, is less than 1:2.5. The algorithm was tested with the AHA and MIT-BIH databases. A sensitivity of 99.04% and a specificity of 99.62% were obtained in the detection of 542,014 beats. The algorithm copes successfully with different complicated cases of single channel ventricular beat detection. It aims to simulate, to some extent, the experience of the cardiologist, rather than to rely on mathematical approaches adopted from the theory of signal analysis. The algorithm is open to improvement, especially in the part concerning the discrimination between normal QRS complexes and ectopic beats.
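
    An illustration of the RR-interval ratio rule quoted above (the example intervals are synthetic, and attributing the label to a particular beat is simplified here): a beat is flagged as ectopic when the ratio of the two most recent RR intervals drops below 1/2.5.

      def rr_ratio_flags(rr_intervals_s):
          # flag positions where RR(i-1)/RR(i) < 1/2.5, per the rule in the abstract
          return [prev / curr < 1.0 / 2.5
                  for prev, curr in zip(rr_intervals_s, rr_intervals_s[1:])]

      rr = [0.80, 0.80, 0.30, 1.20]        # seconds: a premature beat followed by a pause
      print(rr_ratio_flags(rr))            # -> [False, False, True]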

  17. Achieving single channel, full duplex wireless communication

    KAUST Repository

    Choi, Jung Il

    2010-01-01

    This paper discusses the design of a single channel full-duplex wireless transceiver. The design uses a combination of RF and baseband techniques to achieve full-duplexing with minimal effect on link reliability. Experiments on real nodes show the full-duplex prototype achieves median performance that is within 8% of an ideal full-duplexing system. This paper presents Antenna Cancellation, a novel technique for self-interference cancellation. In conjunction with existing RF interference cancellation and digital baseband interference cancellation, antenna cancellation achieves the amount of self-interference cancellation required for full-duplex operation. The paper also discusses potential MAC and network gains with full-duplexing. It suggests ways in which a full-duplex system can solve some important problems with existing wireless systems including hidden terminals, loss of throughput due to congestion, and large end-to-end delays. Copyright 2010 ACM.

  18. ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL

    Science.gov (United States)

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

  19. Single channel analysis of membrane proteins in artificial bilayer membranes.

    Science.gov (United States)

    Bartsch, Philipp; Harsman, Anke; Wagner, Richard

    2013-01-01

    The planar lipid bilayer technique is a powerful experimental approach for electrical single channel recordings of pore-forming membrane proteins in a chemically well-defined and easily modifiable environment. Here we provide a general survey of the basic materials and procedures required to set up a robust bilayer system and perform electrophysiological single channel recordings of reconstituted proteins suitable for the in-depth characterization of their functional properties.

  20. Automatic detection and classification of artifacts in single-channel EEG

    DEFF Research Database (Denmark)

    Olund, Thomas; Duun-Henriksen, Jonas; Kjaer, Troels W.

    2014-01-01

    … artifact classes using the selected features. Single-channel (Fp1-F7) EEG recordings are obtained from experiments with 12 healthy subjects performing artifact-inducing movements. The dataset was used to construct and validate the model. Both subject-specific and generic implementations are investigated...

  1. Mimicking multichannel scattering with single-channel approaches

    Science.gov (United States)

    Grishkevich, Sergey; Schneider, Philipp-Immanuel; Vanne, Yulian V.; Saenz, Alejandro

    2010-02-01

    The collision of two atoms is an intrinsic multichannel (MC) problem, as becomes especially obvious in the presence of Feshbach resonances. Due to its complexity, however, single-channel (SC) approximations, which reproduce the long-range behavior of the open channel, are often applied in calculations. In this work the complete MC problem is solved numerically for the magnetic Feshbach resonances (MFRs) in collisions between generic ultracold 6Li and 87Rb atoms in the ground state and in the presence of a static magnetic field B. The obtained MC solutions are used to test various existing as well as presently developed SC approaches. It was found that many aspects even at short internuclear distances are qualitatively well reflected. This can be used to investigate molecular processes in the presence of an external trap or in many-body systems that can be feasibly treated only within the framework of the SC approximation. The applicability of various SC approximations is tested for a transition to the absolute vibrational ground state around an MFR. The conformance of the SC approaches is explained by the two-channel approximation for the MFR.

  2. Mimicking multichannel scattering with single-channel approaches

    International Nuclear Information System (INIS)

    Grishkevich, Sergey; Schneider, Philipp-Immanuel; Vanne, Yulian V.; Saenz, Alejandro

    2010-01-01

    The collision of two atoms is an intrinsic multichannel (MC) problem, as becomes especially obvious in the presence of Feshbach resonances. Due to its complexity, however, single-channel (SC) approximations, which reproduce the long-range behavior of the open channel, are often applied in calculations. In this work the complete MC problem is solved numerically for the magnetic Feshbach resonances (MFRs) in collisions between generic ultracold 6Li and 87Rb atoms in the ground state and in the presence of a static magnetic field B. The obtained MC solutions are used to test various existing as well as presently developed SC approaches. It was found that many aspects even at short internuclear distances are qualitatively well reflected. This can be used to investigate molecular processes in the presence of an external trap or in many-body systems that can be feasibly treated only within the framework of the SC approximation. The applicability of various SC approximations is tested for a transition to the absolute vibrational ground state around an MFR. The conformance of the SC approaches is explained by the two-channel approximation for the MFR.

  3. Emotion classification using single-channel scalp-EEG recording.

    Science.gov (United States)

    Jalilifard, Amir; Brigante Pizzolato, Ednaldo; Kafiul Islam, Md

    2016-08-01

    Several studies have found evidence for corticolimbic Theta electroencephalographic (EEG) oscillations in the neural processing of visual stimuli perceived as fearful or threatening scenes. Recent studies have shown that neural oscillation patterns in the Theta, Alpha, Beta and Gamma sub-bands play a major role in the brain's emotional processing. The main goal of this study is to classify two different emotional states by means of EEG data recorded through a single-electrode EEG headset. Nineteen young subjects participated in an EEG experiment while watching a video clip that evoked three emotional states: neutral, relaxation and scary. Following each video clip, participants were asked to report on their subjective affect by giving a score between 0 and 10. First, the recorded EEG data were preprocessed by stationary wavelet transform (SWT) based denoising to remove artifacts. Afterward, the distribution of power in time-frequency space was obtained using the short-time Fourier transform (STFT) and then the mean value of energy was calculated for each EEG sub-band. Finally, 46 features, defined as the mean energy of frequency bands between 4 and 50 Hz and containing 689 instances for each subject, were collected in order to classify the emotional states. Our experimental results show that EEG dynamics induced by horror and relaxing movies can be classified with an average classification rate of 92% using a support vector machine (SVM) classifier. We also compared the performance of SVM to K-nearest neighbors (K-NN). The results show that K-NN achieves a better classification rate of 94%. The findings of this work are expected to pave the way to a new horizon in neuroscience by proving the point that single-channel EEG data alone carry enough information for emotion classification.
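
    A sketch of the feature-extraction step described above (sampling rate, window length and the exact binning that yields 46 features are assumptions): mean spectral energy per 1 Hz bin between 4 and 50 Hz from an STFT of single-channel EEG; the SWT denoising and SVM/K-NN classification stages are omitted.

      import numpy as np
      from scipy.signal import stft

      fs = 256                                        # sampling rate, Hz (assumed)
      eeg = np.random.default_rng(3).standard_normal(fs * 10)   # 10 s synthetic single-channel EEG
      f, t, Z = stft(eeg, fs=fs, nperseg=fs)          # 1 s windows give 1 Hz resolution
      band = (f >= 4) & (f < 50)                      # 46 one-hertz bins
      features = np.mean(np.abs(Z[band]) ** 2, axis=1)   # mean energy per bin over time
      print(f"{features.size} features")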

  4. Monitoring single-channel water permeability in polarized cells.

    Science.gov (United States)

    Erokhova, Liudmila; Horner, Andreas; Kügler, Philipp; Pohl, Peter

    2011-11-18

    So far the determination of unitary permeability (p(f)) of water channels that are expressed in polarized cells is subject to large errors because the opening of a single water channel does not noticeably increase the water permeability of a membrane patch above the background. That is, in contrast to the patch clamp technique, where the single ion channel conductance may be derived from a single experiment, two experiments separated in time and/or space are required to obtain the single-channel water permeability p(f) as a function of the incremental water permeability (P(f,c)) and the number (n) of water channels that contributed to P(f,c). Although the unitary conductance of ion channels is measured in the native environment of the channel, p(f) is so far derived from reconstituted channels or channels expressed in oocytes. To determine the p(f) of channels from live epithelial monolayers, we exploit the fact that osmotic volume flow alters the concentration of aqueous reporter dyes adjacent to the epithelia. We measure these changes by fluorescence correlation spectroscopy, which allows the calculation of both P(f,c) and osmolyte dilution within the unstirred layer. Shifting the focus of the laser from the aqueous solution to the apical and basolateral membranes allowed the FCS-based determination of n. Here we validate the new technique by determining the p(f) of aquaporin 5 in Madin-Darby canine kidney cell monolayers. Because inhibition and subsequent activity rescue are monitored on the same sample, drug effects on exocytosis or endocytosis can be dissected from those on p(f).
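
    A worked example of the relation stated above (every number is hypothetical): the unitary permeability p(f) follows from the incremental water permeability P(f,c), the membrane area it refers to, and the channel count n obtained from the FCS measurements.

      P_fc = 0.005        # incremental osmotic water permeability, cm/s (assumed)
      area = 1.0e-6       # apical membrane area considered, cm^2 (assumed)
      n_channels = 1.0e5  # number of water channels in that area (assumed)
      p_f = P_fc * area / n_channels
      print(f"unitary permeability p_f = {p_f:.1e} cm^3/s per channel")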

  5. Calcium signals driven by single channel noise.

    Directory of Open Access Journals (Sweden)

    Alexander Skupin

    Full Text Available Usually, the occurrence of random cell behavior is attributed to small copy numbers of the molecules involved in the stochastic process. Recently, we demonstrated for a variety of cell types that intracellular Ca2+ oscillations are sequences of random spikes despite the involvement of many molecules in spike generation. This randomness arises from the stochastic state transitions of individual Ca2+ release channels and does not average out due to the existence of steep concentration gradients. The system is hierarchical due to the structural levels channel, channel cluster and cell, and a corresponding strength of coupling. Concentration gradients introduce microdomains which couple channels of a cluster strongly. But they couple clusters only weakly; too weak to establish deterministic behavior at the cell level. Here, we present a multi-scale modelling concept for stochastic hierarchical systems. It simulates active molecules individually as Markov chains and their coupling by deterministic diffusion. Thus, we are able to follow the consequences of random single molecule state changes up to the signal at the cell level. To demonstrate the potential of the method, we simulate a variety of experiments. Comparisons of simulated and experimental data of spontaneous oscillations in astrocytes emphasize the role of spatial concentration gradients in Ca2+ signalling. Analysis of extensive simulations indicates that frequency encoding, described by the relation between the average and standard deviation of interspike intervals, is surprisingly robust. This robustness is a property of the random spiking mechanism and not a result of control.

  6. Calcium signals driven by single channel noise.

    Science.gov (United States)

    Skupin, Alexander; Kettenmann, Helmut; Falcke, Martin

    2010-08-05

    Usually, the occurrence of random cell behavior is attributed to small copy numbers of the molecules involved in the stochastic process. Recently, we demonstrated for a variety of cell types that intracellular Ca2+ oscillations are sequences of random spikes despite the involvement of many molecules in spike generation. This randomness arises from the stochastic state transitions of individual Ca2+ release channels and does not average out due to the existence of steep concentration gradients. The system is hierarchical due to the structural levels channel, channel cluster and cell, and a corresponding strength of coupling. Concentration gradients introduce microdomains which couple channels of a cluster strongly. But they couple clusters only weakly; too weak to establish deterministic behavior at the cell level. Here, we present a multi-scale modelling concept for stochastic hierarchical systems. It simulates active molecules individually as Markov chains and their coupling by deterministic diffusion. Thus, we are able to follow the consequences of random single molecule state changes up to the signal at the cell level. To demonstrate the potential of the method, we simulate a variety of experiments. Comparisons of simulated and experimental data of spontaneous oscillations in astrocytes emphasize the role of spatial concentration gradients in Ca2+ signalling. Analysis of extensive simulations indicates that frequency encoding, described by the relation between the average and standard deviation of interspike intervals, is surprisingly robust. This robustness is a property of the random spiking mechanism and not a result of control.

  7. Single-Channel Blind Estimation of Reverberation Parameters

    DEFF Research Database (Denmark)

    Doire, C.S.J.; Brookes, M. D.; Naylor, P. A.

    2015-01-01

    The reverberation of an acoustic channel can be characterised by two frequency-dependent parameters: the reverberation time and the direct-to-reverberant energy ratio. This paper presents an algorithm for blindly determining these parameters from a single-channel speech signal. The algorithm uses...

  8. Single channel in-line multimodal digital holography.

    Science.gov (United States)

    Rivenson, Yair; Katz, Barak; Kelner, Roy; Rosen, Joseph

    2013-11-15

    We present a new single channel in-line setup for holographic recording that can properly record various objects that cannot be recorded by the Gabor holographic method. This configuration allows the recording of holograms based on several modalities while addressing important issues of the original Gabor setup, including the well-known twin-image problem and the weak scattering condition.

  9. Joint Single-Channel Speech Separation and Speaker Identification

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Saeidi, Rahim; Tan, Zheng-Hua

    2010-01-01

    In this paper, we propose a closed loop system to improve the performance of single-channel speech separation in a speaker independent scenario. The system is composed of two interconnected blocks: a separation block and a speaker identification block. The improvement is accomplished by incorporating the speaker identities found by the speaker identification block as additional information for the separation block, which converts the speaker-independent separation problem to a speaker-dependent one where the speaker codebooks are known. Simulation results show that the closed loop system...

  10. Single-channel blind separation using pseudo-stereo mixture and complex 2-D histogram.

    Science.gov (United States)

    Tengtrairat, N; Gao, Bin; Woo, W L; Dlay, S S

    2013-11-01

    A novel single-channel blind source separation (SCBSS) algorithm is presented. The proposed algorithm yields at least three benefits for the SCBSS solution: 1) it resembles a stereo signal concept given by one microphone; 2) it is independent of initialization and of a priori knowledge of the sources; and 3) it does not require iterative optimization. The separation process consists of two steps: 1) estimation of source characteristics, where the source signals are modeled by an autoregressive process, and 2) construction of masks using only the single-channel mixture. A new pseudo-stereo mixture is formulated by weighting and time-shifting the original single-channel mixture. This creates an artificial mixing system whose parameters will be estimated through our proposed weighted complex 2-D histogram. In this paper, we derive the separability of the proposed mixture model. Conditions required for unique mask construction based on maximum likelihood are also identified. Finally, experimental testing on both synthetic and real-audio sources is conducted to verify that the proposed algorithm yields superior performance and is computationally very fast compared with existing methods.
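
    A sketch of the pseudo-stereo construction described above (the weight and the shift are assumed values): a second "channel" is manufactured by scaling and time-shifting the single observed mixture; the 2-D histogram estimation stage is not shown here.

      import numpy as np

      rng = np.random.default_rng(5)
      mixture = rng.standard_normal(16000)        # the observed single-channel mixture
      alpha, delay = 0.6, 4                       # assumed weight and time shift (samples)
      pseudo = alpha * np.roll(mixture, delay)    # weighted, delayed copy
      pseudo[:delay] = 0.0                        # remove wrapped-around samples
      stereo = np.stack([mixture, pseudo])        # pseudo-stereo pair for the estimator
      print(stereo.shape)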

  11. Evaluation of an automated single-channel sleep staging algorithm

    OpenAIRE

    Kaplan, Richard; Wang, Ying; Loparo, Kenneth; Kelly, Monica

    2015-01-01

    Background: We previously published the performance evaluation of an automated electroencephalography (EEG)-based single-channel sleep–wake detection algorithm called Z-ALG used by the Zmachine® sleep monitoring system…

  12. The incidence of the different sources of noise on the uncertainty in radiochromic film dosimetry using single channel and multichannel methods

    Science.gov (United States)

    González-López, Antonio; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen

    2017-11-01

    The influence of the various sources of noise on the uncertainty in radiochromic film (RCF) dosimetry using single channel and multichannel methods is investigated in this work. These sources of noise are extracted from pixel value (PV) readings and dose maps. Pieces of an RCF were each irradiated to different uniform doses, ranging from 0 to 1092 cGy. Then, the pieces were read at two resolutions (72 and 150 ppi) with two flatbed scanners: Epson 10000XL and Epson V800, representing two generations of the technology. Noise was extracted as described in ISO 15739 (2013), separating its distinct constituents: random noise and fixed pattern (FP) noise. Regarding the PV maps, FP noise is the main source of noise for both models of digitizer. Also, the standard deviation of the random noise in the 10000XL model is almost twice that of the V800 model. In the dose maps, the FP noise is smaller in the multichannel method than in the single channel ones. However, random noise is higher in this method, throughout the dose range. In the multichannel method, FP noise is reduced, as a consequence of this method's ability to eliminate channel-independent perturbations. However, the random noise increases, because the dose is calculated as a linear combination of the doses obtained by the single channel methods. The values of the coefficients of this linear combination are obtained in the present study, and the root of the sum of their squares is shown to range between 0.9 and 1.9 over the dose range studied. These results indicate that random noise plays a fundamental role in the uncertainty of RCF dosimetry: low levels of random noise are required in the digitizer to fully exploit the advantages of the multichannel dosimetry method. This is particularly important for measuring high doses at high spatial resolutions.

  13. A Timing Single Channel Analyzer with pileup rejection

    International Nuclear Information System (INIS)

    Lauch, J.; Nachbar, H.U.

    1981-07-01

    A Timing Single Channel Analyzer is described, as normally used in nuclear physics applications for measuring certain ranges of energy spectra. The unit accepts unipolar or bipolar gaussian-shaped or rectangular pulses and includes a special pileup rejection circuit. Because of its good timing performance, high-resolution timing and coincidence measurements are possible. The differential analyzer, trigger and timing modes and the function of external strobe and gate signals are explained. Parts of the circuit are illustrated with the help of block diagrams and pulse schematics. An essential part of the unit is the pileup rejection circuit. Following theoretical considerations, the circuit is described and some measurement results are reported.

  14. Effectiveness of diaphragmatic stimulation with single-channel electrodes in rabbits

    Directory of Open Access Journals (Sweden)

    Rodrigo Guellner Ghedini

    2013-06-01

    Full Text Available Every year, a large number of individuals become dependent on mechanical ventilation because of a loss of diaphragm function. The most common causes are cervical spinal trauma and neuromuscular diseases. We have developed an experimental model to evaluate the performance of electrical stimulation of the diaphragm in rabbits using single-channel electrodes implanted directly into the muscle. Various current intensities (10, 16, 20, and 26 mA) produced tidal volumes above the baseline value, showing that this model is effective for the study of diaphragm performance at different levels of electrical stimulation.

  15. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Full Text Available Source number estimation methods for single channel signals have been investigated and improvements for each method are suggested in this work. Firstly, the single channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR. However, it has no ability to handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise. Nevertheless, its performance at low SNR is not satisfactory. In order to solve these problems and contradictions, this work makes improvements to both methods in light of the above considerations. A diagonal loading technique is employed to improve the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is greatly improved.
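
    A sketch of the two steps described above (synthetic data, assumed embedding dimension; the diagonal-loading and jackknife refinements are not shown): the single-channel signal is delay-embedded into a pseudo multi-channel form, and the classical MDL criterion is applied to the eigenvalues of its covariance matrix. Note that a real sinusoid occupies two eigenvalues after embedding, so the criterion counts complex-exponential components.

      import numpy as np

      rng = np.random.default_rng(11)
      fs, n = 1000, 4000
      t = np.arange(n) / fs
      x = np.sin(2 * np.pi * 50 * t) + 0.7 * np.sin(2 * np.pi * 120 * t)   # two sinusoidal sources
      x += 0.3 * rng.standard_normal(n)

      m = 8                                               # embedding dimension (assumed)
      X = np.stack([x[i:n - m + i] for i in range(m)])    # delay-embedded "channels"
      lam = np.sort(np.linalg.eigvalsh(X @ X.T / X.shape[1]))[::-1]
      N = X.shape[1]

      def mdl(k):                                         # Wax-Kailath MDL criterion
          tail = lam[k:]
          g = np.exp(np.mean(np.log(tail)))               # geometric mean of noise eigenvalues
          a = np.mean(tail)                               # arithmetic mean
          return -N * (m - k) * np.log(g / a) + 0.5 * k * (2 * m - k) * np.log(N)

      print("estimated component count:", min(range(m), key=mdl))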

  16. Single-channel stereoscopic ophthalmology microscope based on TRD

    Science.gov (United States)

    Radfar, Edalat; Park, Jihoon; Lee, Sangyeob; Ha, Myungjin; Yu, Sungkon; Jang, Seulki; Jung, Byungjo

    2016-03-01

    A stereoscopic imaging modality was developed for application in ophthalmology surgical microscopes. A previous study introduced a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (SSVIM-TRD), in which two different view angles (image disparity) are generated by imaging through a transparent rotating deflector (TRD) mounted on a stepping motor and placed in a lens system. In this case, the image disparity is a function of the refractive index and the rotation angle of the TRD. The real-time single-channel stereoscopic ophthalmology microscope (SSOM) based on the TRD improves on this with respect to real-time control and programming, imaging speed, and illumination method. Image quality assessments were performed to investigate image quality and stability during TRD operation. The results showed little significant difference in image quality in terms of the stability of the structural similarity (SSIM) index. A subjective analysis was performed with 15 blinded observers to evaluate the improvement in depth perception and showed a significant improvement in depth perception capability. Together with the evaluation results, preliminary results of rabbit eye imaging indicate that the SSOM could be used as an ophthalmic operating microscope to overcome some of the limitations of conventional instruments.

  17. Carbon cycle modeling calculations for the IPCC

    International Nuclear Information System (INIS)

    Wuebbles, D.J.; Jain, A.K.

    1993-01-01

    We carried out essentially all the carbon cycle modeling calculations that were required by the IPCC Working Group 1. Specifically, IPCC required two types of calculations, namely, ''inverse calculations'' (input was CO2 concentrations and the output was CO2 emissions), and the ''forward calculations'' (input was CO2 emissions and output was CO2 concentrations). In particular, we have derived carbon dioxide concentrations and/or emissions for several scenarios using our coupled climate-carbon cycle modelling system.

  18. A New MEMS Gyroscope Used for Single-Channel Damping.

    Science.gov (United States)

    Zhang, Zengping; Zhang, Wei; Zhang, Fuxue; Wang, Biao

    2015-04-30

    The silicon micromechanical gyroscope, which will be introduced in this paper, represents a novel MEMS gyroscope concept. It is used for the damping of a single-channel control system of rotating aircraft. It differs from common MEMS gyroscopes in that it has no drive structure of its own, only a sense structure. It is installed on a rotating aircraft and utilizes the aircraft spin to give its sensing element angular momentum. When the aircraft is subjected to an angular rotation, a periodic Coriolis force is induced in the direction orthogonal to both the angular momentum and the angular velocity input axis. This novel MEMS gyroscope can thus sense angular velocity inputs. The output sensing signal is exactly an amplitude-modulated signal. Its envelope is proportional to the input angular velocity, and the carrier frequency corresponds to the spin frequency of the rotating aircraft, so the MEMS gyroscope can not only sense the transverse angular rotation of an aircraft but also automatically change the carrier frequency with changes in the spin frequency, making it very suitable for the damping of a single-channel control system of a rotating aircraft. In this paper, the motion equation of the MEMS gyroscope has been derived. Then, an analysis has been carried out to solve the motion equation and the dynamic parameters. Finally, an experimental validation has been done based on a precision three-axis rate table. The correlation coefficients between the tested data and the theoretical values are 0.9969, 0.9872 and 0.9842, respectively. These results demonstrate that both the design and the sensing mechanism are correct.
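
    An illustration of the sensing principle stated above (sample rate, spin frequency, scale factor and input rate are all assumed): the sensor output is an amplitude-modulated signal whose carrier sits at the spin frequency and whose envelope tracks the transverse angular velocity; a simple synchronous demodulation recovers the input.

      import numpy as np

      fs = 10_000                                # sample rate, Hz (assumed)
      t = np.arange(0, 1.0, 1 / fs)
      f_spin = 20.0                              # aircraft spin frequency, Hz (assumed)
      k_sense = 0.05                             # scale factor, V per deg/s (assumed)
      omega_in = 30.0 * np.sin(2 * np.pi * 1.0 * t)                 # transverse rate, deg/s
      output = k_sense * omega_in * np.cos(2 * np.pi * f_spin * t)  # AM sensor signal
      # synchronous demodulation: multiply by the carrier, then average over one spin period
      demod = 2 * output * np.cos(2 * np.pi * f_spin * t)
      win = int(fs / f_spin)
      recovered = np.convolve(demod, np.ones(win) / win, mode="same") / k_sense
      print(f"peak input {omega_in.max():.1f} deg/s, peak recovered {recovered.max():.1f} deg/s")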

  19. Acute single channel EEG predictors of cognitive function after stroke.

    Directory of Open Access Journals (Sweden)

    Anna Aminov

    Full Text Available Early and accurate identification of factors that predict post-stroke cognitive outcome is important to set realistic targets for rehabilitation and to guide patients and their families accordingly. However, behavioral measures of cognition are difficult to obtain in the acute phase of recovery due to clinical factors (e.g. fatigue) and functional barriers (e.g. language deficits). The aim of the current study was to test whether single channel wireless EEG data obtained acutely following stroke could predict longer-term cognitive function. Resting state Relative Power (RP) of delta, theta, alpha, beta, the delta/alpha ratio (DAR), and the delta/theta ratio (DTR) were obtained from a single electrode over FP1 in 24 participants within 72 hours of a first-ever stroke. The Montreal Cognitive Assessment (MoCA) was administered at 90 days post-stroke. Correlation and regression analyses were completed to identify relationships between 90-day cognitive function and electrophysiological data, neurological status, and demographic characteristics at admission. Four acute qEEG indices demonstrated moderate to high correlations with 90-day MoCA scores: DTR (r = -0.57, p = 0.01), RP theta (r = 0.50, p = 0.01), RP delta (r = -0.47, p = 0.02), and DAR (r = -0.45, p = 0.03). Acute DTR (b = -0.36, p < 0.05) and stroke severity on admission (b = -0.63, p < 0.01) were the best linear combination of predictors of MoCA scores 90 days post-stroke, accounting for 75% of the variance. Data generated by a single pre-frontal electrode support the prognostic value of acute DAR, and identify DTR as a potential marker of post-stroke cognitive outcome. Use of single channel recording in an acute clinical setting may provide an efficient and valid predictor of cognitive function after stroke.
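
    A sketch of how the indices named above are computed (the band edges and the synthetic signal are conventional assumptions, not the study's recordings): relative power per band from a single-channel spectrum, plus the delta/alpha (DAR) and delta/theta (DTR) ratios.

      import numpy as np
      from scipy.signal import welch

      fs = 256
      eeg = np.random.default_rng(9).standard_normal(fs * 60)   # 60 s synthetic FP1 EEG
      f, pxx = welch(eeg, fs=fs, nperseg=4 * fs)

      def band_power(lo, hi):
          sel = (f >= lo) & (f < hi)
          return np.trapz(pxx[sel], f[sel])

      bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
      power = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
      total = sum(power.values())
      rp = {name: round(p / total, 3) for name, p in power.items()}   # relative power
      print(rp, f"DAR={power['delta'] / power['alpha']:.2f}",
            f"DTR={power['delta'] / power['theta']:.2f}")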

  20. Improved single-channel speech separation using sinusoidal modeling

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2010-01-01

    … and Wiener filter (softmask) approaches, the proposed approach works independently of pitch estimates. Furthermore, it is observed that it can achieve acceptable perceptual speech quality with less cross-talk at different signal-to-signal ratios while bringing down the complexity by replacing STFT...

  1. Evaluation of an automated single-channel sleep staging algorithm

    Science.gov (United States)

    Wang, Ying; Loparo, Kenneth A; Kelly, Monica R; Kaplan, Richard F

    2015-01-01

    Background We previously published the performance evaluation of an automated electroencephalography (EEG)-based single-channel sleep–wake detection algorithm called Z-ALG used by the Zmachine® sleep monitoring system. The objective of this paper is to evaluate the performance of a new algorithm called Z-PLUS, which further differentiates sleep as detected by Z-ALG into Light Sleep, Deep Sleep, and Rapid Eye Movement (REM) Sleep, against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods Single night, in-lab PSG recordings from 99 subjects (52F/47M, 18–60 years, median age 32.7 years), including both normal sleepers and those reporting a variety of sleep complaints consistent with chronic insomnia, sleep apnea, and restless leg syndrome, as well as those taking selective serotonin reuptake inhibitor/serotonin–norepinephrine reuptake inhibitor antidepressant medications, previously evaluated using Z-ALG were re-examined using Z-PLUS. EEG data collected from electrodes placed at the differential-mastoids (A1–A2) were processed by Z-ALG to determine wake and sleep, then those epochs detected as sleep were further processed by Z-PLUS to differentiate into Light Sleep, Deep Sleep, and REM. EEG data were visually scored by multiple certified polysomnographic technologists according to the Rechtschaffen and Kales criterion, and then combined using a majority-voting rule to create a PSG Consensus score file for each of the 99 subjects. Z-PLUS output was compared to the PSG Consensus score files for both epoch-by-epoch (eg, sensitivity, specificity, and kappa) and sleep stage-related statistics (eg, Latency to Deep Sleep, Latency to REM, Total Deep Sleep, and Total REM). Results Sensitivities of Z-PLUS compared to the PSG Consensus were 0.84 for Light Sleep, 0.74 for Deep Sleep, and 0.72 for REM. Similarly, positive predictive values were 0.85 for Light Sleep, 0.78 for Deep Sleep, and 0.73 for REM. Overall, kappa agreement of 0

  2. Precipitates/Salts Model Sensitivity Calculation

    International Nuclear Information System (INIS)

    Mariner, P.

    2001-01-01

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  3. A revised calculational model for fission

    International Nuclear Information System (INIS)

    Atchison, F.

    1998-09-01

    A semi-empirical parametrization has been developed to calculate the fission contribution to evaporative de-excitation of nuclei with a very wide range of charge, mass and excitation-energy and also the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: Nuclei from Ta to Cf, interactions involving nucleons up to medium energy and light ions. (author)

  4. On nonlinearly-induced noise in single-channel optical links with digital backpropagation.

    Science.gov (United States)

    Beygi, Lotfollah; Irukulapati, Naga V; Agrell, Erik; Johannisson, Pontus; Karlsson, Magnus; Wymeersch, Henk; Serena, Paolo; Bononi, Alberto

    2013-11-04

    In this paper, we investigate the performance limits of electronic chromatic dispersion compensation (EDC) and digital backpropagation (DBP) for a single-channel non-dispersion-managed fiber-optical link. A known analytical method to derive the performance of the system with EDC is extended to derive a first-order approximation for the performance of the system with DBP. In contrast to the cubic growth of the variance of the nonlinear noise-like interference, often called nonlinear noise, with input power for EDC, a quadratic growth is observed with DBP using this approximation. Finally, we provide numerical results to verify the accuracy of the proposed approach and compare it with existing analytical models.

  5. A Joint Approach for Single-Channel Speaker Identification and Speech Separation

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Saeidi, Rahim; Christensen, Mads Græsbøll

    2012-01-01

    In this paper, we present a novel system for joint speaker identification and speech separation. For speaker identification, a single-channel speaker identification algorithm is proposed which provides an estimate of signal-to-signal ratio (SSR) as a by-product. For speech separation, we propose a sinusoidal model-based algorithm. The speech separation algorithm consists of a double-talk/single-talk detector followed by a minimum mean square error estimator of sinusoidal parameters for finding optimal codevectors from pre-trained speaker codebooks. In evaluating the proposed system, we start from a situation where we have prior information of codebook indices, speaker identities and SSR-level, and then, by relaxing these assumptions one by one, we demonstrate the efficiency of the proposed fully blind system. In contrast to previous studies that mostly focus on automatic speech recognition (ASR...

  6. Single-channel in-ear-EEG detects the focus of auditory attention to concurrent tone streams and mixed speech

    Science.gov (United States)

    Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas

    2017-06-01

    Objective. Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening (‘cocktail party’) scenarios. This implies that EEG might provide valuable information to complement hearing aids with some form of EEG and to install a level of neuro-feedback. Approach. To investigate whether a listener’s attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal (‘in-Ear-EEG’) and additionally from 64 electrodes on the scalp. In two different, concurrent listening tasks, participants (n  =  7) were fitted with individualized in-Ear-EEG pieces and were either asked to attend to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Main results. Each individual participants’ attentional focus could be detected from single-channel EEG response recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. Significance. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener’s focus of attention.

  7. Distributed sensing: multiple capacitive stretch sensors on a single channel

    Science.gov (United States)

    Tairych, Andreas; Anderson, Iain A.

    2017-04-01

    "Soft, stretchable, and unobtrusive". These are some of the attributes frequently associated with capacitive dielectric elastomer (DE) sensors for body motion capture. While the sensors themselves are soft and elastic, they require rigid peripheral components for capacitance measurement. Each sensor is connected to a separate channel on the sensing circuitry through its own set of wires. In wearable applications with large numbers of sensors, this can lead to a considerable circuit board footprint, and cumbersome wiring. The additional equipment can obstruct movement and alter user behaviour. Previous work has demonstrated how a transmission line model can be applied to localise deformation on a single DE sensor. Building on this approach, we have developed a distributed sensing method by arranging capacitive DE sensors and external resistors to form a transmission line, which is connected to a single sensing channel with only one set of wires. The sensors are made from conductive fabric electrodes, and silicone dielectrics, and the external resistors are off-the-shelf metal film resistors. Excitation voltages with different frequencies are applied to the transmission line. The lumped transmission line capacitances at these frequencies are passed on to a mathematical model that calculates individual sensor capacitance changes. The prototype developed for this study is capable of obtaining separate readings for simultaneously stretched sensors.

  8. Model calculations in correlated finite nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Guardiola, R.; Ros, J. (Granada Univ. (Spain). Dept. de Fisica Nuclear); Polls, A. (Tuebingen Univ. (Germany, F.R.). Inst. fuer Theoretische Physik)

    1980-10-21

    In order to study the convergence condition of the FAHT cluster expansion several model calculations are described and numerically tested. It is concluded that this cluster expansion deals properly with the central part of the two-body distribution function, but presents some difficulties for the exchange part.

  9. EARTHWORK VOLUME CALCULATION FROM DIGITAL TERRAIN MODELS

    Directory of Open Access Journals (Sweden)

    JANIĆ Milorad

    2015-06-01

    Full Text Available Accurate calculation of cut and fill volume has an essential importance in many fields. This article presents a new method, based on Digital Terrain Models, that involves no approximation. A relatively new mathematical model is developed for that purpose and implemented in the software solution. Both have been tested and verified in practice on several large opencast mines. This application is developed in the AutoLISP programming language and works in the AutoCAD environment.
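
    The paper describes an exact, approximation-free method; as a simpler point of reference, a basic grid-based cut-and-fill estimate from two elevation models can be sketched as follows (the cell size and elevations are made-up values).

```python
import numpy as np

def cut_fill_volumes(z_before, z_after, cell_area):
    """Grid-based cut/fill estimate from two elevation rasters.

    z_before, z_after : 2-D arrays of elevations on the same grid [m]
    cell_area         : area of one grid cell [m^2]
    Returns (cut, fill) volumes in cubic metres.
    """
    dz = z_after - z_before
    cut = -dz[dz < 0].sum() * cell_area    # material removed
    fill = dz[dz > 0].sum() * cell_area    # material added
    return cut, fill

# Toy example on a 3 x 3 grid with 10 m x 10 m cells.
before = np.array([[10.0, 10.5, 11.0],
                   [10.2, 10.8, 11.2],
                   [10.4, 11.0, 11.5]])
after = before + np.array([[-0.5, -0.2, 0.0],
                           [-0.3,  0.0, 0.1],
                           [ 0.0,  0.2, 0.4]])
cut, fill = cut_fill_volumes(before, after, cell_area=100.0)
print(f"cut = {cut:.1f} m^3, fill = {fill:.1f} m^3")
```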

  10. ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL

    Science.gov (United States)

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

  11. Classification of four-class motor imagery employing single-channel electroencephalography.

    Directory of Open Access Journals (Sweden)

    Sheng Ge

    Full Text Available With advances in brain-computer interface (BCI) research, a portable few- or single-channel BCI system has become necessary. Most recent BCI studies have demonstrated that the common spatial pattern (CSP) algorithm is a powerful tool in extracting features for multiple-class motor imagery. However, since the CSP algorithm requires multi-channel information, it is not suitable for a few- or single-channel system. In this study, we applied a short-time Fourier transform to decompose a single-channel electroencephalography signal into the time-frequency domain and construct multi-channel information. Using the reconstructed data, the CSP was combined with a support vector machine to obtain high classification accuracies from channels of both the sensorimotor and forehead areas. These results suggest that motor imagery can be detected with a single channel not only from the traditional sensorimotor area but also from the forehead area.
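
    A rough sketch of the processing chain described above — STFT-based construction of surrogate channels from a single-channel signal, CSP feature extraction, and SVM classification — is given below on synthetic two-class data; the window length, the number of CSP filters and the toy signal model are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import stft
from scipy.linalg import eigh
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, n_trials, n_samp = 128, 60, 512
labels = rng.integers(0, 2, n_trials)

def make_trial(label):
    # Synthetic single-channel trial; class 1 carries extra 10 Hz power.
    t = np.arange(n_samp) / fs
    x = rng.standard_normal(n_samp)
    return x + (1.5 * np.sin(2 * np.pi * 10 * t) if label else 0.0)

def pseudo_channels(x):
    # STFT magnitudes: each frequency bin becomes a surrogate "channel".
    _, _, Z = stft(x, fs=fs, nperseg=64)
    return np.abs(Z)                                # (n_bins, n_frames)

trials = np.stack([pseudo_channels(make_trial(l)) for l in labels])
n_bins = trials.shape[1]

def mean_cov(group):
    covs = [t @ t.T / np.trace(t @ t.T) for t in group]
    return np.mean(covs, axis=0) + 1e-6 * np.eye(n_bins)

C0, C1 = mean_cov(trials[labels == 0]), mean_cov(trials[labels == 1])

# CSP: generalized eigendecomposition of (C0, C0 + C1); keep extreme filters.
_, V = eigh(C0, C0 + C1)
W = np.concatenate([V[:, :2], V[:, -2:]], axis=1).T   # 4 spatial filters

def features(trial):
    # Log-variance of the CSP-projected surrogate channels.
    var = (W @ trial).var(axis=1)
    return np.log(var / var.sum())

F = np.array([features(t) for t in trials])
acc = cross_val_score(SVC(kernel="linear"), F, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```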

  12. Sleep Apnoea Detection in Single Channel ECGs by Analyzing Heart Rate Dynamics

    National Research Council Canada - National Science Library

    Zywietz, C

    2001-01-01

    .... Sleep disorders are typically investigated by means of polysomnographic recordings. We have analyzed 70 eight-hour single-channel ECG recordings to find out to which extent sleep apneas may be detected from the ECG alone...

  13. An approach to emotion recognition in single-channel EEG signals: a mother child interaction

    Science.gov (United States)

    Gómez, A.; Quintero, L.; López, N.; Castro, J.

    2016-04-01

    In this work, we perform a first approach to emotion recognition from single-channel EEG signals recorded in four (4) mother-child dyads in a developmental psychology experiment. The single-channel EEG signals are analyzed and processed using several window sizes by performing a statistical analysis over features in the time and frequency domains. Finally, a neural network achieved an average classification accuracy of 99% for two emotional states, happiness and sadness.

  14. Matrix model calculations beyond the spherical limit

    International Nuclear Information System (INIS)

    Ambjoern, J.; Chekhov, L.; Kristjansen, C.F.; Makeenko, Yu.

    1993-01-01

    We propose an improved iterative scheme for calculating higher genus contributions to the multi-loop (or multi-point) correlators and the partition function of the hermitian one matrix model. We present explicit results up to genus two. We develop a version which gives directly the result in the double scaling limit and present explicit results up to genus four. Using the latter version we prove that the hermitian and the complex matrix model are equivalent in the double scaling limit and that in this limit they are both equivalent to the Kontsevich model. We discuss how our results away from the double scaling limit are related to the structure of moduli space. (orig.)

  15. Rip currents and alongshore flows in single channels dredged in the surf zone

    Science.gov (United States)

    Moulton, Melissa; Elgar, Steve; Raubenheimer, Britt; Warner, John C.; Kumar, Nirnimesh

    2017-01-01

    To investigate the dynamics of flows near nonuniform bathymetry, single channels (on average 30 m wide and 1.5 m deep) were dredged across the surf zone at five different times, and the subsequent evolution of currents and morphology was observed for a range of wave and tidal conditions. In addition, circulation was simulated with the numerical modeling system COAWST, initialized with the observed incident waves and channel bathymetry, and with an extended set of wave conditions and channel geometries. The simulated flows are consistent with alongshore flows and rip-current circulation patterns observed in the surf zone. Near the offshore-directed flows that develop in the channel, the dominant terms in modeled momentum balances are wave-breaking accelerations, pressure gradients, advection, and the vortex force. The balances vary spatially, and are sensitive to wave conditions and the channel geometry. The observed and modeled maximum offshore-directed flow speeds are correlated with a parameter based on the alongshore gradient in breaking-wave-driven-setup across the nonuniform bathymetry (a function of wave height and angle, water depths in the channel and on the sandbar, and a breaking threshold) and the breaking-wave-driven alongshore flow speed. The offshore-directed flow speed increases with dissipation on the bar and reaches a maximum (when the surf zone is saturated) set by the vertical scale of the bathymetric variability.

  16. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

    Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within the logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in case of complex and heterogeneous logistics service structures. So this paper intends to explore the ways of improving the cost calculation regimes of logistics service providers and show how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested by using estimated input data. Based on the theoretical findings and the experiences of the pilot project it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly

  17. Shell model calculations for exotic nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Brown, B.A. (Michigan State Univ., East Lansing, MI (USA)); Warburton, E.K. (Brookhaven National Lab., Upton, NY (USA)); Wildenthal, B.H. (New Mexico Univ., Albuquerque, NM (USA). Dept. of Physics and Astronomy)

    1990-02-01

    In this paper we review the progress of the shell-model approach to understanding the properties of light exotic nuclei (A < 40). By ''shell model'' we mean the consistent and large-scale application of the classic methods discussed, for example, in the book of de-Shalit and Talmi. Modern calculations incorporate as many of the important configurations as possible and make use of realistic effective interactions for the valence nucleons. Properties such as the nuclear densities depend on the mean-field potential, which is usually treated separately from the valence interaction. We will discuss results for radii which are based on a standard Hartree-Fock approach with Skyrme-type interactions.

  18. Effective hamiltonian calculations using incomplete model spaces

    International Nuclear Information System (INIS)

    Koch, S.; Mukherjee, D.

    1987-01-01

    It appears that the danger of encountering ''intruder states'' is substantially reduced if an effective hamiltonian formalism is developed for incomplete model spaces (IMS). In a Fock-space approach, the proof of a ''connected diagram theorem'' is fairly straightforward with exponential-type ansätze for the wave-operator W, provided the normalization chosen for W is separable. Operationally, one just needs a suitable categorization of the Fock-space operators into ''diagonal'' and ''non-diagonal'' parts that is a generalization of the corresponding procedure for the complete model space. The formalism is applied to prototypical 2-electron systems. The calculations have been performed on the Cyber 205 super-computer. The authors paid special attention to an efficient vectorization for the construction and solution of the resulting coupled non-linear equations.

  19. Additive Manufacturing Thermal Performance Testing of Single Channel GRCop-84 SLM Components

    Science.gov (United States)

    Garcia, Chance P.; Cross, Matthew

    2014-01-01

    The surface finish found on components manufactured by sinter laser manufacturing (SLM) is rougher (0.0006 - 0.013 inches) than that of parts made using traditional fabrication methods. Internal features and passages built into SLM components do not readily allow for roughness reduction processes. Alternatively, engineering literature suggests that the roughness of a surface can enhance thermal performance within a pressure drop regime. To further investigate the thermal performance of SLM fabricated pieces, several GRCop-84 SLM single channel components were tested using a thermal conduction rig at MSFC. A 20 kW power source running at 25% duty cycle and 25% power level applied heat to each component while varying water flow rates between 2.1 - 6.2 gallons/min (GPM) at a supply pressure of 550 to 700 psi. Each test was allowed to reach quasi-steady state conditions where pressure, temperature, and thermal imaging data were recorded. Presented in this work are the heat transfer responses compared to a traditional machined OFHC copper test section. An analytical thermal model was constructed to anchor theoretical models with the empirical data.

  20. An Improved Single-Channel Method to Retrieve Land Surface Temperature from the Landsat-8 Thermal Band

    Directory of Open Access Journals (Sweden)

    Jordi Cristóbal

    2018-03-01

    Full Text Available Land surface temperature (LST) is one of the sources of input data for modeling land surface processes. The Landsat satellite series is the only operational mission with more than 30 years of archived thermal infrared imagery from which we can retrieve LST. Unfortunately, stray light artifacts were observed in Landsat-8 TIRS data, mostly affecting Band 11, currently making the split-window technique impractical for retrieving surface temperature without requiring atmospheric data. In this study, a single-channel methodology to retrieve surface temperature from Landsat TM and ETM+ was improved to retrieve LST from Landsat-8 TIRS Band 10 using near-surface air temperature (Ta) and integrated atmospheric column water vapor (w) as input data. This improved methodology was parameterized and successfully evaluated with simulated data from a global and robust radiosonde database and validated with in situ data from four flux tower sites under different types of vegetation and snow cover in 44 Landsat-8 scenes. Evaluation results using simulated data showed that the inclusion of Ta together with w within a single-channel scheme improves LST retrieval, yielding lower errors and less bias than models based only on w. The new proposed LST retrieval model, developed with both w and Ta, yielded overall errors on the order of 1 K and a bias of −0.5 K validated against in situ data, providing a better performance than other models parameterized using w and Ta or only w models that yielded higher error and bias.
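
    The general single-channel scheme underlying such retrievals inverts the band-integrated radiative transfer equation and then applies the inverse Planck function; a sketch is given below. The atmospheric transmittance and up/downwelling radiances, which the paper parameterises from water vapour (w) and air temperature (Ta), are passed in directly here as placeholder values, and the Band 10 calibration constants should in practice be taken from the scene metadata.

```python
import numpy as np

# Landsat-8 TIRS Band 10 thermal constants (normally read from the scene
# metadata / MTL file); the values below are the commonly published ones.
K1, K2 = 774.8853, 1321.0789

def lst_single_channel(L_sen, emissivity, tau, L_up, L_down):
    """Generic single-channel LST inversion of the radiative transfer equation.

    L_sen      : at-sensor spectral radiance in Band 10 [W m-2 sr-1 um-1]
    emissivity : land surface emissivity in the band
    tau, L_up, L_down : atmospheric transmittance and up/downwelling radiances,
        which in the paper are parameterised from w and Ta; here they are
        supplied directly as placeholders.
    """
    # Surface-leaving blackbody-equivalent radiance from the RTE.
    L_surf = (L_sen - L_up - tau * (1.0 - emissivity) * L_down) / (tau * emissivity)
    # Inverse Planck function using the band calibration constants.
    return K2 / np.log(K1 / L_surf + 1.0)

# Example with hypothetical atmospheric terms for a moist scene (~302 K).
print(lst_single_channel(L_sen=9.5, emissivity=0.98, tau=0.85,
                         L_up=1.2, L_down=2.0))
```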

  1. Linear program differentiation for single-channel speech separation

    DEFF Research Database (Denmark)

    Pearlmutter, Barak A.; Olsson, Rasmus Kongsgaard

    2006-01-01

    Many apparently difficult problems can be solved by reduction to linear programming. Such problems are often subproblems within larger systems. When gradient optimisation of the entire larger system is desired, it is necessary to propagate gradients through the internally-invoked LP solver. For instance, when an intermediate quantity z is the solution to a linear program involving constraint matrix A, a vector of sensitivities dE/dz will induce sensitivities dE/dA. Here we show how these can be efficiently calculated, when they exist. This allows algorithmic differentiation to be applied to algorithms that invoke linear programming solvers as subroutines, as is common when using sparse representations in signal processing. Here we apply it to gradient optimisation of overcomplete dictionaries for maximally sparse representations of a speech corpus. The dictionaries are employed in a single...

  2. Minimum Mean-Square Error Single-Channel Signal Estimation

    DEFF Research Database (Denmark)

    Beierholm, Thomas

    2008-01-01

    Hearing-impaired persons in some noisy situations need a higher signal-to-noise ratio for speech to be intelligible than normal-hearing persons. In this thesis, two different methods to approach the MMSE signal estimation problem are examined. The methods differ in the way that models for the signal and noise are expressed and in the way the estimator is approximated. The starting point of the first method is prior probability density functions for both signal and noise, and it is assumed that their Laplace transforms (moment generating functions) are available. The corresponding posterior mean integral that defines... In the second method, inference is performed by particle filtering. The speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths. The noise is assumed non-stationary and white. Compared to the case of using the AR coefficients directly, it is found very beneficial to perform...

  3. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

    In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe particularities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The models commonly used for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with regard to their use in problems characterized by a strong upscattering effect. Conclusions derived from this analysis are presented, as well as a new method to perform upscattering scaling that solves the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider applicability. (Author) [es]

  4. Characteristics of a single-channel superconducting flux flow transistor fabricated by an AFM modification technique

    International Nuclear Information System (INIS)

    Ko, Seokcheol; Kim, Seong-Jong

    2007-01-01

    The demand for high performance, integrity, and miniaturization in the area of electronic and mechanical devices has drawn interest to the fabrication of nanostructures. However, it is difficult to fabricate a nano-scale channel using conventional photolithography techniques. The AFM anodization technique is a maskless process and an effective method to overcome the difficulty of fabricating a nano-scale channel. In this paper, we present a new fabrication method for a single-channel SFFT using a selective oxidation process induced by an AFM probe. The modified channel was investigated by electron probe microanalysis (EPMA) to find the compositional variation of the transformed region. In order to confirm the operation of the single-channel SFFT, we measured the voltage-current characteristics at the temperature of liquid nitrogen with an I-V automatic measurement system. Our results indicate that a single-channel SFFT acting as a weak link can be effectively fabricated by an AFM lithography process.

  5. Three-Dimensional Imaging by Self-Reference Single-Channel Digital Incoherent Holography.

    Science.gov (United States)

    Rosen, Joseph; Kelner, Roy

    2016-08-01

    Digital holography offers a reliable and fast method to image a three-dimensional scene from a single perspective. This article reviews recent developments of self-reference single-channel incoherent hologram recorders. Hologram recorders in which both interfering beams, commonly referred to as the signal and the reference beams, originate from the same observed objects are considered as self-reference systems. Moreover, the hologram recorders reviewed herein are configured in a setup of a single channel interferometer. This unique configuration is achieved through the use of one or more spatial light modulators.

  6. Three-Dimensional Imaging by Self-Reference Single-Channel Digital Incoherent Holography

    Science.gov (United States)

    Rosen, Joseph; Kelner, Roy

    2016-01-01

    Digital holography offers a reliable and fast method to image a three-dimensional scene from a single perspective. This article reviews recent developments of self-reference single-channel incoherent hologram recorders. Hologram recorders in which both interfering beams, commonly referred to as the signal and the reference beams, originate from the same observed objects are considered as self-reference systems. Moreover, the hologram recorders reviewed herein are configured in a setup of a single channel interferometer. This unique configuration is achieved through the use of one or more spatial light modulators. PMID:28757811

  7. Sensitivity of Satellite-Based Skin Temperature to Different Surface Emissivity and NWP Reanalysis Sources Demonstrated Using a Single-Channel, Viewing-Angle-Corrected Retrieval Algorithm

    Science.gov (United States)

    Scarino, B. R.; Minnis, P.; Yost, C. R.; Chee, T.; Palikonda, R.

    2015-12-01

    Single-channel algorithms for satellite thermal-infrared- (TIR-) derived land and sea surface skin temperature (LST and SST) are advantageous in that they can be easily applied to a variety of satellite sensors. They can also accommodate decade-spanning instrument series, particularly for periods when split-window capabilities are not available. However, the benefit of one unified retrieval methodology for all sensors comes at the cost of critical sensitivity to surface emissivity (ɛs) and atmospheric transmittance estimation. It has been demonstrated that as little as 0.01 variance in ɛs can amount to more than a 0.5-K adjustment in retrieved LST values. Atmospheric transmittance requires calculations that employ vertical profiles of temperature and humidity from numerical weather prediction (NWP) models. Selection of a given NWP model can significantly affect LST and SST agreement relative to their respective validation sources. Thus, it is necessary to understand the accuracies of the retrievals for various NWP models to ensure the best LST/SST retrievals. The sensitivities of the single-channel retrievals to surface emittance and NWP profiles are investigated using NASA Langley historic land and ocean clear-sky skin temperature (Ts) values derived from high-resolution 11-μm TIR brightness temperature measured from geostationary satellites (GEOSat) and Advanced Very High Resolution Radiometers (AVHRR). It is shown that mean GEOSat-derived, anisotropy-corrected LST can vary by up to ±0.8 K depending on whether CERES or MODIS ɛs sources are used. Furthermore, the use of either NOAA Global Forecast System (GFS) or NASA Goddard Modern-Era Retrospective Analysis for Research and Applications (MERRA) for the radiative transfer model initial atmospheric state can account for more than 0.5-K variation in mean Ts. The results are compared to measurements from the Surface Radiation Budget Network (SURFRAD), an Atmospheric Radiation Measurement (ARM) Program ground

  8. Response matrix method and its application to SCWR single channel stability analysis

    International Nuclear Information System (INIS)

    Zhao, Jiyun; Tseng, K.J.; Tso, C.P.

    2011-01-01

    To simulate the reactor system dynamic features during density wave oscillations (DWO), both non-linear and linear methods can be used. Although some transient information is lost through model linearization, the high computational efficiency and relatively accurate results make the linear analysis methodology attractive, especially for prediction of the onset of instability. In the linear stability analysis, the system models are simplified through linearization of the complex non-linear differential equations, and the linear differential equations are then generally solved in the frequency domain through Laplace transformation. In this paper, a system response matrix method is introduced that solves the differential equations directly in the time domain. By using the system response matrix method, the complicated transfer function derivation, which must be done in the frequency domain method, can be avoided. Using the response matrix method, a model was developed and applied to single channel and parallel channel instability analyses of the typical proposed SCWR design. The sensitivity of the decay ratio (DR) to the axial mesh size was analyzed, and it was found that the DR is not sensitive to mesh size once a sufficient number of axial nodes is applied. To demonstrate the effect of inlet orificing on stability under supercritical conditions, a sensitivity study of stability to the inlet orifice coefficient was conducted for the hot channel. It is clearly shown that a higher inlet orifice coefficient makes the system more stable. The susceptibility of stability to operating parameters such as mass flow rate, power and system pressure was also examined, and measures to improve the sensitivity of SCWR stability to operating parameters were investigated. It was found that the stability characteristics of the SCWR can be improved by carefully managing the inlet orifices and choosing proper operating parameters. (author)
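
    The decay ratio mentioned above is commonly estimated from the ratio of successive oscillation peaks of a perturbed response; a minimal sketch on a synthetic damped oscillation follows (the signal and time constants are illustrative, not SCWR results).

```python
import numpy as np
from scipy.signal import find_peaks

def decay_ratio(signal, steady_state=None):
    """Decay ratio of an oscillatory response: ratio of the second to the
    first peak amplitude measured about the steady-state value. DR < 1 means
    the oscillation is damped (stable); DR > 1 means it grows (unstable)."""
    if steady_state is None:
        steady_state = signal[-len(signal) // 4:].mean()
    dev = signal - steady_state
    peaks, _ = find_peaks(dev)
    if len(peaks) < 2:
        raise ValueError("need at least two oscillation peaks")
    return dev[peaks[1]] / dev[peaks[0]]

# Toy flow perturbation: damped density-wave-like oscillation.
t = np.linspace(0.0, 30.0, 3000)
flow = 1.0 + 0.1 * np.exp(-0.15 * t) * np.cos(2 * np.pi * 0.3 * t)
print(f"decay ratio ~ {decay_ratio(flow):.2f}")   # exp(-0.15 / 0.3) ~ 0.61
```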

  9. A Component Model for Cable System Calculations

    NARCIS (Netherlands)

    Nijs, J.M.M. de; Boschma, J.J.

    2012-01-01

    Unfortunately, no method yet exists for system calculations to support cable engineers with the technical challenge of increasing digital loads when confronted with ever-increasing capacity demands from commercial departments. This article introduces a reliable method of cable system calculations.

  10. Optimization of pumping schemes for 160-Gb/s single channel Raman amplified systems

    DEFF Research Database (Denmark)

    Xu, Lin; Rottwitt, Karsten; Peucheret, Christophe

    2004-01-01

    Three different distributed Raman amplification schemes-backward pumping, bidirectional pumping, and second-order pumping-are evaluated numerically for 160-Gb/s single-channel transmission. The same longest transmission distance of 2500 km is achieved for all three pumping methods with a 105-km...

  11. High-speed indoor optical wireless communication system with single channel imaging receiver.

    Science.gov (United States)

    Wang, Ke; Nirmalathas, Ampalavanapillai; Lim, Christina; Skafidas, Efstratios

    2012-04-09

    In this paper we experimentally investigate a gigabit indoor optical wireless communication system with single channel imaging receiver. It is shown that the use of single channel imaging receiver rejects most of the background light. This single channel imaging receiver is composed of an imaging lens and a small photo-sensitive area photodiode attached on a 2-axis actuator. The actuator and photodiode are placed on the focal plane of the lens to search for the focused light spot. The actuator is voice-coil based and it is low cost and commercially available. With this single channel imaging receiver, bit rate as high as 12.5 Gbps has been successfully demonstrated and the maximum error-free (BER20% has been achieved. When this system is integrated with our recently proposed optical wireless based indoor localization system, both high speed wireless communication and mobility can be provided to users over the entire room. Furthermore, theoretical analysis has been carried out and the simulation results agree well with the experiments. In addition, since the rough location information of the user is available in our proposed system, instead of searching for the focused light spot over a large area on the focal plane of the lens, only a small possible area needs to be scanned. By further pre-setting a proper comparison threshold when searching for the focused light spot, the time needed for searching can be further reduced.

  12. Single-channel source separation using non-negative matrix factorization

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard

    A number of methods for single-channel source separation based on non-negative matrix factorization are presented. In the papers, the methods are applied to separating audio signals such as speech and musical instruments, and to separating different types of tissue in chemical shift imaging.
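
    A generic NMF-based separation sketch in this spirit — training a spectral dictionary per source, decomposing the mixture spectrogram on the fixed, concatenated dictionaries, and reconstructing with Wiener-like masks — is shown below; the multiplicative-update solver, dictionary sizes and toy signals are illustrative assumptions rather than the specific methods of the thesis.

```python
import numpy as np
from scipy.signal import stft, istft

eps = 1e-9

def nmf(V, rank, iters=200, W=None, rng=np.random.default_rng(0)):
    """Euclidean NMF by multiplicative updates; if W is given it stays fixed."""
    fixed_W = W is not None
    W = rng.random((V.shape[0], rank)) if W is None else W
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        if not fixed_W:
            W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

fs = 8000
t = np.arange(fs * 2) / fs
src1 = np.sign(np.sin(2 * np.pi * 220 * t))          # stand-in for source 1
src2 = np.sin(2 * np.pi * 440 * t * (1 + 0.3 * t))   # stand-in for source 2

def mag(x):
    return np.abs(stft(x, fs=fs, nperseg=512)[2])

# Train a small spectral dictionary for each source from isolated examples.
W1, _ = nmf(mag(src1), rank=8)
W2, _ = nmf(mag(src2), rank=8)

mix = src1 + src2
_, _, Zmix = stft(mix, fs=fs, nperseg=512)
V = np.abs(Zmix)

# Decompose the mixture on the concatenated, fixed dictionaries.
W = np.concatenate([W1, W2], axis=1)
_, H = nmf(V, rank=W.shape[1], W=W)

# Wiener-like mask from each source's part of the factorisation.
V1, V2 = W1 @ H[:8], W2 @ H[8:]
mask1 = V1 / (V1 + V2 + eps)
_, est1 = istft(mask1 * Zmix, fs=fs, nperseg=512)
print("estimated source 1 samples:", est1.shape[0])
```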

  13. Development of NUPREP PC Version and Input Structures for NUCIRC Single Channel Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Churl; Jun, Ji Su; Park, Joo Hwan

    2007-12-15

    The input file for the steady-state thermal-hydraulic code NUCIRC consists of common channel input data and specific channel input data in the case of a single channel analysis. Even when all the data is ready for the 380 channels' single channel analyses, it takes a long time and requires enormous effort to compose an input file by hand-editing. The automatic pre-processor for this tedious job is the NUPREP code. In this study, a NUPREP PC version has been developed from the source list in the program manual of NUCIRC-MOD2.000, which is imported in the form of an execution file. In this procedure, errors found during PC execution and lost statements were fixed accordingly. It is confirmed that the developed NUPREP code produces the input file correctly for the CANDU-6 single channel analysis. Additionally, the NUCIRC input structure and data format are summarized for a single channel analysis, and the input CARDs required for the creep information of aged channels are listed.

  14. Development of NUPREP PC Version and Input Structures for NUCIRC Single Channel Analyses

    International Nuclear Information System (INIS)

    Yoon, Churl; Jun, Ji Su; Park, Joo Hwan

    2007-12-01

    The input file for the steady-state thermal-hydraulic code NUCIRC consists of common channel input data and specific channel input data in the case of a single channel analysis. Even when all the data is ready for the 380 channels' single channel analyses, it takes a long time and requires enormous effort to compose an input file by hand-editing. The automatic pre-processor for this tedious job is the NUPREP code. In this study, a NUPREP PC version has been developed from the source list in the program manual of NUCIRC-MOD2.000, which is imported in the form of an execution file. In this procedure, errors found during PC execution and lost statements were fixed accordingly. It is confirmed that the developed NUPREP code produces the input file correctly for the CANDU-6 single channel analysis. Additionally, the NUCIRC input structure and data format are summarized for a single channel analysis, and the input CARDs required for the creep information of aged channels are listed.

  15. Neutron transport model for standard calculation experiment

    International Nuclear Information System (INIS)

    Lukhminskij, B.E.; Lyutostanskij, Yu.S.; Lyashchuk, V.I.; Panov, I.V.

    1989-01-01

    Neutron transport calculation algorithms for media of complex composition with a predetermined geometry are realized in the MAMONT code through multigroup representations within the Monte Carlo method. The quality of the code was evaluated by comparison with benchmark experiments. Neutron leakage spectra were calculated in spherically symmetric geometry for iron and polyethylene. The use of the MAMONT code for the metrological support of geophysical tasks is proposed. The code is oriented towards calculations of neutron transport and secondary nuclide accumulation in blankets and geophysical media. 7 refs.; 2 figs.

  16. Spectra for the A = 6 reactions calculated from a three-body resonance model

    Directory of Open Access Journals (Sweden)

    Paris Mark W.

    2016-01-01

    Full Text Available We develop a resonance model of the transition matrix for three-body breakup reactions of the A = 6 system and present calculations for the observed nucleon spectra, which are important for inertial confinement fusion and Big Bang nucleosynthesis (BBN). The model is motivated by the Faddeev approach, where the form of the T matrix is written as a sum over the distinct Jacobi coordinate systems corresponding to the particle configurations (α, n-n) and (n, n-α) to describe the final state. The structure in the spectra comes from the resonances of the two-body subsystems of the three-body final state, namely the singlet (T = 1) nucleon-nucleon (NN) anti-bound resonance, and the Nα resonances designated the ground state (Jπ = 3/2⁻) and first excited state (Jπ = 1/2⁻) of the A = 5 systems 5He and 5Li. These resonances are described in terms of single-level, single-channel R-matrix parameters that are taken from analyses of NN and Nα scattering data. While the resonance parameters are approximately charge symmetric, external charge-dependent effects are included in the penetrabilities, shifts, and hard-sphere phases, and in the level energies to account for internal Coulomb differences. The shapes of the resonance contributions to the spectrum are fixed by other, two-body data, and the only adjustable parameters in the model are the combinatorial amplitudes for the compound system. These are adjusted to reproduce the observed nucleon spectra from measurements at the Omega and NIF facilities. We perform a simultaneous, least-squares fit of the t+t neutron spectra and the 3He+3He proton spectra. Using these amplitudes we make a prediction of the α spectra for both reactions at low energies. Significant differences in the t+t and 3He+3He spectra are due to Coulomb effects.

  17. Shell model calculations for exotic nuclei

    International Nuclear Information System (INIS)

    Brown, B.A.; Wildenthal, B.H.

    1991-01-01

    A review of the shell-model approach to understanding the properties of light exotic nuclei is given. Binding energies in the p and p-sd model spaces and in the sd and sd-pf model spaces; cross-shell excitations around 32Mg, including weak-coupling aspects and mechanisms for lowering the nℏω excitations; beta decay properties of the neutron-rich sd model space, of the p-sd and sd-pf model spaces, and of the proton-rich sd model space; and Coulomb break-up cross sections are discussed. (G.P.) 76 refs.; 12 figs.

  18. The nematocyst extract of Hydra attenuata causes single channel events in lipid bilayers.

    Science.gov (United States)

    Weber, J; Schürholz, T; Neumann, E

    1990-01-01

    The nematocyst extract of Hydra attenuata causes single conductance events in reconstituted planar lipid membranes as well as in inside-out patches derived from liposomes. The smallest single channel conductance level of the toxins is 110 pS. The conductance levels increase stepwise with time up to 2000 pS. These large conductance jumps indicate channel cooperativity. If the membrane-voltage is changed from positive to negative values, the single channel events become undefined and noisy, indicating major reorganizations of the proteins which form the channels. The molecular properties of the ionophoric component(s) of the nematocyst extract may help explain the observed macroscopic effects, such as hemolysis of human erythrocytes, after addition of the nematocyst extract.

  19. Development of TUF-ELOCA - a software tool for integrated single-channel thermal-hydraulic and fuel element analyses

    International Nuclear Information System (INIS)

    Popescu, A.I.; Wu, E.; Yousef, W.W.; Pascoe, J.; Parlatan, Y.; Kwee, M.

    2006-01-01

    The TUF-ELOCA tool couples the TUF and ELOCA codes to enable an integrated thermal-hydraulic and fuel element analysis for a single channel during transient conditions. The coupled architecture is based on TUF as the parent process controlling multiple ELOCA executions that simulate the fuel elements behaviour and is scalable to different fuel channel designs. The coupling ensures a proper feedback between the coolant conditions and fuel elements response, eliminates model duplications, and constitutes an improvement from the prediction accuracy point of view. The communication interfaces are based on PVM and allow parallelization of the fuel element simulations. Developmental testing results are presented showing realistic predictions for the fuel channel behaviour during a transient. (author)

  20. Single-channel data-validation technique for ΔP cells in turbulent flow

    International Nuclear Information System (INIS)

    Goodrich, L.D.; Brower, R.W.

    1983-01-01

    This paper discusses a single-channel-analysis, data-validation technique which is applicable to all flow-measuring devices in turbulent conditions, can be applied in either real time or batch mode, and allows online correction of zero offset and slope coefficients. This technique of validating flow measurements eliminates the need for multiple measuring devices, thus reducing the complexity of the overall instrumentation system

  1. Three-dimensional (3-D) video systems: bi-channel or single-channel optics?

    Science.gov (United States)

    van Bergen, P; Kunert, W; Buess, G F

    1999-11-01

    This paper presents the results of a comparison between two different three-dimensional (3-D) video systems, one with single-channel optics, the other with bi-channel optics. The latter integrates two lens systems, each transferring one half of the stereoscopic image; the former uses only one lens system, similar to a two-dimensional (2-D) endoscope, which transfers the complete stereoscopic picture. In our training centre for minimally invasive surgery, surgeons were involved in basic and advanced laparoscopic courses using both a 2-D system and the two 3-D video systems. They completed analog scale questionnaires in order to record a subjective impression of the relative convenience of operating in 2-D and 3-D vision, and to identify perceived deficiencies in the 3-D system. As an objective test, different experimental tasks were developed, in order to measure performance times and to count pre-defined errors made while using the two 3-D video systems and the 2-D system. Using the bi-channel optical system, the surgeon has a heightened spatial perception, and can work faster and more safely than with a single-channel system. However, single-channel optics allow the use of an angulated endoscope, and the free rotation of the optics relative to the camera, which is necessary for some operative applications.

  2. An automatic algorithm for blink-artifact suppression based on iterative template matching: application to single channel recording of cortical auditory evoked potentials

    Science.gov (United States)

    Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram

    2018-02-01

    Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with the blink activity from a single channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents an adequate performance in detecting and suppressing blink-artifacts from a single channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm provided as supporting material.
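
    A much-simplified sketch of the template-matching idea (not the published ITMS algorithm) is shown below: blink-like peaks are detected, averaged into a subject-specific template, and a least-squares-scaled copy of the template is subtracted at each detected blink; the thresholds and the synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def suppress_blinks(eeg, fs, half_width=0.3, n_iter=3):
    """Simplified template-matching blink suppression on one EEG channel.

    Each pass (1) detects blink-like peaks, (2) averages the surrounding
    epochs into a blink template, and (3) subtracts a least-squares-scaled
    copy of the template at each detected blink. Segments without blinks
    are left untouched.
    """
    x = eeg.copy()
    w = int(half_width * fs)
    template = None
    for _ in range(n_iter):
        peaks, _ = find_peaks(x, height=4 * np.std(x), distance=2 * w)
        peaks = peaks[(peaks > w) & (peaks < len(x) - w)]
        if len(peaks) == 0:
            break
        epochs = np.stack([x[p - w:p + w] for p in peaks])
        template = epochs.mean(axis=0)
        for p in peaks:
            seg = x[p - w:p + w]
            scale = (seg @ template) / (template @ template)  # LS amplitude fit
            x[p - w:p + w] = seg - scale * template
    return x, template

# Toy data: alpha-like background plus stereotyped blink deflections.
rng = np.random.default_rng(7)
fs, dur = 250, 20
t = np.arange(fs * dur) / fs
eeg = 10 * np.sin(2 * np.pi * 10 * t) + 5 * rng.standard_normal(t.size)
blink = 80 * np.exp(-0.5 * ((np.arange(-75, 75) / 20.0) ** 2))
for onset in (2.0, 7.5, 13.0, 18.2):
    i = int(onset * fs)
    eeg[i:i + blink.size] += blink

clean, tmpl = suppress_blinks(eeg, fs)
print(f"residual std near a blink: {clean[500:650].std():.1f}")
```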

  3. Uncertainty calculation in transport models and forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Prato, Carlo Giacomo

    Forthcoming: European Journal of Transport and Infrastructure Research, 15-3, 64-72. The last paper examined uncertainty in the spatial composition of residence and workplace locations in the Danish National Transport Model. Despite the evidence that spatial structure influences travel behaviour... to increase the quality of the decision process and to develop robust or adaptive plans. In fact, project evaluation processes that do not take model uncertainty into account produce results that are not fully informative and potentially misleading, thus increasing the risk inherent in the decision to be taken...

  4. Temperature Calculations in the Coastal Modeling System

    Science.gov (United States)

    2017-04-01

    with the change of water turbidity in coastal and estuarine systems. Water quality and ecological models often require input of water temperature...

  5. Post hoc pattern matching: assigning significance to statistically defined expression patterns in single channel microarray data

    Directory of Open Access Journals (Sweden)

    Blalock Eric M

    2007-07-01

    Full Text Available Background: Researchers using RNA expression microarrays in experimental designs with more than two treatment groups often identify statistically significant genes with ANOVA approaches. However, the ANOVA test does not discriminate which of the multiple treatment groups differ from one another. Thus, post hoc tests, such as linear contrasts, template correlations, and pairwise comparisons, are used. Linear contrasts and template correlations work extremely well, especially when the researcher has a priori information pointing to a particular pattern/template among the different treatment groups. Further, all pairwise comparisons can be used to identify particular, treatment group-dependent patterns of gene expression. However, these approaches are biased by the researcher's assumptions, and some treatment-based patterns may fail to be detected using these approaches. Finally, different patterns may have different probabilities of occurring by chance, importantly influencing researchers' conclusions about a pattern and its constituent genes. Results: We developed a four-step, post hoc pattern matching (PPM) algorithm to automate single channel gene expression pattern identification/significance. First, 1-Way Analysis of Variance (ANOVA), coupled with post hoc 'all pairwise' comparisons, is calculated for all genes. Second, for each ANOVA-significant gene, all pairwise contrast results are encoded to create unique pattern ID numbers. The number of genes found in each pattern in the data is identified as that pattern's 'actual' frequency. Third, using Monte Carlo simulations, those patterns' frequencies are estimated in random data (the 'random' gene pattern frequency). Fourth, a Z-score for overrepresentation of the pattern is calculated ('actual' against 'random' gene pattern frequencies). We wrote a Visual Basic program (StatiGen) that automates the PPM procedure, constructs an Excel workbook with standardized graphs of overrepresented patterns, and lists of
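
    The core of steps two to four — encoding pairwise outcomes into pattern IDs, counting their 'actual' frequencies, estimating 'random' frequencies by Monte Carlo label shuffling, and computing an overrepresentation Z-score — can be sketched as follows; the ANOVA pre-filtering step is omitted, and the data, significance threshold and number of simulations are illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(42)
n_genes, n_groups, n_reps = 500, 3, 5
# Synthetic expression matrix: genes x (groups * replicates).
data = rng.normal(size=(n_genes, n_groups * n_reps))
data[:50, :n_reps] += 2.0            # first 50 genes up-regulated in group 0
groups = np.repeat(np.arange(n_groups), n_reps)

def pattern_ids(X, groups, alpha=0.05):
    """Encode each gene's pairwise-comparison outcomes as a pattern ID string."""
    ids = []
    for gene in X:
        code = []
        for a, b in combinations(range(n_groups), 2):
            t, p = stats.ttest_ind(gene[groups == a], gene[groups == b])
            code.append("0" if p >= alpha else ("+" if t > 0 else "-"))
        ids.append("".join(code))
    return np.array(ids)

def frequencies(ids):
    uniq, counts = np.unique(ids, return_counts=True)
    return dict(zip(uniq, counts))

actual = frequencies(pattern_ids(data, groups))

# Monte Carlo: pattern frequencies expected in label-shuffled ("random") data.
n_sims = 20
sim_counts = {pat: [] for pat in actual}
for _ in range(n_sims):
    f = frequencies(pattern_ids(data, rng.permutation(groups)))
    for pat in sim_counts:
        sim_counts[pat].append(f.get(pat, 0))

for pat, n_actual in sorted(actual.items(), key=lambda kv: -kv[1])[:5]:
    sims = np.array(sim_counts[pat])
    z = (n_actual - sims.mean()) / (sims.std() + 1e-9)
    print(f"pattern {pat}: actual={n_actual:4d}  random={sims.mean():6.1f}  Z={z:6.1f}")
```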

  6. Development of new model for high explosives detonation parameters calculation

    Directory of Open Access Journals (Sweden)

    Jeremić Radun

    2012-01-01

    Full Text Available A simple semi-empirical model for the calculation of detonation pressure and velocity of CHNO explosives has been developed, based on experimental values of detonation parameters. The model uses Avakyan’s method for determining the chemical composition of the detonation products and is applicable over a wide range of densities. Compared with the well-known Kamlet method and with a numerical detonation model based on the BKW equation of state, the values calculated with the proposed model are significantly more accurate.

  7. Single channel and WDM transmission of 28 Gbaud zero-guard-interval CO-OFDM.

    Science.gov (United States)

    Zhuge, Qunbi; Morsy-Osman, Mohamed; Mousa-Pasandi, Mohammad E; Xu, Xian; Chagnon, Mathieu; El-Sahn, Ziad A; Chen, Chen; Plant, David V

    2012-12-10

    We report on the experimental demonstration of single channel 28 Gbaud QPSK and 16-QAM zero-guard-interval (ZGI) CO-OFDM transmission with only 1.34% overhead for OFDM processing. The achieved transmission distance is 5120 km for QPSK assuming a 7% forward error correction (FEC) overhead, and 1280 km for 16-QAM assuming a 20% FEC overhead. We also demonstrate the improved tolerance of ZGI CO-OFDM to residual inter-symbol interference compared to reduced-guard-interval (RGI) CO-OFDM. In addition, we report an 8-channel wavelength-division multiplexing (WDM) transmission of 28 Gbaud QPSK ZGI CO-OFDM signals over 4160 km.

  8. Single-Channel Noise Reduction using Unified Joint Diagonalization and Optimal Filtering

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Benesty, Jacob; Jensen, Jesper Rindom

    2014-01-01

    In this paper, the important problem of single-channel noise reduction is treated from a new perspective. The problem is posed as a filtering problem based on joint diagonalization of the covariance matrices of the desired and noise signals. More specifically, the eigenvectors from the joint... We consider two cases, where, respectively, no distortion and distortion are incurred on the desired signal. The former can be achieved when the covariance matrix of the desired signal is rank deficient, which is the case, for example, for voiced speech. In the latter case, the covariance matrix...
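
    The joint-diagonalization machinery referred to above can be illustrated with a small sketch: a generalized eigendecomposition of the desired-signal and noise covariance matrices jointly diagonalizes both, and a gain per mode then yields a noise-reduction filter. The filter below is the classical MMSE (Wiener) one expressed in the jointly diagonalized basis, not the specific distortion-controlled filters derived in the paper, and the covariance matrices are synthetic.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
L = 32                                     # frame length / covariance size

# Synthetic covariances: a rank-deficient desired-signal covariance (as for
# voiced speech) and a full-rank noise covariance.
A = rng.standard_normal((L, 8))
Rd = A @ A.T                               # rank 8 < L
C = rng.standard_normal((L, 4 * L))
Rn = C @ C.T / (4 * L) + 0.1 * np.eye(L)   # symmetric positive definite

# Joint diagonalization: B.T @ Rd @ B = diag(lam), B.T @ Rn @ B = I.
lam, B = eigh(Rd, Rn)

# Wiener-type gain per jointly diagonalized mode; modes with lam ~ 0 carry no
# desired signal and are suppressed entirely.
G = np.diag(lam / (lam + 1.0))
H = np.linalg.inv(B.T) @ G @ B.T

# The same filter written directly in the time domain: Rd @ (Rd + Rn)^-1.
print(np.allclose(H, Rd @ np.linalg.inv(Rd + Rn)))    # True
```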

  9. Span length and information rate optimisation in optical transmission systems using single-channel digital backpropagation.

    Science.gov (United States)

    Karanov, Boris; Xu, Tianhua; Shevchenko, Nikita A; Lavery, Domaniç; Killey, Robert I; Bayvel, Polina

    2017-10-16

    The optimisation of span length when designing optical communication systems is important from both performance and cost perspectives. In this paper, the optimisation of inter-amplifier spacing and the potential increase of span length at fixed information rates in optical communication systems with practically feasible nonlinearity compensation schemes have been investigated. It is found that in DP-16QAM, DP-64QAM and DP-256QAM systems with practical transceiver noise limitations, single-channel digital backpropagation can allow a 50% reduction in the number of amplifiers without sacrificing information rates compared to systems with optimal span lengths and linear compensation.

  10. 0.4 THz Photonic-Wireless Link With 106 Gb/s Single Channel Bitrate

    DEFF Research Database (Denmark)

    Jia, Shi; Pang, Xiaodan; Ozolins, Oskars

    2018-01-01

    We experimentally demonstrate a single channel 0.4 THz photonic-wireless link achieving a net data rate of beyond 100 Gb/s by using a single pair of THz emitter and receiver, without employing any spatial/frequency division multiplexing techniques. The high throughput up to 106 Gb/s within a single THz channel is enabled by combining a spectrally efficient modulation format, an ultrabroadband THz transceiver and an advanced digital signal processing routine. Besides that, our demonstration from a system-wide implementation viewpoint also features high transmission stability, and hence shows its great...

  11. A perspective on single-channel frequency-domain speech enhancement

    CERN Document Server

    Benesty, Jacob

    2010-01-01

    This book focuses on a class of single-channel noise reduction methods that are performed in the frequency domain via the short-time Fourier transform (STFT). The simplicity and relative effectiveness of this class of approaches make them the dominant choice in practical systems. Even though many popular algorithms have been proposed through more than four decades of continuous research, there are a number of critical areas where our understanding and capabilities still remain quite rudimentary, especially with respect to the relationship between noise reduction and speech distortion. All exis

  12. Precipitates/Salts Model Calculations for Various Drift Temperature Environments

    International Nuclear Information System (INIS)

    Marnier, P.

    2001-01-01

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b)

  13. [Study on Sleep Staging Based on Support Vector Machines and Feature Selection in Single Channel Electroencephalogram].

    Science.gov (United States)

    Lin, Xiujing; Xia, Yongming; Qian, Songrong

    2015-06-01

    Sleep electroencephalogram (EEG) is an important index in diagnosing sleep disorders and related diseases. Manual sleep staging is time-consuming and often influenced by subjective factors. Existing automatic sleep staging methods have high complexity and a low accuracy rate. A sleep staging method based on support vector machines (SVM) and feature selection using a single-channel EEG signal is proposed in this paper. Thirty-eight features were extracted from the single-channel EEG signal. Then, based on the definition of the F-Score feature selection method, it was extended to the multiclass case with an added eliminate factor in order to find proper features, which were used as SVM classifier inputs. The eliminate factor was adopted to reduce the negative interaction of features on the result. Research on the F-Score with an added eliminate factor was further carried out with data from a standard open source database, and the results were compared with no feature selection and with standard F-Score feature selection. The results showed that the present method could effectively improve the sleep staging accuracy and reduce the computation time.
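
    A minimal sketch of F-score-based feature ranking followed by SVM classification is given below; it uses a plain multiclass F-score (between-class scatter of the feature means over the summed within-class variances) on synthetic features and does not reproduce the paper's added eliminate factor.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def f_scores(X, y):
    """Multiclass F-score per feature: between-class scatter of the feature
    means divided by the summed within-class variances."""
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += (Xc.mean(axis=0) - overall) ** 2
        den += Xc.var(axis=0, ddof=1)
    return num / (den + 1e-12)

# Synthetic stand-in for 38 features extracted from single-channel EEG epochs
# across 5 sleep stages; only the first 10 features carry class information.
rng = np.random.default_rng(0)
n_epochs, n_features, n_stages = 600, 38, 5
y = rng.integers(0, n_stages, n_epochs)
X = rng.standard_normal((n_epochs, n_features))
X[:, :10] += y[:, None] * 0.8

scores = f_scores(X, y)
top = np.argsort(scores)[::-1][:10]            # keep the 10 best-ranked features
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("all features :", cross_val_score(clf, X, y, cv=5).mean().round(3))
print("selected     :", cross_val_score(clf, X[:, top], y, cv=5).mean().round(3))
```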

  14. Sequential Total Variation Denoising for the Extraction of Fetal ECG from Single-Channel Maternal Abdominal ECG.

    Science.gov (United States)

    Lee, Kwang Jin; Lee, Boreom

    2016-07-01

    Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence Doppler ultrasound signals. Moreover, CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). Moreover, we used real data from PhysioNet fetal ECG databases for the evaluation of the algorithm performance. The R-peak detection rate is calculated to evaluate the performance of our algorithm. Our approach could not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR.
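
    Two of the building blocks mentioned above — 1-D total variation denoising and R-peak-based rate estimation — can be sketched as follows on a synthetic ECG-like trace; the majorization-minimization TV solver, regularization weight and toy signal are assumptions and do not reproduce the sequential scheme of the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.signal import find_peaks

def tv_denoise(y, lam=1.0, n_iter=30, eps=1e-8):
    """1-D total variation denoising via a simple majorization-minimization
    scheme: repeatedly solve (I + lam * D^T W D) x = y with W = 1/|Dx|."""
    n = len(y)
    D = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    I = sparse.eye(n)
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)
        x = spsolve((I + lam * D.T @ sparse.diags(w) @ D).tocsc(), y)
    return x

# Synthetic ECG-like trace: sharp R-peaks at ~2.2 Hz (fetal-like rate) in noise.
fs, dur = 250, 10
rng = np.random.default_rng(1)
t = np.arange(fs * dur) / fs
rr = 1.0 / 2.2
peaks_true = (np.arange(rr, dur, rr) * fs).astype(int)
ecg = 0.3 * rng.standard_normal(t.size)
ecg[peaks_true] += 5.0

den = tv_denoise(ecg, lam=1.0)
peaks, _ = find_peaks(den, height=2.0, distance=int(0.25 * fs))
fhr = 60.0 / (np.diff(peaks).mean() / fs)
print(f"estimated heart rate: {fhr:.0f} bpm")    # ~132 bpm for the toy trace
```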

  15. Sequential Total Variation Denoising for the Extraction of Fetal ECG from Single-Channel Maternal Abdominal ECG

    Directory of Open Access Journals (Sweden)

    Kwang Jin Lee

    2016-07-01

    Full Text Available Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence Doppler ultrasound signals. Moreover, CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). Moreover, we used real data from PhysioNet fetal ECG databases for the evaluation of the algorithm performance. The R-peak detection rate is calculated to evaluate the performance of our algorithm. Our approach could not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR.

  16. Sequential Total Variation Denoising for the Extraction of Fetal ECG from Single-Channel Maternal Abdominal ECG

    Science.gov (United States)

    Lee, Kwang Jin; Lee, Boreom

    2016-01-01

    Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence Doppler ultrasound signals. Moreover, CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). Moreover, we used real data from PhysioNet fetal ECG databases for the evaluation of the algorithm performance. The R-peak detection rate is calculated to evaluate the performance of our algorithm. Our approach could not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR. PMID:27376296

  17. Two-dimensional probability density analysis of single channel currents from reconstituted acetylcholine receptors and sodium channels.

    Science.gov (United States)

    Keller, B U; Montal, M S; Hartshorne, R P; Montal, M

    1990-01-01

    Two-dimensional probability density analysis of single channel current recordings was applied to two purified channel proteins reconstituted in planar lipid bilayers: Torpedo acetylcholine receptors and voltage-sensitive sodium channels from rat brain. The information contained in the dynamic history of the gating process, i.e., the time sequence of opening and closing events was extracted from two-dimensional distributions of transitions between identifiable states. This approach allows one to identify kinetic models consistent with the observables. Gating of acetylcholine receptors expresses "memory" of the transition history: the receptor has two channel open (O) states; the residence time in each of them strongly depends on both the preceding open time and the intervening closed interval. Correspondingly, the residence time in the closed (C) states depends on both the preceding open time and the preceding closed time. This result confirms the scheme that considers, at least, two transition pathways between the open and closed states and extends the details of the model in that it defines that the short-lived open state is primarily entered from long-lived closed states while the long-lived open state is accessed mainly through short-lived closed states. Since ligand binding to the acetylcholine-binding sites is a reaction with channel closed states, we infer that the longest closed state (approximately 19 ms) is unliganded, the intermediate closed state (approximately 2 ms) is singly liganded and makes transitions to the short open state (approximately 0.5 ms) and the shortest closed state (approximately 0.4 ms) is doubly liganded and isomerizes to long open states (approximately 5 ms). This is the simplest interpretation consistent with available data. In contrast, sodium channels modified with batrachotoxin to eliminate inactivation show no correlation in the sequence of channel opening and closing events, i.e., have no memory of the transition history. This
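
    The kind of two-dimensional dwell-time analysis described above can be sketched by building the joint distribution of each open duration and the subsequent closed duration; the synthetic gating model below (with coupled fast/slow open and closed time constants loosely inspired by the values quoted in the abstract) is an illustrative assumption, not a fit to the reconstituted-channel data.

```python
import numpy as np

rng = np.random.default_rng(11)
n_events = 20000

# Synthetic gating with "memory": short openings tend to be followed by long
# closures and vice versa (two coupled open/closed time constants, in ms).
open_fast, open_slow = 0.5, 5.0
closed_fast, closed_slow = 0.4, 19.0
use_slow_open = rng.random(n_events) < 0.5
t_open = rng.exponential(np.where(use_slow_open, open_slow, open_fast))
t_closed = rng.exponential(np.where(use_slow_open, closed_fast, closed_slow))

# Two-dimensional distribution of (open duration, subsequent closed duration).
H, xedges, yedges = np.histogram2d(np.log10(t_open), np.log10(t_closed), bins=40)

# A simple summary of the correlation that the 2-D density reveals.
r = np.corrcoef(np.log10(t_open), np.log10(t_closed))[0, 1]
print(f"correlation of log open vs. subsequent closed durations: {r:.2f}")
```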

  18. In-Drift Microbial Communities Model Validation Calculations

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  19. In-Drift Microbial Communities Model Validation Calculation

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-10-31

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  20. In-Drift Microbial Communities Model Validation Calculations

    International Nuclear Information System (INIS)

    Jolley, D.M.

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  1. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    D.M. Jolley

    2001-12-18

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  2. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    International Nuclear Information System (INIS)

    D.M. Jolley

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  3. The accuracy of heavy ion optical model calculations

    International Nuclear Information System (INIS)

    Kozik, T.

    1980-01-01

    The sources and magnitude of numerical errors in heavy ion optical model calculations are investigated in detail, using the example of ²⁰Ne + ²⁴Mg scattering at E(lab) = 100 MeV. (author)

  4. Modeling and Calculator Tools for State and Local Transportation Resources

    Science.gov (United States)

    Air quality models, calculators, guidance and strategies are offered for estimating and projecting vehicle air pollution, including ozone or smog-forming pollutants, particulate matter and other emissions that pose public health and air quality concerns.

  5. A methodology for constructing the calculation model of scientific spreadsheets

    NARCIS (Netherlands)

    Vos, de M.; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.

    2015-01-01

    Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are

  6. Mathematical models for calculating radiation dose to the fetus

    International Nuclear Information System (INIS)

    Watson, E.E.

    1992-01-01

    Estimates of radiation dose from radionuclides inside the body are calculated on the basis of energy deposition in mathematical models representing the organs and tissues of the human body. Complex models may be used with radiation transport codes to calculate the fraction of emitted energy that is absorbed in a target tissue even at a distance from the source. Other models may be simple geometric shapes for which absorbed fractions of energy have already been calculated. Models of Reference Man, the 15-year-old (Reference Woman), the 10-year-old, the five-year-old, the one-year-old, and the newborn have been developed and used for calculating specific absorbed fractions (absorbed fractions of energy per unit mass) for several different photon energies and many different source-target combinations. The Reference woman model is adequate for calculating energy deposition in the uterus during the first few weeks of pregnancy. During the course of pregnancy, the embryo/fetus increases rapidly in size and thus requires several models for calculating absorbed fractions. In addition, the increases in size and changes in shape of the uterus and fetus result in the repositioning of the maternal organs and in different geometric relationships among the organs and the fetus. This is especially true of the excretory organs such as the urinary bladder and the various sections of the gastrointestinal tract. Several models have been developed for calculating absorbed fractions of energy in the fetus, including models of the uterus and fetus for each month of pregnancy and complete models of the pregnant woman at the end of each trimester. In this paper, the available models and the appropriate use of each will be discussed. (Author) 19 refs., 7 figs
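
    The formalism behind such models expresses the absorbed dose to a target as the cumulated activity in each source region multiplied by an S value assembled from emission energies, yields and specific absorbed fractions. A minimal sketch of that bookkeeping is given below; every numerical value (emissions, specific absorbed fractions, cumulated activities) is a purely illustrative assumption and is not taken from the paper.

```python
# Minimal sketch of a MIRD-type dose estimate:
#   D(target) = sum over sources of  Ã(source) * S(target <- source),
# with S built from emission yields, energies and specific absorbed fractions (SAFs).
# All numbers below are illustrative assumptions, not data from the report.
MEV_TO_J = 1.602e-13  # joules per MeV

def s_value(emissions, saf):
    """S value in Gy per decay: sum over emissions of yield * energy * SAF,
    where the SAF (specific absorbed fraction) is in kg^-1."""
    return sum(y * e_mev * MEV_TO_J * saf for (y, e_mev) in emissions)

# hypothetical photon emissions: (yield per decay, energy in MeV)
emissions = [(0.85, 0.14), (0.10, 0.018)]

# hypothetical specific absorbed fractions, fetus <- source region [kg^-1]
saf_fetus_from = {"uterine_contents": 2.0e-1, "bladder": 5.0e-2, "liver": 1.0e-3}

# hypothetical cumulated activities in each source region [decays]
cumulated = {"uterine_contents": 3.0e9, "bladder": 1.0e10, "liver": 5.0e9}

dose_gy = sum(cumulated[src] * s_value(emissions, saf)
              for src, saf in saf_fetus_from.items())
print(f"fetal dose with toy numbers: {dose_gy:.2e} Gy")
```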

  7. Effective UV radiation from model calculations and measurements

    Science.gov (United States)

    Feister, Uwe; Grewe, Rolf

    1994-01-01

    Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.

  8. Model for calculating the boron concentration in PWR type reactors

    International Nuclear Information System (INIS)

    Reis Martins Junior, L.L. dos; Vanni, E.A.

    1986-01-01

    A PWR boron concentration model has been developed for use with the RETRAN code. The concentration model calculates the boron mass balance in the primary circuit as the injected boron mixes and is transported through the same circuit. RETRAN control blocks are used to calculate the boron concentration in fluid volumes during steady-state and transient conditions. The boron reactivity worth is obtained from the core concentration and used in the RETRAN point kinetics model. An FSAR-type analysis of a steam line break accident in the Angra I plant was selected to test the model, and the results obtained indicate successful performance. (Author) [pt]

  9. HOM study and parameter calculation of the TESLA cavity model

    CERN Document Server

    Zeng, Ri-Hua; Gerigk Frank; Wang Guang-Wei; Wegner Rolf; Liu Rong; Schuh Marcel

    2010-01-01

    The Superconducting Proton Linac (SPL) is the project for a superconducting, high-current H⁻ accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis of the cavity model using simulation tools are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries on various cavity models. In the calculations, the HFSS scripting language was used to create scripts that automatically calculate the parameters of the modes in these cavity models (these scripts are also applicable to other cavities with different cell numbers and geometric structures). The results calculated automatically are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the base for further simulation of the SPL cavities.

  10. Microbial Communities Model Parameter Calculation for TSPA/SR

    Energy Technology Data Exchange (ETDEWEB)

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c) with the exception of Section II-5.3. Second, this calculation provides the information necessary to supersede the following DTN: MO9909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy-limiting calculations of potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: MO0012MAJIONIS.000 that is intended to replace the currently cited DTN: GS980908312322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  11. Microbial Communities Model Parameter Calculation for TSPA/SR

    International Nuclear Information System (INIS)

    D. Jolley

    2001-01-01

    This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c) with the exception of Section II-5.3. Second, this calculation provides the information necessary to supersede the following DTN: MO9909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy-limiting calculations of potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: MO0012MAJIONIS.000 that is intended to replace the currently cited DTN: GS980908312322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  12. Subjective and Objective Quality Assessment of Single-Channel Speech Separation Algorithms

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Saeidi, Rahim; Christensen, Mads Græsbøll

    2012-01-01

    Previous studies on performance evaluation of single-channel speech separation (SCSS) algorithms mostly focused on automatic speech recognition (ASR) accuracy as their performance measure. Assessing the separated signals by different metrics other than this has the benefit that the results...... are expected to carry over to other applications beyond ASR. In this paper, in addition to conventional speech quality metrics (PESQ and SNRloss), we also evaluate the separation systems' output using different source separation metrics: blind source separation evaluation (BSS EVAL) and perceptual evaluation...... that PESQ and PEASS quality metrics predict well the subjective quality of separated signals obtained by the separation systems. From the results it is observed that the short-time objective intelligibility (STOI) measure predicts the speech intelligibility results....

  13. Digital single-channel seismic-reflection data from western Santa Monica basin

    Science.gov (United States)

    Normark, William R.; Piper, David J.W.; Sliter, Ray W.; Triezenberg, Peter; Gutmacher, Christina E.

    2006-01-01

    During a collaborative project in 1992, Geological Survey of Canada and United States Geological Survey scientists obtained about 850 line-km of high-quality single-channel boomer and sleeve-gun seismic-reflection profiles across Hueneme, Mugu and Dume submarine fans, Santa Monica Basin, off southern California. The goals of this work were to better understand the processes that lead to the formation of sandy submarine fans and the role of sea-level changes in controlling fan development. This report includes a trackline map of the area surveyed, as well as images of the sleeve-gun profiles and the opportunity to download both images and digital data files (SEG-Y) of all the sleeve-gun profiles.

  14. Single-channel labyrinthine metasurfaces as perfect sound absorbers with tunable bandwidth

    Science.gov (United States)

    Liu, Liu; Chang, Huiting; Zhang, Chi; Hu, Xinhua

    2017-08-01

    Perfect sound absorbers with a deep-subwavelength thickness are important to applications such as noise reduction and sound detection. But their absorption bandwidths are usually narrow and difficult to adjust. A recent solution for this problem relies on multiple-resonator metasurfaces, which are hard to fabricate. Here, we report on the design, fabrication, and characterization of a single-channel labyrinthine metasurface, which allows total sound absorption at resonant frequency when appropriate amounts of porous media (or critical sound losses) are introduced in the channels. The absorption bandwidth can be tuned by changing the cross-sectional areas of channels. A tradeoff is found between the absorption bandwidth and the metasurface thickness. However, large tunability in the relative absorption bandwidth (from 17% to 121%) is still attainable by such metasurfaces with a deep-subwavelength thickness (0.03-0.13λ).

  15. Drowsiness detection for single channel EEG by DWT best m-term approximation

    Directory of Open Access Journals (Sweden)

    Tiago da Silveira

    Introduction: In this paper we propose a promising new technique for drowsiness detection. It consists of applying the best m-term approximation on a single-channel electroencephalography (EEG) signal preprocessed through a discrete wavelet transform. Methods: In order to classify EEG epochs as awake or drowsy states, the most significant m terms from the wavelet expansion of an EEG signal are selected according to the magnitude of their coefficients related to the alpha and beta rhythms. Results: By using a simple thresholding strategy it provides hit rates comparable to those using more complex techniques. It was tested on a set of 6 hours and 50 minutes of EEG drowsiness signals from the PhysioNet Sleep Database, yielding an overall sensitivity (TPR) of 84.98% and a precision (PPV) of 98.65%. Conclusion: The method has proved itself efficient at separating data from different brain rhythms, thus alleviating the requirement for complex post-processing classification algorithms.
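
    A minimal sketch of the general idea, not the published algorithm, is given below: decompose one single-channel EEG epoch with a discrete wavelet transform, keep the m largest-magnitude coefficients in the subbands that roughly cover the alpha and beta bands, and threshold the retained energy. The wavelet ('db4'), decomposition level, subband-to-rhythm mapping (assuming a 100 Hz sampling rate), m and the decision threshold are all assumptions.

```python
# Minimal sketch of a best m-term approximation score on one EEG epoch.
# Not the published algorithm: wavelet, level, subband mapping (for fs = 100 Hz),
# m and the decision threshold are all illustrative assumptions.
import numpy as np
import pywt  # PyWavelets

def drowsiness_score(epoch, m=32):
    coeffs = pywt.wavedec(epoch, "db4", level=4)   # [cA4, cD4, cD3, cD2, cD1]
    # for fs = 100 Hz: cD3 ~ 6.25-12.5 Hz (alpha-ish), cD2 ~ 12.5-25 Hz (beta-ish)
    band = np.concatenate([coeffs[2], coeffs[3]])
    idx = np.argsort(np.abs(band))[::-1][:m]       # m largest-magnitude terms
    return np.sum(band[idx] ** 2) / (np.sum(band ** 2) + 1e-12)

def classify(epoch, thr=0.85):
    """Toy rule: energy strongly concentrated in few alpha/beta terms is labelled
    'drowsy' here; the threshold is purely illustrative."""
    return "drowsy" if drowsiness_score(epoch) > thr else "awake"

if __name__ == "__main__":
    fs = 100.0
    t = np.arange(0, 30, 1 / fs)                   # one 30 s epoch
    epoch = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
    print("m-term energy ratio:", round(drowsiness_score(epoch), 3),
          "->", classify(epoch))
```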

  16. Study on single-channel signals of water Cherenkov detector array for the LHAASO project

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.C., E-mail: lihuicai@ihep.ac.cn [University of Nankai, Tianjin 300071 (China); Yao, Z.G.; Chen, M.J. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Yu, C.X. [University of Nankai, Tianjin 300071 (China); Zha, M.; Wu, H.R.; Gao, B.; Wang, X.J. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Liu, J.Y.; Liao, W.Y. [University of Nankai, Tianjin 300071 (China); Huang, D.Z. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2017-05-11

    The Large High Altitude Air Shower Observatory (LHAASO) is planned to be built at Daocheng, Sichuan Province, China. The water Cherenkov detector array (WCDA), with an area of 78,000 m² and a capacity of 350,000 tons of purified water, is one of the major components of the LHAASO project. A 9-cell detector prototype array has been built at the Yangbajing site, Tibet, China to comprehensively understand the water Cherenkov technique and investigate the engineering issues of WCDA. In this paper, the rate and charge distribution of single-channel signals are evaluated using a fully detailed Monte Carlo simulation. The results are discussed and compared with the results obtained with the prototype array.

  17. batman: BAsic Transit Model cAlculatioN in Python

    Science.gov (United States)

    Kreidberg, Laura

    2015-11-01

    I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
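
    A minimal usage example, following the package's documented interface, is given below; the parameter values are arbitrary and chosen only to produce a plausible transit-like light curve.

```python
# Minimal usage example of the batman package (parameter values are illustrative).
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                  # time of inferior conjunction
params.per = 3.5                 # orbital period [days]
params.rp = 0.1                  # planet radius [stellar radii]
params.a = 12.0                  # semi-major axis [stellar radii]
params.inc = 88.0                # orbital inclination [deg]
params.ecc = 0.0                 # eccentricity
params.w = 90.0                  # longitude of periastron [deg]
params.limb_dark = "quadratic"   # limb darkening law
params.u = [0.1, 0.3]            # limb darkening coefficients

t = np.linspace(-0.05, 0.05, 1000)     # times at which to evaluate [days]
m = batman.TransitModel(params, t)     # initialize the model (the slow step)
flux = m.light_curve(params)           # fast evaluation for a parameter set
print("transit depth:", 1.0 - flux.min())   # roughly rp**2
```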

  18. Microscopic interacting boson model calculations for even–even ...

    Indian Academy of Sciences (India)

    one of the goals of the present study is to test interacting boson model calculations in the mass region of A ∼= 130 by comparing them with some previous experimental and theoretical results. The interacting boson model offers a simple Hamiltonian, capable of describing collective nuclear properties across a wide range of ...

  19. Calculating gait kinematics using MR-based kinematic models.

    Science.gov (United States)

    Scheys, Lennart; Desloovere, Kaat; Spaepen, Arthur; Suetens, Paul; Jonkers, Ilse

    2011-02-01

    Rescaling generic models is the most frequently applied approach in generating biomechanical models for inverse kinematics. Nevertheless it is well known that this procedure introduces errors in calculated gait kinematics due to: (1) errors associated with palpation of anatomical landmarks, (2) inaccuracies in the definition of joint coordinate systems. Based on magnetic resonance (MR) images, more accurate, subject-specific kinematic models can be built that are significantly less sensitive to both error types. We studied the difference between the two modelling techniques by quantifying differences in calculated hip and knee joint kinematics during gait. In a clinically relevant patient group of 7 pediatric cerebral palsy (CP) subjects with increased femoral anteversion, gait kinematics were calculated using (1) rescaled generic kinematic models and (2) subject-specific MR-based models. In addition, both sets of kinematics were compared to those obtained using the standard clinical data processing workflow. Inverse kinematics, calculated using rescaled generic models or the standard clinical workflow, differed largely compared to kinematics calculated using subject-specific MR-based kinematic models. The kinematic differences were most pronounced in the sagittal and transverse planes (hip and knee flexion, hip rotation). This study shows that MR-based kinematic models improve the reliability of gait kinematics, compared to generic models based on normal subjects. This is the case especially in CP subjects where bony deformations may alter the relative configuration of joint coordinate systems. Whilst high cost impedes the implementation of this modeling technique, our results demonstrate that efforts should be made to improve the level of subject-specific detail in the joint axes determination. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Optimizing the calculation grid for atmospheric dispersion modelling.

    Science.gov (United States)

    Van Thielen, S; Turcanu, C; Camps, J; Keppens, R

    2015-04-01

    This paper presents three approaches to find optimized grids for atmospheric dispersion measurements and calculations in emergency planning. This can be useful for deriving optimal positions for mobile monitoring stations, or help to reduce discretization errors and improve recommendations. Indeed, threshold-based recommendations or conclusions may depend strongly on the shape and size of the grid on which atmospheric dispersion measurements or calculations of pollutants are based. Therefore, relatively sparse grids that retain as much information as possible are required. The grid optimization procedure proposed here is first demonstrated with a simple Gaussian plume model as adopted in atmospheric dispersion calculations, which provides fast calculations. The optimized grids are compared to the Noodplan grid, currently used for emergency planning in Belgium, and to the exact solution. We then demonstrate how it can be used in more realistic dispersion models. Copyright © 2015 Elsevier Ltd. All rights reserved.
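
    The fast demonstration step relies on the closed-form Gaussian plume solution for the ground-level concentration, C(x, y, 0) = Q/(2π u σy σz) · exp(−y²/2σy²) · 2 exp(−H²/2σz²), evaluated on a receptor grid. The sketch below illustrates only that forward calculation; the power-law dispersion coefficients and source parameters are assumptions, not the parameterisation used in the paper.

```python
# Minimal Gaussian plume sketch: ground-level concentration on a receptor grid.
# The sigma_y / sigma_z power-law coefficients are illustrative assumptions for a
# single neutral-like stability class, not the values used in the paper.
import numpy as np

def gaussian_plume_ground(x, y, q=1.0, u=5.0, h=50.0,
                          ay=0.08, by=0.90, az=0.06, bz=0.85):
    """Ground-level (z = 0) concentration at downwind distance x [m] and
    crosswind offset y [m], for release rate q, wind speed u, effective stack
    height h, with sigma = a * x**b dispersion coefficients."""
    x = np.maximum(x, 1.0)                       # avoid x <= 0
    sig_y = ay * x ** by
    sig_z = az * x ** bz
    return (q / (2 * np.pi * u * sig_y * sig_z)
            * np.exp(-y ** 2 / (2 * sig_y ** 2))
            * 2 * np.exp(-h ** 2 / (2 * sig_z ** 2)))   # ground-reflection term

if __name__ == "__main__":
    xs = np.linspace(100, 10_000, 200)           # candidate grid, downwind [m]
    ys = np.linspace(-2_000, 2_000, 101)         # crosswind [m]
    X, Y = np.meshgrid(xs, ys)
    c = gaussian_plume_ground(X, Y)
    i = np.unravel_index(np.argmax(c), c.shape)
    print("maximum ground-level concentration at x =", xs[i[1]], "m")
```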

  1. Approximate dynamic fault tree calculations for modelling water supply risks

    International Nuclear Information System (INIS)

    Lindhe, Andreas; Norberg, Tommy; Rosén, Lars

    2012-01-01

    Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome these limitations, dynamic fault tree (DFT) analysis has been suggested in the literature, along with different approaches for solving DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
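
    The flavour of the approximate calculation can be illustrated with a standard Monte Carlo treatment of repairable components combined through OR and AND gates; the sketch below shows only this generic idea (exponential failure and repair times, system unavailability at a mission time) and not the paper's extended gates or compensation logic. All rates and the mission time are assumptions.

```python
# Minimal Monte Carlo sketch of OR/AND gate unavailability for repairable
# components with exponential failure and repair times. Generic illustration
# only; the extended gates of the paper and the rates below are not from it.
import random

def component_failed_at(t, lam, mu, rng):
    """Simulate an alternating up/down renewal process from time 0 and return
    True if the component is in the failed state at time t."""
    clock, up = 0.0, True
    while clock < t:
        clock += rng.expovariate(lam if up else mu)
        if clock < t:
            up = not up
    return not up

def system_unavailability(t, lams, mus, gate="OR", n_samples=20_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        states = [component_failed_at(t, lam, mu, rng)
                  for lam, mu in zip(lams, mus)]
        failures += any(states) if gate == "OR" else all(states)
    return failures / n_samples

if __name__ == "__main__":
    lams = [1e-3, 2e-3]      # failure rates [1/h], illustrative
    mus = [0.1, 0.05]        # repair rates [1/h], illustrative
    t = 1000.0               # mission time [h]
    print("OR gate :", system_unavailability(t, lams, mus, "OR"))
    print("AND gate:", system_unavailability(t, lams, mus, "AND"))
```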

  2. Summary of Calculation Performed with NPIC's New FGR Model

    International Nuclear Information System (INIS)

    Jiao Yongjun; Li Wenjie; Zhou Yi; Xing Shuo

    2013-01-01

    1. Introduction: The NPIC modeling group has performed calculations on both real cases and idealized cases in the FUMEX II and III data packages. The performance code used is COPERNIC 2.4, developed by AREVA, to which a new FGR model has been added. Therefore, a comparison study has been made between the Bernard model (V2.2) and the new model, in order to evaluate the performance of the new model. As mentioned before, the focus of our study lies in thermal fission gas release, or more specifically the grain boundary bubble behaviors. 2. Calculation method: There are some differences between the calculated burnup and measured burnup in many real cases. Considering that FGR is significantly dependent on rod average burnup, a multiplicative factor on fuel rod linear power, i.e. FQE, is applied and adjusted in the calculations to ensure the calculated burnup generally equals the measured burnup. Also, a multiplicative factor on upper plenum volume, i.e. AOPL, is applied and adjusted in the calculations to ensure the calculated free volume equals the pre-irradiation total free volume in the rod. Cladding temperatures were entered if they were provided; otherwise the cladding temperatures were calculated from the inlet coolant temperature. The results are presented in Excel form as an attachment of this paper, including thirteen real cases and three idealized cases. Three real cases (BK353, BK370, US PWR TSQ022) are excluded from validation of the new model, because the predicted athermal release is even greater than the measured release, which would imply a negative thermal release. This is obviously not suitable for validation, but the results are also listed in Excel (sheet 'Cases excluded from validation'). 3. Results: The results of 10 real cases are listed in sheet 'Steady case summary', which summarizes measured and predicted values of Bu and FGR for each case, and plots the M/P ratio of the FGR calculation by different models in COPERNIC. A statistical comparison was also made with three indexes, i

  3. Model calculations of groundwater conditions on Sternoe peninsula

    International Nuclear Information System (INIS)

    Axelsson, C.-L.; Carlsson, L.

    1979-09-01

    The groundwater conditions within the bedrock of Sternoe were calculated by the use of a two-dimensional FEM model. Five sections were laid out over the area. The sections had a depth of five km and lengths between two and six km. First the piezometric head was calculated in two major tectonic zones where the hydraulic conductivity was set to 10⁻⁶ m/s. In the other sections, of which two cross the tectonic zones, the bedrock was assumed to have hydraulic conductivities of 10⁻⁸ m/s in the uppermost 300 m and 10⁻¹¹ m/s in the rest. From the maps of the piezometric head obtained, the flow time was calculated for the groundwater from 500 meters depth to a tectonic zone or to the 300 meters level below the sea. This calculation was performed for two sections, both with and without tectonic zones. Also the influence of groundwater discharge from a well at one point in one of the tectonic zones was calculated. The kinematic porosity was assumed to be 10⁻⁴. The result showed that the flow time varied between 1000 and 500 000 years within the area, with the exception of the nearest 100 m zone to any of the tectonic zones. For further calculations the use of three-dimensional models was proposed. (Auth.)
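
    Flow times of this kind follow from Darcy's law and the kinematic porosity: the seepage velocity is v = K·i/nₑ and the travel time is t = L/v. The few lines below reproduce the order of magnitude with illustrative numbers; the hydraulic gradient and path length are assumptions, while K and the porosity follow the values quoted above.

```python
# Minimal sketch of the travel-time estimate behind such figures:
# Darcy flux q = K * i, seepage velocity v = q / n_e, travel time t = L / v.
# The hydraulic gradient and path length are assumptions; K and porosity follow
# the orders of magnitude quoted in the abstract.
K = 1e-11        # hydraulic conductivity of the deep bedrock [m/s]
i = 1e-3         # hydraulic gradient [-] (assumed)
n_e = 1e-4       # kinematic porosity [-]
L = 200.0        # flow path length from 500 m depth to the 300 m level [m] (assumed)

v = K * i / n_e                          # seepage velocity [m/s]
t_years = L / v / (3600 * 24 * 365)
print(f"seepage velocity: {v:.1e} m/s, travel time: {t_years:.1e} years")
# with these numbers: v ~ 1e-10 m/s, t ~ 6e4 years, within the quoted range
```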

  4. Optimizing the calculation grid for atmospheric dispersion modelling

    International Nuclear Information System (INIS)

    Van Thielen, S.; Turcanu, C.; Camps, J.; Keppens, R.

    2015-01-01

    This paper presents three approaches to find optimized grids for atmospheric dispersion measurements and calculations in emergency planning. This can be useful for deriving optimal positions for mobile monitoring stations, or help to reduce discretization errors and improve recommendations. Indeed, threshold-based recommendations or conclusions may depend strongly on the shape and size of the grid on which atmospheric dispersion measurements or calculations of pollutants are based. Therefore, relatively sparse grids that retain as much information as possible are required. The grid optimization procedure proposed here is first demonstrated with a simple Gaussian plume model as adopted in atmospheric dispersion calculations, which provides fast calculations. The optimized grids are compared to the Noodplan grid, currently used for emergency planning in Belgium, and to the exact solution. We then demonstrate how it can be used in more realistic dispersion models. - Highlights: • Grid points for atmospheric dispersion calculations are optimized. • Using heuristics the optimization problem results in different grid shapes. • Comparison between optimized models and the Noodplan grid is performed

  5. Precision calculations in supersymmetric extensions of the Standard Model

    International Nuclear Information System (INIS)

    Slavich, P.

    2013-01-01

    This dissertation is organized as follows: in the next chapter I will summarize the structure of the supersymmetric extensions of the Standard Model (SM), namely the MSSM (Minimal Supersymmetric Standard Model) and the NMSSM (Next-to-Minimal Supersymmetric Standard Model); I will provide a brief overview of different patterns of SUSY (supersymmetry) breaking and discuss some issues on the renormalization of the input parameters that are common to all calculations of higher-order corrections in SUSY models. In chapter 3 I will review and describe computations on the production of MSSM Higgs bosons in gluon fusion. In chapter 4 I will review results on the radiative corrections to the Higgs boson masses in the NMSSM. In chapter 5 I will review the calculation of BR(B → X_s γ) in the MSSM with Minimal Flavor Violation (MFV). Finally, in chapter 6 I will briefly summarize the outlook of my future research. (author)

  6. Ab initio calculations and modelling of atomic cluster structure

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Lyalin, Andrey G.; Solov'yov, Andrey V.

    2004-01-01

    framework for modelling the fusion process of noble gas clusters is presented. We report the striking correspondence of the peaks in the experimentally measured abundance mass spectra with the peaks in the size-dependence of the second derivative of the binding energy per atom calculated for the chain...... of the noble gas clusters up to 150 atoms....

  7. TTS-Polttopuu - cost calculation model for fuelwood

    International Nuclear Information System (INIS)

    Naett, H.; Ryynaenen, S.

    1999-01-01

    The TTS Institute's Forestry Department has developed a computer-based cost-calculation model, 'TTS-Polttopuu', for the calculation of unit costs and resource needs in the harvesting systems for wood chips and split firewood. The model makes it possible to determine the productivity and device cost per operating hour for each working stage of the harvesting system. The calculation model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and chipping and chopping of longwood at the storage site. This individually operating software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for this cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. (orig.)

  8. A modified calculation model for groundwater flowing to horizontal ...

    Indian Academy of Sciences (India)

    All these valleys are located in the Loess Plateau of northern Shaanxi, China. The existing calculation model for a single horizontal seepage well was built by Wang and Zhang (2007) based on the theory of coupled seepage-pipe flow and equivalent hydraulic conductivity (Chen 1995; Chen and Lin 1998a, 1998b; Chen and.

  9. A kinematic model for calculating the magnitude of angular ...

    African Journals Online (AJOL)

    Keplerian velocity laws imply the existence of velocity shear and shear viscosity within an accretion disk. Due to this viscosity, angular momentum is transferred from the faster moving inner regions to the slower-moving outer regions of the disk. Here we have formulated a model for calculating the magnitude of angular ...

  10. Black Hole Entropy Calculation in a Modified Thin Film Model

    Indian Academy of Sciences (India)

    2016-01-27

    The thin film model is modified to calculate the black hole entropy. The difference from the original method is that the Parikh–Wilczek tunnelling framework is introduced and the self-gravitation of the emission particles is taken into account. In terms of our improvement, if the entropy is still proportional to the ...

  11. Characterization of ryanodine receptor type 1 single channel activity using "on-nucleus" patch clamp.

    Science.gov (United States)

    Wagner, Larry E; Groom, Linda A; Dirksen, Robert T; Yule, David I

    2014-08-01

    In this study, we provide the first description of the biophysical and pharmacological properties of ryanodine receptor type 1 (RyR1) expressed in a native membrane using the on-nucleus configuration of the patch clamp technique. A stable cell line expressing rabbit RyR1 was established (HEK-RyR1) using the FLP-in 293 cell system. In contrast to untransfected cells, RyR1 expression was readily demonstrated by immunoblotting and immunocytochemistry in HEK-RyR1 cells. In addition, the RyR1 agonists 4-CMC and caffeine activated Ca(2+) release that was inhibited by high concentrations of ryanodine. On nucleus patch clamp was performed in nuclei prepared from HEK-RyR1 cells. Raising the [Ca(2+)] in the patch pipette resulted in the appearance of a large conductance cation channel with well resolved kinetics and the absence of prominent subconductance states. Current versus voltage relationships were ohmic and revealed a chord conductance of ∼750pS or 450pS in symmetrical 250mM KCl or CsCl, respectively. The channel activity was markedly enhanced by caffeine and exposure to ryanodine resulted in the appearance of a subconductance state with a conductance ∼40% of the full channel opening with a Po near unity. In total, these properties are entirely consistent with RyR1 channel activity. Exposure of RyR1 channels to cyclic ADP ribose (cADPr), nicotinic acid adenine dinucleotide phosphate (NAADP) or dantrolene did not alter the single channel activity stimulated by Ca(2+), and thus, it is unlikely these molecules directly modulate RyR1 channel activity. In summary, we describe an experimental platform to monitor the single channel properties of RyR channels. We envision that this system will be influential in characterizing disease-associated RyR mutations and the molecular determinants of RyR channel modulation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. The role of hand calculations in ground water flow modeling.

    Science.gov (United States)

    Haitjema, Henk

    2006-01-01

    Most ground water modeling courses focus on the use of computer models and pay little or no attention to traditional analytic solutions to ground water flow problems. This shift in education seems logical. Why waste time learning about the method of images, or why study analytic solutions to one-dimensional or radial flow problems? Computer models solve much more realistic problems and offer sophisticated graphical output, such as contour plots of potentiometric levels and ground water path lines. However, analytic solutions to elementary ground water flow problems do have something to offer over computer models: insight. For instance, an analytic one-dimensional or radial flow solution, in terms of a mathematical expression, may reveal which parameters affect the success of calibrating a computer model and what to expect when changing parameter values. Similarly, solutions for periodic forcing of one-dimensional or radial flow systems have resulted in a simple decision criterion to assess whether or not transient flow modeling is needed. Basic water balance calculations may offer a useful check on computer-generated capture zones for wellhead protection or aquifer remediation. An easily calculated "characteristic leakage length" provides critical insight into surface water and ground water interactions and flow in multi-aquifer systems. The list goes on. Familiarity with elementary analytic solutions and the capability of performing some simple hand calculations can promote appropriate (computer) modeling techniques, avoid unnecessary complexity, improve reliability, and save time and money. Training in basic hand calculations should be an important part of the curriculum of ground water modeling courses.
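
    Two of the hand calculations referred to are easy to reproduce: a water-balance check on a well capture zone (pumping rate divided by areal recharge) and the characteristic leakage length λ = √(T·c) of a semi-confined aquifer. The sketch below uses illustrative numbers only; it is an example of the kind of check advocated here, not material from the article.

```python
# Minimal sketch of two hand calculations of the kind advocated in the article:
# (1) a water-balance check on a steady-state well capture zone, and
# (2) the characteristic leakage length lambda = sqrt(T * c) of a semi-confined
#     aquifer. All numbers are illustrative assumptions.
import math

# --- water balance check ------------------------------------------------------
Q = 1000.0           # pumping rate [m^3/day]
recharge = 0.0005    # areal recharge [m/day]
capture_area = Q / recharge                  # steady-state capture area [m^2]
print(f"capture area implied by the water balance: {capture_area / 1e6:.2f} km^2")

# --- characteristic leakage length ---------------------------------------------
T = 500.0            # aquifer transmissivity [m^2/day]
c = 2000.0           # resistance of the leaky layer (thickness / K') [days]
lam = math.sqrt(T * c)
print(f"characteristic leakage length: {lam:.0f} m")
```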

  13. Nuclear reaction matrix calculations with a shell-model Q

    International Nuclear Information System (INIS)

    Barrett, B.R.; McCarthy, R.J.

    1976-01-01

    The Barrett-Hewitt-McCarthy (BHM) method for calculating the nuclear reaction matrix G is used to compute shell-model matrix elements for A = 18 nuclei. The energy denominators in intermediate states containing one unoccupied single-particle (s.p.) state and one valence s.p. state are treated correctly, in contrast to previous calculations. These corrections are not important for valence-shell matrix elements but are found to lead to relatively large changes in cross-shell matrix elements involved in core-polarization diagrams. (orig.) [de]

  14. Reactor burning calculations for a model reversed field pattern

    International Nuclear Information System (INIS)

    Yeung, B.C.; Long, J.W.; Newton, A.A.

    1976-01-01

    An outline pinch reactor scheme and a study of electrical engineering problems for cyclic operation have been further developed, and a comparison of physics aspects and capital cost has been made with the Tokamak, which has many similar features. Since the properties of reversed field pinches (RFP) are now better understood, more detailed studies have been made and first results of burn calculations are given. Results of the burn calculations are summarised. These are based on a D-T burning model used for the Tokamak, with changes appropriate for the RFP. (U.K.)

  15. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis aims at addressing reliability and cost issues in the numerical simulation of flows in the transition regime. The first step has been to reduce calculation cost and memory space for the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation domain decomposition was far more efficient. Due to reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of the thermodynamic values are described for the mono-atomic case. Its numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all systems, and which naturally expresses boundary conditions. The validation of the obtained 14-moment-based model is performed on shock problems and on Couette flows [fr]

  16. Modelling of Control Bars in Calculations of Boiling Water Reactors

    International Nuclear Information System (INIS)

    Khlaifi, A.; Buiron, L.

    2004-01-01

    The core of a nuclear reactor is generally composed of a regular lattice of fissile-material assemblies in which neutrons are produced. In general, the fission energy is extracted by a fluid that cools the assemblies. A reflector is arranged around the assemblies, outside the reactor core, to reduce neutron leakage. Different reactivity mechanisms are generally necessary to control the chain reaction. A boiling water reactor is manoeuvred by controlling the insertion of absorber rods at various locations in the core. While unrodded assembly calculations are well known and mastered, blocked (rodded) assembly neutronic calculations are delicate and are often treated case by case in present studies [1]. Answering the question of how to model control bars in a boiling water reactor requires choosing a representation level for each chain of variables, the physical model and its governing equations, etc. The aim of this study is to select the most suitable parameters for calculating the blocked assemblies of a boiling water reactor. This is done for a range of representative configurations of these reactors and of the absorbing media used, in order to illustrate modelling strategies in the case of an industrial calculation. (authors)

  17. Single-Channel Blind Estimation of Arterial Input Function and Tissue Impulse Response in DCE-MRI

    Czech Academy of Sciences Publication Activity Database

    Taxt, T.; Jiřík, Radovan; Rygh, C. B.; Grüner, R.; Bartoš, M.; Andersen, E.; Curry, F. R.; Reed, R. K.

    2012-01-01

    Roč. 59, č. 4 (2012), s. 1012-1021 ISSN 0018-9294 Institutional support: RVO:68081731 Keywords : arterial input function (AIF) * blind deconvolution * dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) * multichannel * perfusion * single channel Subject RIV: BH - Optics, Masers, Lasers Impact factor: 2.348, year: 2012

  18. Single channel 112Gbit/sec PAM4 at 56Gbaud with digital signal processing for data centers applications.

    Science.gov (United States)

    Sadot, Dan; Dorman, G; Gorshtein, Albert; Sonkin, Eduard; Vidal, Or

    2015-01-26

    112Gbit/sec DSP-based single channel transmission of PAM4 at 56Gbaud over 15GHz of effective analog bandwidth is experimentally demonstrated. The DSP enables use of mature 25G optoelectronics for 2-10km datacenter intra-connections, and 8Tbit/sec over 80km interconnections between data centers.

  19. Application of nuclear models to neutron nuclear cross section calculations

    International Nuclear Information System (INIS)

    Young, P.G.

    1983-01-01

    Nuclear theory is used increasingly to supplement and extend the nuclear data base that is available for applied studies. Areas where theoretical calculations are most important include the determination of neutron cross sections for unstable fission products and transactinide nuclei in fission reactor or nuclear waste calculations and for meeting the extensive dosimetry, activation, and neutronic data needs associated with fusion reactor development, especially for neutron energies above 14 MeV. Considerable progress has been made in the use of nuclear models for data evaluation and, particularly, in the methods used to derive physically meaningful parameters for model calculations. Theoretical studies frequently involve use of spherical and deformed optical models, Hauser-Feshbach statistical theory, preequilibrium theory, direct-reaction theory and often make use of gamma-ray strength function models and phenomenological (or microscopic) level density prescriptions. The development, application and limitations of nuclear models for data evaluation are discussed in this paper, with emphasis on the 0.1 to 50 MeV energy range. (Auth.)

  20. Miniature, Single Channel, Memory-Based, High-G Acceleration Recorder (Millipen)

    International Nuclear Information System (INIS)

    Rohwer, Tedd A.

    1999-01-01

    The Instrumentation and Telemetry Departments at Sandia National Laboratories have been instrumenting earth penetrators for over thirty years. Recorded acceleration data is used to quantify penetrator performance. Penetrator testing has become more difficult as desired impact velocities have increased. This results in the need for small-scale test vehicles and miniature instrumentation. A miniature recorder will allow penetrator diameters to significantly decrease, opening the window of testable parameters. Full-scale test vehicles will also benefit from miniature recorders by using a less intrusive system to instrument internal arming, fusing, and firing components. This single channel concept is the latest design in an ongoing effort to miniaturize the size and reduce the power requirement of acceleration instrumentation. A micro-controller/memory based system provides the data acquisition, signal conditioning, power regulation, and data storage. This architecture allows the recorder, including both sensor and electronics, to occupy a volume of less than 1.5 cubic inches, draw less than 200mW of power, and record 15kHz data up to 40,000 gs. This paper will describe the development and operation of this miniature acceleration recorder

  1. A single-channel 10-bit 160 MS/s SAR ADC in 65 nm CMOS

    Science.gov (United States)

    Yuxiao, Lu; Lu, Sun; Zhe, Li; Jianjun, Zhou

    2014-04-01

    This paper demonstrates a single-channel 10-bit 160 MS/s successive-approximation-register (SAR) analog-to-digital converter (ADC) in a 65 nm CMOS process with a 1.2 V supply voltage. To achieve high speed, a new window-opening logic based on the asynchronous SAR algorithm is proposed to minimize the logic delay, and a partial set-and-down DAC with binary redundancy bits is presented to reduce the dynamic comparator offset and accelerate the DAC settling. Besides, a new bootstrapped switch with a pre-charge phase is adopted in the track and hold circuits to increase speed and reduce area. The presented ADC achieves a 52.9 dB signal-to-noise-and-distortion ratio and a 65 dB spurious-free dynamic range measured with a 30 MHz input signal at a 160 MHz clock. The power consumption is 9.5 mW and a core die area of 250 × 200 μm² is occupied.

  2. Single-channel color image encryption based on iterative fractional Fourier transform and chaos

    Science.gov (United States)

    Sui, Liansheng; Gao, Bo

    2013-06-01

    A single-channel color image encryption is proposed based on iterative fractional Fourier transform and a two-coupled logistic map. Firstly, a gray scale image is constituted with the three channels of the color image, and permuted by a sequence of chaotic pairs generated by the two-coupled logistic map. Secondly, the permutation image is decomposed into three components again. Thirdly, the first two components are encrypted into a single one based on the iterative fractional Fourier transform. Similarly, the interim image and the third component are encrypted into the final gray scale ciphertext with a stationary white noise distribution, which has a camouflage property to some extent. In the process of encryption and decryption, chaotic permutation makes the resulting image nonlinear and disordered both in the spatial domain and the frequency domain, and the proposed iterative fractional Fourier transform algorithm has a faster convergence speed. Additionally, the encryption scheme enlarges the key space of the cryptosystem. Simulation results and security analysis verify the feasibility and effectiveness of this method.
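
    The chaotic permutation stage can be sketched independently of the optical part: iterate a two-coupled logistic map, rank the resulting sequence to obtain a permutation of pixel indices, and apply it to the three stacked channels. The sketch below shows only that stage; the map form, coupling constant, control parameters and keys are illustrative assumptions, and the iterative fractional Fourier encoding is omitted.

```python
# Minimal sketch of a chaotic permutation stage only (the iterative fractional
# Fourier encoding is omitted). The coupled-logistic-map form and parameters are
# illustrative assumptions, not the values used in the paper.
import numpy as np

def coupled_logistic(n, x0=0.31, y0=0.87, mu1=3.99, mu2=3.98, gamma=0.1):
    """Generate n samples from a two-coupled logistic map (assumed form)."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x_new = mu1 * x * (1 - x) + gamma * (y - x)
        y_new = mu2 * y * (1 - y) + gamma * (x - y)
        x, y = x_new % 1.0, y_new % 1.0        # keep the orbit inside (0, 1)
        xs[i] = x
    return xs

def permute_image(img, key=(0.31, 0.87)):
    """Permute the pixels of a stacked gray-scale image with a chaos-ranked index."""
    flat = img.reshape(-1)
    order = np.argsort(coupled_logistic(flat.size, x0=key[0], y0=key[1]))
    return flat[order].reshape(img.shape), order

def unpermute_image(perm_img, order):
    flat = np.empty(perm_img.size, dtype=perm_img.dtype)
    flat[order] = perm_img.reshape(-1)
    return flat.reshape(perm_img.shape)

if __name__ == "__main__":
    rgb = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
    stacked = rgb.transpose(2, 0, 1).reshape(96, 32)   # three channels as one gray image
    scrambled, order = permute_image(stacked)
    restored = unpermute_image(scrambled, order)
    print("lossless permutation:", np.array_equal(restored, stacked))
```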

  3. Single-channel color image encryption using phase retrieve algorithm in fractional Fourier domain

    Science.gov (United States)

    Sui, Liansheng; Xin, Meiting; Tian, Ailing; Jin, Haiyan

    2013-12-01

    A single-channel color image encryption is proposed based on a phase retrieval algorithm and a two-coupled logistic map. Firstly, a gray scale image is constituted with the three channels of the color image, and then permuted by a sequence of chaotic pairs generated by the two-coupled logistic map. Secondly, the permutation image is decomposed into three new components, where each component is encoded into a phase-only function in the fractional Fourier domain with a phase retrieval algorithm that is proposed based on the iterative fractional Fourier transform. Finally, an interim image is formed by the combination of these phase-only functions and encrypted into the final gray scale ciphertext with a stationary white noise distribution by using chaotic diffusion, which has a camouflage property to some extent. In the process of encryption and decryption, chaotic permutation and diffusion make the resultant image nonlinear and disordered both in the spatial domain and the frequency domain, and the proposed phase iterative algorithm has a faster convergence speed. Additionally, the encryption scheme enlarges the key space of the cryptosystem. Simulation results and security analysis verify the feasibility and effectiveness of this method.

  4. Single Channel Analysis of Isoflurane and Ethanol Enhancement of Taurine-Activated Glycine Receptors.

    Science.gov (United States)

    Kirson, Dean; Todorovic, Jelena; Mihic, S John

    2018-01-01

    The amino acid taurine, released by astrocytes in many brain regions such as the nucleus accumbens and prefrontal cortex, is an endogenous ligand acting on glycine receptors (GlyRs). Taurine is a partial agonist with an efficacy significantly lower than that of glycine. Allosteric modulators such as ethanol and isoflurane produce leftward shifts of glycine concentration-response curves but have no effects at saturating glycine concentrations. In contrast, in whole-cell electrophysiology studies these modulators increase the effects of saturating taurine concentrations. A number of possible mechanisms may explain these enhancing effects, including modulator effects on conductance, channel open times, or channel closed times. We used outside-out patch-clamp single channel electrophysiology to investigate the mechanism of action of 200 mM ethanol and 0.55 mM isoflurane in enhancing the effects of a saturating concentration of taurine. Neither modulator enhanced taurine-mediated conductance. Isoflurane increased the probability of channel opening. Isoflurane also increased the lifetimes of the two shortest open dwell times, while both agents decreased the likelihood of occurrence of the longest-lived intracluster channel-closing events. The mechanism of enhancement of GlyR functioning by these modulators is dependent on the efficacy of the agonist activating the receptor and the concentration of agonist tested. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.

  5. Portable single channel analyzer incorporated with a GM counter for radiation protection

    International Nuclear Information System (INIS)

    Chenghsin Mao

    1988-01-01

    A compact single channel analyzer incorporated with a GM counter has been developed. It measures 8.7 cm (W) x 22.2 cm (L) x 4.4 cm (H) and weighs 0.58 kg excluding the detectors. An adjustable high voltage of 0-1000 V is included with an error of ± 0.1%, powered by three 9 V mercury batteries. Both the upper and lower level discriminators are set at 0 - 5 V with an error of ± 1%. The timer can be set at either 0 - 99 sec or 0 - 99 min with a buzzer alarm. The pulse resolution is 5 μs plus the pulse width. The LCD display has either 3 1/2 or 4 digits. The rise time of the shaping circuit is 1 μs with a bandwidth of 350 kHz. The battery voltage indicator is set at 7.5 V. All integrated circuits are CMOS with low-cost op-amps. Some examples of field applications are given

  6. Single channel planar lipid bilayer recordings of the melittin variant MelP5.

    Science.gov (United States)

    Fennouri, Aziz; Mayer, Simon Finn; Schroeder, Thomas B H; Mayer, Michael

    2017-10-01

    MelP5 is a 26 amino acid peptide derived from melittin, the main active constituent of bee venom, with five amino acid replacements. The pore-forming activity of MelP5 in lipid membranes is attracting attention because MelP5 forms larger pores and induces dye leakage through liposome membranes at a lower concentration than melittin. Studies of MelP5 have so far focused on ensemble measurements of membrane leakage and impedance; here we extend this characterization with an electrophysiological comparison between MelP5 and melittin using planar lipid bilayer recordings. These experiments reveal that MelP5 pores in lipid membranes composed of 3:1 phosphatidylcholine:cholesterol consist of an average of 10 to 12 monomers compared to an average of 3 to 9 monomers for melittin. Both peptides form transient pores with dynamically varying conductance values similar to previous findings for melittin, but MelP5 occasionally also forms stable, well-defined pores with single channel conductance values that vary greatly and range from 50 to 3000 pS in an electrolyte solution containing 100 mM KCl. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Effect of intermittent hypoxic training on hypoxia tolerance based on single-channel EEG.

    Science.gov (United States)

    Zhang, Tinglin; Wang, You; Li, Guang

    2016-03-23

    A single-channel algorithm was proposed in order to study the effect of intermittent hypoxic training on hypoxia tolerance based on EEG patterns. The EEG was decomposed by ensemble empirical mode decomposition into a finite number of intrinsic mode functions (IMFs) based on the intrinsic local characteristic time scale. Analytic amplitude, analytic frequency, and the recurrence property quantified by recurrence quantification analysis were explored on the IMFs, and the first two scales revealed differences between normal EEG and hypoxia EEG. The classification accuracy of hypoxia EEG versus normal EEG could reach 67.8% before the decline of neurobehavioral ability, which indicates that the hypoxia EEG pattern could be detected at an early stage. The classification accuracy of hypoxia EEG versus normal EEG increased with time, and the deepening intensity of hypoxia was observed as a regular shift of the hypoxia EEG pattern with time in a three-dimensional subspace. The reduced shift and classification accuracy after intermittent hypoxic training indicate that hypoxia tolerance was enhanced. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Investigation of Transformer Model for TRV Calculation by EMTP

    Science.gov (United States)

    Thein, Myo Min; Ikeda, Hisatoshi; Harada, Katsuhiko; Ohtsuka, Shinya; Hikita, Masayuki; Haginomori, Eiichi; Koshiduka, Tadashi

    Analysis of the EMTP transformer model was performed on a 4 kVA two-winding low-voltage transformer with the current injection (CIJ) measurement method to study the transient recovery voltage (TRV) under the transformer-limited fault (TLF) current interrupting condition. The tested transformer's impedance was measured with a frequency response analyzer (FRA). From the FRA measurement graphs, the leakage inductance, stray capacitance and resistance were calculated, and the EMTP transformer model was constructed with those values. The EMTP simulation was done for the current injection circuit using this transformer model. The experimental and simulation results show reasonable agreement.

  9. A note on vector flux models for radiation dose calculations

    International Nuclear Information System (INIS)

    Kern, J.W.

    1994-01-01

    This paper reviews and extends modelling of anisotropic fluxes for radiation belt protons to provide closed-form equations for vector proton fluxes and proton flux anisotropy in terms of standard omnidirectional flux models. These equations provide a flexible alternative to the date-based vector flux models currently available. At higher energies, anisotropy of trapped proton flux in the upper atmosphere depends strongly on the variation of atmospheric density with altitude. Calculations of proton flux anisotropies using present models require specification of the average atmospheric density along trapped particle trajectories and its variation with mirror point altitude. For an isothermal atmosphere, calculations show that in a dipole magnetic field, the scale height of this trajectory-averaged density closely approximates the scale height of the atmosphere at the mirror point of the trapped particle. However, for the earth's magnetic field, the altitudes of mirror points vary for protons drifting in longitude. This results in a small increase in longitude-averaged scale heights compared to the atmospheric scale heights at minimum mirror point altitudes. The trajectory-averaged scale heights are increased by about 10-20% over scale heights from standard atmosphere models for protons mirroring at altitudes less than 500 km in the South Atlantic Anomaly. Atmospheric losses of protons in the geomagnetic field minimum in the South Atlantic Anomaly control proton flux anisotropies of interest for radiation studies in low earth orbit. Standard atmosphere models provide corrections for diurnal, seasonal and solar activity-driven variations. Thus, determination of an "equilibrium" model of trapped proton fluxes of a given energy requires using a scale height that is time-averaged over the lifetime of the protons. The trajectory-averaged atmospheric densities calculated here lead to estimates for trapped proton lifetimes. These lifetimes provide appropriate time

  10. TTS-Polttopuu - cost calculation model for fuelwood

    International Nuclear Information System (INIS)

    Naett, H.; Ryynaenen, S.

    1998-01-01

    The TTS Institute's Forestry Department has developed a computer-based cost calculation model, 'TTS-Polttopuu', for the calculation of unit costs and resource needs of harvesting systems for wood chips and split firewood. The model makes it possible to determine the productivity and device cost per operating hour for each working stage of the harvesting system. The calculation model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and the chipping and chopping of longwood at the storage site. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for this cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY Research Programme. (orig.)

  11. The EDF/SEPTEN crisis team calculation tools and models

    International Nuclear Information System (INIS)

    De Magondeaux, B.; Grimaldi, X.

    1993-01-01

    Electricite de France (EDF) has developed a set of simplified tools and models called TOUTEC and CRISALIDE which are intended for use by the French utility's National Crisis Team to perform diagnosis and prognosis during an emergency situation. As a severe accident could have important radiological consequences, the method is focused on diagnosing the state of the safety barriers and on predicting their behaviour. These tools allow the crisis team to provide public authorities with information on the radiological risk and to give advice on managing the accident on the damaged unit. At a first level, TOUTEC is intended to complement the handbook with simplified calculation models and predefined relationships, avoiding tedious calculation under stress conditions. The main items are the calculation of the primary circuit breach size and the evaluation of hydrogen overpressurization. The set of models called CRISALIDE is devoted to evaluating the following critical parameters: the delay before core uncovery (which would signify more severe consequences if it occurs), the containment pressure behaviour and, finally, the source term. With these models, the crisis team becomes able to take into account combinations of boundary conditions according to the availability of safety and auxiliary systems

  12. Use of the Strong Collision Model to Calculate Spin Relaxation

    Science.gov (United States)

    Wang, D.; Chow, K. H.; Smadella, M.; Hossain, M. D.; MacFarlane, W. A.; Morris, G. D.; Ofer, O.; Morenzoni, E.; Salman, Z.; Saadaoui, H.; Song, Q.; Kiefl, R. F.

    The strong collision model is used to calculate spin relaxation of a muon or polarized radioactive nucleus in contact with a fluctuating environment. We show that on a time scale much longer than the mean time between collisions (fluctuations) the longitudinal polarization decays exponentially with a relaxation rate equal to a sum of Lorentzians, one for each frequency component in the static polarization function p_s(t).

  13. Model and calculation of in situ stresses in anisotropic formations

    Energy Technology Data Exchange (ETDEWEB)

    Yuezhi, W.; Zijun, L.; Lixin, H. [Jianghan Petroleum Institute (China)]

    1997-08-01

    In situ stresses in transversely isotropic material in relation to wellbore stability have been investigated. Equations for three horizontal in-situ stresses and a new formation fracture pressure model were described, and the methodology for determining the elastic parameters of anisotropic rocks in the laboratory was outlined. Results indicate significantly smaller differences between theoretically calculated pressures and actual formation pressures than results obtained by using the isotropic method. Implications for improvements in drilling efficiency were reviewed. 13 refs., 6 figs.

  14. Calculation of relativistic model stars using Regge calculus

    International Nuclear Information System (INIS)

    Porter, J.

    1987-01-01

    A new approach to the Regge calculus, developed in a previous paper, is used in conjunction with the velocity potential version of relativistic fluid dynamics due to Schutz [1970, Phys. Rev., D, 2, 2762] to calculate relativistic model stars. The results are compared with those obtained when the Tolman-Oppenheimer-Volkov equations are solved by other numerical methods. The agreement is found to be excellent. (author)

  15. Structure-dynamic model verification calculation of PWR 5 tests

    International Nuclear Information System (INIS)

    Engel, R.

    1980-02-01

    Within reactor safety research project RS 16 B of the German Federal Ministry of Research and Technology (BMFT), blowdown experiments are conducted at Battelle Institut e.V. Frankfurt/Main using a model reactor pressure vessel with a height of 11.2 m and internals corresponding to those in a PWR. In the present report the dynamic loading on the pressure vessel internals (upper perforated plate and barrel suspension) during the DWR 5 experiment is calculated by means of a vertical and horizontal dynamic model using the CESHOCK code. The equations of motion are resolved by direct integration. (orig./RW) [de

  16. Mathematical model of kinetostatic calculation of flat lever mechanisms

    Directory of Open Access Journals (Sweden)

    A. S. Sidorenko

    2016-01-01

    Full Text Available Currently, the widely used graphical-analytical methods of analysis are largely obsolete, having been replaced by various analytical methods using computer technology. Of particular interest, therefore, is the development of a mathematical model of the kinetostatic calculation of mechanisms in the form of a library of calculation procedures for all two-link Assur groups (GA) of the second class and for the primary link. Before calling the procedure that computes all the forces in the kinematic pairs, one needs to compute the inertial forces, the moments of the inertia forces and all external forces and moments acting on the given GA. To this end, design diagrams for the force analysis are shown for each type of GA of the second class, as well as for the initial link. The reactions in the internal and external kinematic pairs are found from the equilibrium conditions, taking into account the forces of inertia and the moments of the inertia forces (d'Alembert's principle). The kinetostatic equations thus obtained, owing to their generality, are solved by Cramer's rule. In this way, for each GA of the second class all six unknowns are found: the forces in the kinematic pairs, the directions of these forces, and the force arms. If the kinetostatics of a mechanism with two GA attached in parallel to the initial link is studied, the force acting on the primary link is the geometric sum of the forces from the discarded GA. Thus, a mathematical model of the kinetostatic calculation of mechanisms is obtained in the form of a library of mathematical procedures for determining the reactions of all GA of the second class. This mathematical model is relatively simple to implement in software.
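
    Since the abstract above is a rough translation, the computational core may be easier to see in code. Below is a minimal sketch of solving the six kinetostatic unknowns of one group by Cramer's rule; the coefficient matrix and right-hand side are arbitrary placeholders, whereas in practice they depend on the group geometry and applied loads.

      import numpy as np

      def solve_cramer(A, b):
          """Solve A x = b by Cramer's rule (A is the 6x6 kinetostatic coefficient matrix)."""
          A, b = np.asarray(A, float), np.asarray(b, float)
          det_A = np.linalg.det(A)
          if abs(det_A) < 1e-12:
              raise ValueError("singular system: mechanism in a special configuration")
          x = np.empty(len(b))
          for i in range(len(b)):
              Ai = A.copy()
              Ai[:, i] = b                  # replace the i-th column by the right-hand side
              x[i] = np.linalg.det(Ai) / det_A
          return x                          # six unknowns: joint reactions, directions, arms

      # Arbitrary well-conditioned placeholder system:
      A = np.eye(6) + 0.1 * np.ones((6, 6))
      b = np.arange(1.0, 7.0)
      print(solve_cramer(A, b))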

  17. Freight Calculation Model: A Case Study of Coal Distribution

    Science.gov (United States)

    Yunianto, I. T.; Lazuardi, S. D.; Hadi, F.

    2018-03-01

    Coal is one of the energy alternatives that has been used as an energy source for several power plants in Indonesia. Its transportation from coal sites to power plant locations requires eligible shipping line services that are able to provide the best freight rate. Therefore, this study aims to obtain standardized formulations for determining the ocean freight, especially for coal distribution, based on the theoretical concept. The freight calculation model considers three alternative transport modes commonly used in coal distribution: tug-barge, vessel and self-propelled barge. The results show that two cost components are dominant in determining the value of the freight, with their proportion reaching 90% or even more, namely time charter hire and fuel cost. Moreover, three main factors have significant impacts on the freight calculation: waiting time at ports, time charter rate and fuel oil price.
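
    To make the described cost structure concrete, a back-of-the-envelope sketch in the spirit of the study follows; every input figure is a placeholder, not a value from the paper.

      def freight_per_tonne(tc_rate_per_day, voyage_days, waiting_days,
                            fuel_tonnes_per_day, fuel_price_per_tonne,
                            port_costs, cargo_tonnes):
          """Ocean freight rate (per tonne of coal) for one voyage."""
          charter_hire = tc_rate_per_day * (voyage_days + waiting_days)
          fuel_cost = fuel_tonnes_per_day * voyage_days * fuel_price_per_tonne
          total = charter_hire + fuel_cost + port_costs
          return total / cargo_tonnes

      # Placeholder figures for a tug-barge set; charter hire and fuel dominate the total.
      rate = freight_per_tonne(tc_rate_per_day=3500, voyage_days=6, waiting_days=3,
                               fuel_tonnes_per_day=4, fuel_price_per_tonne=550,
                               port_costs=6000, cargo_tonnes=8000)
      print(round(rate, 2))   # freight per tonne of coal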

  18. Improved SVR Model for Multi-Layer Buildup Factor Calculation

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2006-01-01

    The accuracy of the point kernel method applied in gamma ray dose rate calculations in shielding design and radiation safety analysis is limited by the accuracy of the buildup factors used in the calculations. Although buildup factors for single-layer shields are well defined and understood, buildup factors for stratified shields represent a complex physical problem that is hard to express in mathematical terms. The traditional approach for expressing buildup factors of multi-layer shields is through semi-empirical formulas obtained by fitting the results of transport theory or Monte Carlo calculations. Such an approach requires an ad-hoc definition of the fitting function and often results in numerous, usually inadequately explained and defined, correction factors added to the final empirical formula. Moreover, the resulting formulas are generally limited to a small number of predefined combinations of materials within a relatively small range of gamma ray energies and shield thicknesses. Recently, a new approach has been suggested by the authors involving a machine learning technique called Support Vector Machines, used in its regression form, i.e., Support Vector Regression (SVR). Preliminary investigations performed for double-layer shields revealed the great potential of the method, but also pointed out some drawbacks of the developed model, mostly related to the selection of one of the parameters describing the problem (the material atomic number), and to the way the model was designed to evolve during the learning process. It is the aim of this paper to introduce a new parameter (the single material buildup factor) that is to replace the existing material atomic number as an input parameter. A comparison of the two models generated with the different input parameters has been performed. The second goal is to improve the evolution process of learning, i.e., the experimental computational procedure that provides a framework for automated construction of complex regression models of predefined
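
    A minimal sketch of the regression step, using the scikit-learn SVR implementation, is given below. The feature set (gamma energy, layer thicknesses and the proposed single-material buildup factor), the kernel settings and the synthetic training data are assumptions for illustration; the authors train on transport-theory/Monte Carlo results instead.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Illustrative features per sample: gamma energy [MeV], thickness of each layer [mfp],
      # and the single-material buildup factor proposed as a new input parameter.
      rng = np.random.default_rng(0)
      X = rng.uniform([0.5, 0.1, 0.1, 1.0], [10.0, 5.0, 5.0, 50.0], size=(200, 4))
      y = 1.0 + 0.5 * X[:, 1] + 0.3 * X[:, 2] + 0.02 * X[:, 3]   # synthetic stand-in targets

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
      model.fit(X, y)
      print(model.predict(X[:3]))   # predicted multi-layer buildup factors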

  19. 2HDMC — two-Higgs-doublet model calculator

    Science.gov (United States)

    Eriksson, David; Rathsman, Johan; Stål, Oscar

    2010-04-01

    We describe version 1.0.6 of the public C++ code 2HDMC, which can be used to perform calculations in a general, CP-conserving, two-Higgs-doublet model (2HDM). The program features simple conversion between different parametrizations of the 2HDM potential, a flexible Yukawa sector specification with choices of different Z-symmetries or more general couplings, a decay library including all two-body — and some three-body — decay modes for the Higgs bosons, and the possibility to calculate observables of interest for constraining the 2HDM parameter space, as well as theoretical constraints from positivity and unitarity. The latest version of the 2HDMC code and full documentation is available from: http://www.isv.uu.se/thep/MC/2HDMC. New version program summaryProgram title: 2HDMC Catalogue identifier: AEFI_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFI_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL No. of lines in distributed program, including test data, etc.: 12 110 No. of bytes in distributed program, including test data, etc.: 92 731 Distribution format: tar.gz Programming language: C++ Computer: Any computer running Linux Operating system: Linux RAM: 5 Mb Catalogue identifier of previous version: AEFI_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2010) 189 Classification: 11.1 External routines: GNU Scientific Library ( http://www.gnu.org/software/gsl/) Does the new version supersede the previous version?: Yes Nature of problem: Determining properties of the potential, calculation of mass spectrum, couplings, decay widths, oblique parameters, muon g-2, and collider constraints in a general two-Higgs-doublet model. Solution method: From arbitrary potential and Yukawa sector, tree-level relations are used to determine Higgs masses and couplings. Decay widths are calculated at leading order, including FCNC decays when applicable. Decays to off

  20. EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION

    Directory of Open Access Journals (Sweden)

    André Carlos Silva

    2012-12-01

    Full Text Available Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of a cylindrical and a conical section joined together, without any moving parts, and it is capable of performing granular material separation in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most widely used model for the hydrocyclone corrected cut size is that proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper presents a modification of the constant in Plitt's model, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation of 88.2% between experimental and calculated corrected cut sizes, while the correlation obtained using Plitt's model is 11.5%.
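
    The re-fitting of the model constant can be sketched as below; the data are invented and the fit is reduced to a single multiplicative constant on a Plitt-type prediction, so this is only a schematic stand-in for the regression actually performed in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def d50c_model(plitt_d50c, k):
          """Corrected cut size as a re-scaled Plitt prediction: d50c = k * d50c_Plitt."""
          return k * plitt_d50c

      # Placeholder data: Plitt-model predictions vs. experimental corrected cut sizes (um).
      d50c_plitt = np.array([12.0, 18.5, 25.0, 33.0, 41.0])
      d50c_exp = np.array([14.1, 21.6, 30.2, 38.8, 49.5])

      (k_fit,), _ = curve_fit(d50c_model, d50c_plitt, d50c_exp, p0=[1.0])
      print(k_fit)   # adjusted model constant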

  1. Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose

    Science.gov (United States)

    Welton, Andrew; Lee, Kerry

    2010-01-01

    While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than when on the ground. It is important to model pre-flight how shielding designs on spacecraft reduce the radiation effective dose, and to determine whether or not a danger to humans is presented. However, in order to calculate effective dose, dose equivalent calculations are needed. Dose equivalent takes into account the absorbed dose of radiation and the biological effectiveness of the ionizing radiation. This is important in preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed and equivalent dose data for relevant shielding. The shielding geometry used in the dose calculations is a layered slab design, consisting of aluminum, polyethylene, and water. Water is used to simulate the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs with many thicknesses of each material in the slab, making them directly applicable to modern spacecraft shielding geometries.
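
    The bookkeeping from absorbed dose to equivalent and effective dose that the abstract relies on can be sketched as follows; the weighting factors and organ doses below are illustrative placeholders covering only a subset of radiation types and tissues, not values from the FLUKA study.

      # Equivalent dose per organ: H_T = sum_R w_R * D_{T,R};  effective dose: E = sum_T w_T * H_T.
      radiation_wf = {"photon": 1.0, "proton": 2.0, "alpha": 20.0}     # w_R (illustrative subset)
      tissue_wf = {"lung": 0.12, "stomach": 0.12, "skin": 0.01}        # w_T (illustrative subset)

      # Absorbed dose [Gy] per organ and radiation type, e.g. scored by a transport code.
      absorbed = {
          "lung":    {"photon": 1.0e-3, "proton": 4.0e-4},
          "stomach": {"photon": 8.0e-4, "proton": 2.0e-4},
          "skin":    {"photon": 2.0e-3, "proton": 9.0e-4},
      }

      equivalent = {organ: sum(radiation_wf[r] * d for r, d in doses.items())
                    for organ, doses in absorbed.items()}              # H_T in Sv
      effective = sum(tissue_wf[organ] * h for organ, h in equivalent.items())  # partial E in Sv
      print(equivalent, effective)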

  2. Interfacing sensory input with motor output: does the control architecture converge to a serial process along a single channel?

    Science.gov (United States)

    van de Kamp, Cornelis; Gawthrop, Peter J; Gollee, Henrik; Lakie, Martin; Loram, Ian D

    2013-01-01

    Modular organization in control architecture may underlie the versatility of human motor control; but the nature of the interface relating sensory input through task-selection in the space of performance variables to control actions in the space of the elemental variables is currently unknown. Our central question is whether the control architecture converges to a serial process along a single channel? In discrete reaction time experiments, psychologists have firmly associated a serial single channel hypothesis with refractoriness and response selection [psychological refractory period (PRP)]. Recently, we developed a methodology and evidence identifying refractoriness in sustained control of an external single degree-of-freedom system. We hypothesize that multi-segmental whole-body control also shows refractoriness. Eight participants controlled their whole body to ensure a head marker tracked a target as fast and accurately as possible. Analysis showed enhanced delays in response to stimuli with close temporal proximity to the preceding stimulus. Consistent with our preceding work, this evidence is incompatible with control as a linear time invariant process. This evidence is consistent with a single-channel serial ballistic process within the intermittent control paradigm with an intermittent interval of around 0.5 s. A control architecture reproducing intentional human movement control must reproduce refractoriness. Intermittent control is designed to provide computational time for an online optimization process and is appropriate for flexible adaptive control. For human motor control we suggest that parallel sensory input converges to a serial, single channel process involving planning, selection, and temporal inhibition of alternative responses prior to low dimensional motor output. Such design could aid robots to reproduce the flexibility of human control.

  3. A modified microdosimetric kinetic model for relative biological effectiveness calculation

    Science.gov (United States)

    Chen, Yizheng; Li, Junli; Li, Chunyan; Qiu, Rui; Wu, Zhen

    2018-01-01

    In heavy ion therapy, not only the distribution of the physical absorbed dose but also the relative biological effectiveness (RBE)-weighted dose needs to be taken into account. The microdosimetric kinetic model (MKM) can predict the RBE value of heavy ions from the saturation-corrected dose-mean specific energy, and it has been used in clinical treatment planning at the National Institute of Radiological Sciences. In the theoretical assumptions of the MKM, the yield of the primary lesion is independent of the radiation quality, while experimental data show that the DNA double strand break (DSB) yield, considered the main primary lesion, depends on the LET of the particle. In addition, as a result of this assumption the β parameter of the MKM is constant with LET, which also differs from the experimental findings. In this study, a modified MKM, named the MMKM, was developed. Based on the experimental DSB yield of mammalian cells irradiated by ions with different LETs, an RBEDSB (RBE for the induction of DSB) versus LET curve was fitted as the correction factor to modify the primary lesion yield in the MKM, so that the variation of the primary lesion yield with LET is considered in the MMKM. Compared with the present MKM, not only does the α parameter of the MMKM for mono-energetic ions agree with the experimental data, but the β parameter also varies with LET and the overall trend of the experimental results can be reproduced. Then a spread-out Bragg peak (SOBP) distribution of physical dose under carbon ion irradiation was simulated with the Geant4 Monte Carlo code, and the biological and clinical dose distributions were calculated. The results show that the clinical dose distribution calculated with the MMKM is close to that calculated with the MKM within the SOBP, while the discrepancies before and after the SOBP are both within 10%. Moreover, the MKM might overestimate the clinical dose at the distal end of the SOBP by more than 5% because of its
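
    For orientation, the standard MKM dose-response relation that the modification plugs into can be sketched as below; the parameter values are placeholders, and the MMKM correction of the primary lesion yield via the RBEDSB-LET curve is not reproduced here.

      import numpy as np

      def mkm_survival(dose, z1d_star, alpha0=0.13, beta=0.05):
          """MKM cell survival: S = exp(-(alpha0 + beta*z1D*) * D - beta * D**2).

          z1d_star: saturation-corrected dose-mean specific energy [Gy];
          alpha0, beta: placeholder LQ parameters of the reference cell line.
          """
          alpha = alpha0 + beta * z1d_star
          return np.exp(-alpha * dose - beta * dose**2)

      def rbe(dose_ion, z1d_star, alpha_x=0.19, beta_x=0.05):
          """RBE at equal survival: photon dose giving the same S, divided by the ion dose."""
          ln_s = -np.log(mkm_survival(dose_ion, z1d_star))
          dose_x = (-alpha_x + np.sqrt(alpha_x**2 + 4 * beta_x * ln_s)) / (2 * beta_x)
          return dose_x / dose_ion

      print(rbe(dose_ion=2.0, z1d_star=2.5))   # illustrative carbon-ion scenario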

  4. Performance evaluation of an automated single-channel sleep–wake detection algorithm

    Science.gov (United States)

    Kaplan, Richard F; Wang, Ying; Loparo, Kenneth A; Kelly, Monica R; Bootzin, Richard R

    2014-01-01

    Background A need exists, from both a clinical and a research standpoint, for objective sleep measurement systems that are both easy to use and can accurately assess sleep and wake. This study evaluates the output of an automated sleep–wake detection algorithm (Z-ALG) used in the Zmachine (a portable, single-channel, electroencephalographic [EEG] acquisition and analysis system) against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods Overnight laboratory PSG studies from 99 subjects (52 females/47 males, 18–60 years, median age 32.7 years), including both normal sleepers and those with a variety of sleep disorders, were assessed. PSG data obtained from the differential mastoids (A1–A2) were assessed by Z-ALG, which determines sleep versus wake every 30 seconds using low-frequency, intermediate-frequency, and high-frequency and time domain EEG features. PSG data were independently scored by two to four certified PSG technologists, using standard Rechtschaffen and Kales guidelines, and these score files were combined on an epoch-by-epoch basis, using a majority voting rule, to generate a single score file per subject to compare against the Z-ALG output. Both epoch-by-epoch and standard sleep indices (eg, total sleep time, sleep efficiency, latency to persistent sleep, and wake after sleep onset) were compared between the Z-ALG output and the technologist consensus score files. Results Overall, the sensitivity and specificity for detecting sleep using the Z-ALG as compared to the technologist consensus are 95.5% and 92.5%, respectively, across all subjects, and the positive predictive value and the negative predictive value for detecting sleep are 98.0% and 84.2%, respectively. Overall κ agreement is 0.85 (approaching the level of agreement observed among sleep technologists). These results persist when the sleep disorder subgroups are analyzed separately. Conclusion This study demonstrates that the Z-ALG automated sleep
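
    The epoch-by-epoch agreement statistics quoted above are standard confusion-matrix quantities; a small sketch of computing them from paired 30-second epoch scores follows (the example arrays are made up).

      import numpy as np

      def agreement(alg, ref):
          """Sensitivity/specificity/PPV/NPV and Cohen's kappa for sleep (1) vs wake (0) epochs."""
          alg, ref = np.asarray(alg), np.asarray(ref)
          tp = np.sum((alg == 1) & (ref == 1))
          tn = np.sum((alg == 0) & (ref == 0))
          fp = np.sum((alg == 1) & (ref == 0))
          fn = np.sum((alg == 0) & (ref == 1))
          n = tp + tn + fp + fn
          po = (tp + tn) / n                                            # observed agreement
          pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2   # chance agreement
          return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
                  "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
                  "kappa": (po - pe) / (1 - pe)}

      print(agreement(alg=[1, 1, 0, 1, 0, 1], ref=[1, 1, 0, 0, 0, 1]))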

  5. Removal of Muscle Artifacts from Single-Channel EEG Based on Ensemble Empirical Mode Decomposition and Multiset Canonical Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Xun Chen

    2014-01-01

    Full Text Available Electroencephalogram (EEG) recordings are often contaminated with muscle artifacts. This disturbing muscular activity strongly affects the visual analysis of EEG and impairs the results of EEG signal processing such as brain connectivity analysis. If multichannel EEG recordings are available, then there exists a considerable range of methods which can remove or to some extent suppress the distorting effect of such artifacts. Yet to our knowledge, there is no existing means to remove muscle artifacts from single-channel EEG recordings. Moreover, considering the recently increasing need for biomedical signal processing in ambulatory situations, it is crucially important to develop single-channel techniques. In this work, we propose a simple yet effective method to achieve muscle artifact removal from single-channel EEG, by combining ensemble empirical mode decomposition (EEMD) with multiset canonical correlation analysis (MCCA). We demonstrate the performance of the proposed method through numerical simulations and application to real EEG recordings contaminated with muscle artifacts. The proposed method can successfully remove muscle artifacts without altering the recorded underlying EEG activity. It is a promising tool for real-world biomedical signal processing applications.
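
    A strongly simplified sketch of the decomposition-based idea is given below; it uses the PyEMD package for EEMD and a crude lag-1 autocorrelation criterion in place of the multiset canonical correlation analysis used in the paper, so it is illustrative only.

      import numpy as np
      from PyEMD import EEMD   # pip install EMD-signal

      def remove_muscle_artifact(eeg, threshold=0.9):
          """Decompose single-channel EEG with EEMD and drop noise-like IMFs."""
          # Muscle activity is broadband and weakly autocorrelated, so IMFs whose lag-1
          # autocorrelation falls below `threshold` are discarded before reconstruction
          # (a crude stand-in for the MCCA-based component selection in the paper).
          imfs = EEMD().eemd(eeg)
          keep = [imf for imf in imfs if np.corrcoef(imf[:-1], imf[1:])[0, 1] > threshold]
          return np.sum(keep, axis=0) if keep else np.zeros_like(eeg)

      fs = 250                                                   # sampling rate, Hz
      t = np.arange(0, 4, 1 / fs)
      clean = np.sin(2 * np.pi * 10 * t)                         # 10 Hz alpha-like activity
      contaminated = clean + 0.5 * np.random.randn(t.size)       # broadband "muscle-like" noise
      denoised = remove_muscle_artifact(contaminated)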

  6. Basic theory and model calculations of the Venus ionosphere

    Science.gov (United States)

    Nagy, A. F.; Cravens, T. E.; Gombosi, T. I.

    1983-01-01

    An assessment is undertaken of current understanding of the physical and chemical processes that control Venus's ionospheric behavior, in view of the data that has been made available by the Venera and Pioneer Venus missions. Attention is given to the theoretical framework used in general planetary ionosphere studies, especially to the equations describing the controlling physical and chemical processes, and to the current status of the ion composition, density and thermal structure models developed to reproduce observed ionospheric behavior. No truly comprehensive and successful model of the nightside ionosphere has been published. Furthermore, although dayside energy balance calculations yield electron and ion temperature values that are in close agreement with measured values, the energetics of the night side eludes understanding.

  7. Determination of appropriate models and parameters for premixing calculations

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan

    2008-03-15

    The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe over roughly the last ten years to determine the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they were performed with small hot solid spheres, so the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments were performed with molten alumina (Al2O3) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested.

  8. Recent Developments in No-Core Shell-Model Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R

    2009-03-20

    We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary. If that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review, we, in particular, highlight results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or use of effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states is given in the concluding part of the review.

  9. Modeling and calculation of open carbon dioxide refrigeration system

    International Nuclear Information System (INIS)

    Cai, Yufei; Zhu, Chunling; Jiang, Yanlong; Shi, Hong

    2015-01-01

    Highlights: • A model of an open refrigeration system is developed. • The state of CO2 has a great effect on the refrigeration capacity loss by heat transfer. • The refrigeration capacity loss by remaining CO2 has little relation to the state of the CO2. • The calculation results are in agreement with the test results. - Abstract: Based on an analysis of the properties of carbon dioxide, an open carbon dioxide refrigeration system is proposed, which is intended for situations without an external electricity supply. A model of the open refrigeration system is developed, and the relationship between the carbon dioxide storage conditions and the refrigeration capacity is derived. Meanwhile, a test platform is developed to simulate the performance of the open carbon dioxide refrigeration system. By comparing the theoretical calculations and the experimental results, several conclusions are obtained: the refrigeration capacity loss by heat transfer in the supercritical state is much larger than that in the two-phase region, and the refrigeration capacity loss by remaining carbon dioxide has little relation to the state of the carbon dioxide. The results will be helpful to the use of open carbon dioxide refrigeration

  10. Improved perturbative calculations in field theory; Calculation of the mass spectrum and constraints on the supersymmetric standard model; Calculs perturbatifs variationnellement ameliores en theorie des champs; Calcul du spectre et contraintes sur le modele supersymetrique standard

    Energy Technology Data Exchange (ETDEWEB)

    Kneur, J.L

    2006-06-15

    This document is divided into 2 parts. The first part describes a particular re-summation technique for perturbative series that can give non-perturbative results in some cases. We detail some applications in field theory and in condensed matter physics, such as the calculation of the effective temperature of Bose-Einstein condensates. The second part deals with the minimal supersymmetric standard model. We present an accurate calculation of the mass spectrum of supersymmetric particles, a calculation of the relic density of supersymmetric dark matter, and the constraints that we can infer from the models.

  11. Development of nuclear models for higher energy calculations

    International Nuclear Information System (INIS)

    Bozoian, M.; Siciliano, E.R.; Smith, R.D.

    1988-01-01

    Two nuclear models for higher energy calculations have been developed in the regions of high and low energy transfer, respectively. In the former, a relativistic hybrid-type preequilibrium model is compared with data ranging from 60 to 800 MeV. Also, the GNASH exciton preequilibrium-model code with higher energy improvements is compared with data at 200 and 318 MeV. In the region of low energy transfer, nucleon-nucleus scattering is predominantly a direct reaction involving quasi-elastic collisions with one or more target nucleons. We discuss various aspects of quasi-elastic scattering which are important in understanding features of cross sections and spin observables. These include (1) contributions from multi-step processes; (2) damping of the continuum response from 2p-2h excitations; (3) the "optimal" choice of frame in which to evaluate the nucleon-nucleon amplitudes; and (4) the effect of optical and spin-orbit distortions, which are included in a model based on the RPA, the DWIA, and the eikonal approximation. 33 refs., 15 figs

  12. Extracting time-frequency feature of single-channel vastus medialis EMG signals for knee exercise pattern recognition.

    Science.gov (United States)

    Zhang, Yi; Li, Peiyang; Zhu, Xuyang; Su, Steven W; Guo, Qing; Xu, Peng; Yao, Dezhong

    2017-01-01

    The EMG signal indicates the electrophysiological response to activities of daily living, particularly to lower-limb knee exercises. Literature reports have shown numerous benefits of wavelet analysis in EMG feature extraction for pattern recognition. However, its application to typical knee exercises when using only a single EMG channel is limited. In this study, three types of knee exercises, i.e., flexion of the leg up (standing), hip extension from a sitting position (sitting) and gait (walking), are investigated in 14 healthy untrained subjects, while EMG signals from the muscle group of the vastus medialis and the goniometer on the knee joint of the examined leg are synchronously monitored and recorded. Four types of lower-limb motions, including standing, sitting, stance phase of walking, and swing phase of walking, are segmented. The Wavelet Transform (WT) based Singular Value Decomposition (SVD) approach is proposed for the classification of the four lower-limb motions using a single-channel EMG signal from the muscle group of the vastus medialis. Based on lower-limb motions from all subjects, the combination of five-level wavelet decomposition and SVD is used to compose the feature vector. The Support Vector Machine (SVM) is then configured to build a multiple-subject classifier for which the subject-independent accuracy is given across all subjects for the classification of the four types of lower-limb motions. In order to effectively indicate the classification performance, EMG features from the time domain (e.g., Mean Absolute Value (MAV), Root-Mean-Square (RMS), integrated EMG (iEMG), Zero Crossing (ZC)) and the frequency domain (e.g., Mean Frequency (MNF) and Median Frequency (MDF)) are also used to classify lower-limb motions. Five-fold cross validation is performed and repeated fifty times in order to obtain a robust subject-independent accuracy. Results show that the proposed WT-based SVD approach has a classification accuracy of 91.85%±0.88% which
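
    A compact sketch of the described feature pipeline follows; the wavelet family, window length, classifier settings and the random stand-in signals are assumptions, and real use would substitute segmented vastus medialis EMG windows.

      import numpy as np
      import pywt
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def wt_svd_features(emg, wavelet="db4", level=5):
          """Five-level wavelet decomposition followed by SVD of the coefficient matrix."""
          coeffs = pywt.wavedec(emg, wavelet, level=level)
          width = min(len(c) for c in coeffs)
          mat = np.vstack([c[:width] for c in coeffs])   # (level+1) x width coefficient matrix
          return np.linalg.svd(mat, compute_uv=False)    # singular values as the feature vector

      # Placeholder data set: 80 windows of 4 motion classes (random numbers stand in for EMG).
      rng = np.random.default_rng(1)
      X = np.array([wt_svd_features(rng.standard_normal(512)) for _ in range(80)])
      y = np.repeat([0, 1, 2, 3], 20)                    # standing, sitting, stance, swing
      print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())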

  13. Extracting time-frequency feature of single-channel vastus medialis EMG signals for knee exercise pattern recognition.

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    Full Text Available The EMG signal indicates the electrophysiological response to activities of daily living, particularly to lower-limb knee exercises. Literature reports have shown numerous benefits of wavelet analysis in EMG feature extraction for pattern recognition. However, its application to typical knee exercises when using only a single EMG channel is limited. In this study, three types of knee exercises, i.e., flexion of the leg up (standing), hip extension from a sitting position (sitting) and gait (walking), are investigated in 14 healthy untrained subjects, while EMG signals from the muscle group of the vastus medialis and the goniometer on the knee joint of the examined leg are synchronously monitored and recorded. Four types of lower-limb motions, including standing, sitting, stance phase of walking, and swing phase of walking, are segmented. The Wavelet Transform (WT) based Singular Value Decomposition (SVD) approach is proposed for the classification of the four lower-limb motions using a single-channel EMG signal from the muscle group of the vastus medialis. Based on lower-limb motions from all subjects, the combination of five-level wavelet decomposition and SVD is used to compose the feature vector. The Support Vector Machine (SVM) is then configured to build a multiple-subject classifier for which the subject-independent accuracy is given across all subjects for the classification of the four types of lower-limb motions. In order to effectively indicate the classification performance, EMG features from the time domain (e.g., Mean Absolute Value (MAV), Root-Mean-Square (RMS), integrated EMG (iEMG), Zero Crossing (ZC)) and the frequency domain (e.g., Mean Frequency (MNF) and Median Frequency (MDF)) are also used to classify lower-limb motions. Five-fold cross validation is performed and repeated fifty times in order to obtain a robust subject-independent accuracy. Results show that the proposed WT-based SVD approach has a classification accuracy of 91.85%±0

  14. Quantum plasmonics: from jellium models to ab initio calculations

    Directory of Open Access Journals (Sweden)

    Varas Alejandro

    2016-08-01

    Full Text Available Light-matter interaction in plasmonic nanostructures is often treated within the realm of classical optics. However, recent experimental findings show the need to go beyond the classical models to explain and predict the plasmonic response at the nanoscale. A prototypical system is a nanoparticle dimer, extensively studied using both classical and quantum prescriptions. However, only very recently have fully ab initio time-dependent density functional theory (TDDFT) calculations of the optical response of these dimers been carried out. Here, we review the recent work on the impact of the atomic structure on the optical properties of such systems. We show that TDDFT can be an invaluable tool to simulate the time evolution of plasmonic modes, providing fundamental insight into the underlying microscopic mechanisms.

  15. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  16. Calculational models of close-spaced thermionic converters

    International Nuclear Information System (INIS)

    McVey, J.B.

    1983-01-01

    Two new calculational models have been developed in conjunction with the SAVTEC experimental program. These models have been used to analyze data from experimental close-spaced converters, providing values for spacing, electrode work functions, and converter efficiency. They have also been used to make performance predictions for such converters over a wide range of conditions. Both models are intended for use in the collisionless (Knudsen) regime. They differ from each other in that the simpler one uses a Langmuir-type formulation which only considers electrons emitted from the emitter. This approach is implemented in the LVD (Langmuir Vacuum Diode) computer program, which has the virtue of being both simple and fast. The more complex model also includes both Saha-Langmuir emission of positive cesium ions from the emitter and collector back emission. Computer implementation is by the KMD1 (Knudsen Mode Diode) program. The KMD1 model derives the particle distribution functions from the Vlasov equation. From these the particle densities are found for various interelectrode motive shapes. Substituting the particle densities into Poisson's equation gives a second order differential equation for potential. This equation can be integrated once analytically. The second integration, which gives the interelectrode motive, is performed numerically by the KMD1 program. This is complicated by the fact that the integrand is often singular at one end point of the integration interval. The program performs a transformation on the integrand to make it finite over the entire interval. Once the motive has been computed, the output voltage, current density, power density, and efficiency are found. The program is presently unable to operate when the ion richness ratio β is between about 0.8 and 1.0, due to the occurrence of oscillatory motives
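
    The singular-endpoint treatment mentioned for KMD1 can be illustrated independently of the thermionic physics. Assuming an integrand that diverges like 1/sqrt(x - a) at the lower limit, the substitution x = a + t**2 removes the singularity before numerical quadrature; this is a generic sketch, not the KMD1 transformation itself.

      import numpy as np
      from scipy.integrate import quad

      a, b = 0.0, 1.0
      f = lambda x: np.cos(x) / np.sqrt(x - a)        # integrable, but infinite at x = a

      # Substitution x = a + t**2, dx = 2 t dt  =>  integrand becomes 2*cos(a + t**2), finite everywhere.
      g = lambda t: 2.0 * np.cos(a + t**2)
      value, err = quad(g, 0.0, np.sqrt(b - a))
      print(value)                                    # ~1.809, the value of the original integral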

  17. Calculation of extreme wind atlases using mesoscale modeling. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, X.G.; Badger, J.

    2012-06-15

    The objective of this project is to develop new methodologies for extreme wind atlases using mesoscale modeling. Three independent methodologies have been developed. All three methodologies are targeted at confronting and solving the problems and drawbacks in existing methods for extreme wind estimation regarding the use of modeled data (coarse resolution, limited representation of storms) and measurements (short period and technical issues). The first methodology is called the selective dynamical downscaling method. For a chosen area, we identify the yearly strongest storms through global reanalysis data at each model grid point and run a mesoscale model, here the Weather Research and Forecasting (WRF) model, for all storms identified. Annual maximum winds and corresponding directions from each mesoscale grid point are then collected, post-processed and used in a Gumbel fit to obtain the 50-year wind. The second methodology is called the statistical-dynamical downscaling method. For a chosen area, the geostrophic winds at a representative grid point from the global reanalysis data are used to obtain the annual maximum winds in 12 sectors for a period of 30 years. This results in 360 extreme geostrophic winds. Each of the 360 winds is used as a stationary forcing in a mesoscale model, here KAMM. For each mesoscale grid point the annual maximum winds are post-processed and used in a Gumbel fit to obtain the 50-year wind. For the above two methods, the post-processing is an essential part. It calculates the speedup effects using a linear computation model (LINCOM) and corrects the winds from the mesoscale modeling to a standard condition, i.e. 10 m above a homogeneous surface with a roughness length of 5 cm. Winds of the standard condition can then be put into a microscale model to resolve the local terrain and roughness effects around particular turbine sites. By converting both the measured and modeled winds to the same surface conditions through the post
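
    The Gumbel step shared by both methodologies can be sketched as follows; the annual maxima below are invented numbers standing in for the post-processed mesoscale winds at one grid point.

      import numpy as np
      from scipy.stats import gumbel_r

      annual_max = np.array([21.3, 24.8, 19.7, 26.1, 23.4, 22.0, 27.5, 20.9,
                             25.2, 23.9, 22.7, 24.1, 26.8, 21.8, 23.0])   # m/s, standard conditions

      loc, scale = gumbel_r.fit(annual_max)                        # Gumbel fit to annual maxima
      u50 = gumbel_r.ppf(1.0 - 1.0 / 50.0, loc=loc, scale=scale)   # 50-year return wind
      print(round(u50, 1))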

  18. Calculating ε'/ε in the standard model

    International Nuclear Information System (INIS)

    Sharpe, S.R.

    1988-01-01

    The ingredients needed in order to calculate ε' and ε are described. Particular emphasis is given to the non-perturbative calculations of matrix elements by lattice methods. The status of the electromagnetic contribution to ε' is reviewed. 15 refs

  19. Comparative analysis of calculation models of railway subgrade

    Directory of Open Access Journals (Sweden)

    I.O. Sviatko

    2013-08-01

    Full Text Available Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behaviour under load. When calculating the interaction of the soil subgrade with the upper track structure, it is very important to determine the shear resistance parameters and the parameters that govern the development of deep deformations in the foundation soil. The aim is to search for generalized numerical modelling methods for the behaviour of embankment foundation soil that include not only the analysis of the foundation stress state but also of its deformed state. Methodology. An analysis of existing modern and classical methods of numerical simulation of soil samples under static load was made. Findings. According to traditional methods of analysing the behaviour of soil masses, limiting and qualitatively estimating subgrade deformations is possible only indirectly, through the estimation of stresses and the comparison of the obtained values with the boundary ones. Originality. A new computational model was proposed in which not only the classical analysis of the soil subgrade stress state is applied, but the deformed state is also taken into account. Practical value. The analysis showed that for an accurate analysis of the behaviour of soil masses it is necessary to develop a generalized methodology for analysing the rolling stock - railway subgrade interaction, which uses not only the classical approach of analysing the soil subgrade stress state, but also takes into account its deformed state.

  20. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
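
    A toy version of the probabilistic ranking described above, assuming a handful of candidate holdup configurations, a precomputed forward model and Gaussian measurement noise; all numbers are invented, and the project itself uses full radiation transport calculations as the forward model.

      import numpy as np

      # Predicted detector responses (rows: candidate holdup configurations, cols: detectors).
      forward = np.array([[12.0,  4.0, 1.5],
                          [ 9.0,  6.5, 2.0],
                          [14.5,  3.0, 1.0]])
      measured = np.array([11.2, 4.6, 1.7])       # measured gamma count rates
      sigma = np.array([1.0, 0.5, 0.3])           # measurement uncertainties
      prior = np.array([1 / 3, 1 / 3, 1 / 3])     # no initial preference among candidates

      # Gaussian likelihood of the measurements given each candidate configuration.
      chi2 = np.sum(((forward - measured) / sigma) ** 2, axis=1)
      likelihood = np.exp(-0.5 * chi2)

      posterior = prior * likelihood
      posterior /= posterior.sum()                # Bayes' theorem: normalise over candidates
      print(posterior)                            # plausibility of each holdup configuration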

  1. Visual CRO display of pulse height distribution including discriminator setting for a single channel X-ray analyser

    International Nuclear Information System (INIS)

    Shaw, S.E.

    1979-01-01

    An outline for a simple pulse spectroscope which attaches to a standard laboratory CRO is presented. The peak amplitude voltage of each pulse from the linear amplifier of a single channel X-ray analyser is stored for the duration of one oscilloscope trace. For each amplifier pulse, input from the discriminator is tested and if there is coincidence of pulses the oscilloscope beam is blanked for approximately the first 2 cm of its traverse across the screen. Repetition of pulses forms a pulse height distribution with a rectangular dark area marking the position of the discriminator window. (author)

  2. Changes in I K, ACh single-channel activity with atrial tachycardia remodelling in canine atrial cardiomyocytes.

    Science.gov (United States)

    Voigt, Niels; Maguy, Ange; Yeh, Yung-Hsin; Qi, Xiaoyan; Ravens, Ursula; Dobrev, Dobromir; Nattel, Stanley

    2008-01-01

    Although atrial tachycardia (AT) remodelling promotes agonist-independent, constitutively active, acetylcholine-regulated K+-current (I K,ACh) that increases susceptibility to atrial fibrillation (AF), the underlying changes in I K,Ach channel function are unknown. This study aimed to establish how AT remodelling affects I K,ACh single-channel function. I K,ACh single-channel activity was studied via cell-attached patch-clamp in isolated left atrial cardiomyocytes of control and AT (7 days, 400 min(-1)) dogs. Atrial tachycardia prolonged the mean duration of induced AF from 44 +/- 22 to 413 +/- 167 s, and reduced atrial effective refractory period at a 360 ms cycle length from 126 +/- 3 to 74 +/- 5 ms (n = 9/group, P ACh conductance and rectification properties were sparse under control conditions. Atrial tachycardia induced prominent agonist-independent I K,ACh activity because of increased opening frequency (fo) and open probability (Po: approximately seven- and 10-fold, respectively, vs. control), but did not alter open time-constant, single-channel conductance, and membrane density. With maximum I K,ACh activation (10 micromol/L carbachol), channel Po was enhanced much more in control cells ( approximately 42-fold) than in AT-remodelled myocytes (approximately five-fold). The selective Kir3 current blocker tertiapin-Q (100 nmol/L) reduced fo and Po at -100 mV by 48 and 51%, respectively (P ACh. Atrial tachycardia had no significant effect on mRNA or protein expression of either of the subunits (Kir3.1, Kir3.4) underlying I K,ACh. Atrial tachycardia increases agonist-independent constitutive I K,ACh single-channel activity by enhancing spontaneous channel opening, providing a molecular basis for AT effects on macroscopic I K,ACh observed in previous studies, as well as associated refractoriness abbreviation and tertiapin-suppressible AF promotion. These results suggest an important role for constitutive I K,Ach channel opening in AT remodelling and support its

  3. SINGLE CHANNEL SEISMIC APPLICATION FOR GAS CHARGED SEDIMENT RECONNAISSANCE IN GEOHAZARD STUDY OF PORT CONSTRUCTION AT WETLAND AREA

    Directory of Open Access Journals (Sweden)

    Taufan Wiguna

    2016-10-01

    Gas-charged sediment is one of the parameters considered in geohazard studies for infrastructure, especially in swamp areas. Instability of sediment layers, for example subsidence and landslides, creates a geohazard potential that can be caused by gas-charged sediment. Single-channel seismic data can be used to identify gas-charged sediment locations. Seabed morphology is obtained from bathymetry and tidal surveys. Interpretation of the seismic profiles shows gas-charged sediment indications in Line A and Line B, marked by the presence of an acoustic turbid zone and acoustic blanking. The Line A and Line B locations will be the focus of the next geotechnical study for the port construction.

  4. Full waveform modelling and misfit calculation using the VERCE platform

    Science.gov (United States)

    Garth, Thomas; Spinuso, Alessandro; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schwichtenberg, Horst; Frank, Anton; Vilotte, Jean-Pierre; Rietbrock, Andreas

    2016-04-01

    simulated and recorded waveforms, enabling seismologists to specify and steer their misfit analyses using existing python tools and libraries such as Pyflex and the dispel4py data-intensive processing library. All these processes, including simulation, data access, pre-processing and misfit calculation, are presented to the users of the gateway as dedicated and interactive workspaces. The VERCE platform can also be used to produce animations of seismic wave propagation through the velocity model, and synthetic shake maps. We demonstrate the functionality of the VERCE platform with two case studies, using the pre-loaded velocity model and mesh for Chile and Northern Italy. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and aid full waveform tomographic and source inversion, synthetic shake map production and other full waveform applications, in a wide range of tectonic settings.

  5. Zn2+-induced subconductance events in cardiac Na+ channels prolonged by batrachotoxin. Current-voltage behavior and single-channel kinetics

    Science.gov (United States)

    1991-01-01

    The mechanism of voltage-dependent substate production by external Zn2+ in batrachotoxin-modified Na+ channels from canine heart was investigated by analysis of the current-voltage behavior and single-channel kinetics of substate events. At the single-channel level the addition of external Zn2+ results in an increasing frequency of substate events with a mean duration of approximately 15-25 ms for the substate dwell time observed in the range of -70 to +70 mV. Under conditions of symmetrical 0.2 M NaCl, the open state of cardiac Na+ channels displays ohmic current-voltage behavior in the range of -90 to +100 mV, with a slope conductance of 21 pS. In contrast, the Zn2+-induced substate exhibits significant outward rectification with a slope conductance of 3.1 pS in the range of -100 to -50 mV and 5.1 pS in the range of +50 to +100 mV. Analysis of dwell-time histograms of substate events as a function of Zn2+ concentration and voltage led to the consideration of two types of models that may explain this behavior. Using a simple one-site blocking model, the apparent association rate for Zn2+ binding is more strongly voltage dependent (decreasing e-fold per +60 mV) than the Zn2+ dissociation rate (increasing e-fold per +420 mV). However, this simple blocking model cannot account for the dependence of the apparent dissociation rate on Zn2+ concentration. To explain this result, a four-state kinetic scheme involving a Zn2+-induced conformational change from a high conductance conformation to a substate conformation is proposed. This model, similar to one introduced by Pietrobon et al. (1989. J. Gen. Physiol. 94:1-24) for H+-induced substate behavior in L-type Ca2+ channels, is able to simulate the kinetic and equilibrium behavior of the primary Zn2+-induced substate process in heart Na+ channels. This model implies that binding of Zn2+ greatly enhances conversion of the open, ohmic channel to a low conductance conformation with an asymmetric energy profile for

  6. Improvements in the model of neutron calculations for research reactors

    International Nuclear Information System (INIS)

    Calzetta, O.; Leszczynski, F.

    1987-01-01

    Within the research program on neutron physics calculations being carried out in the Nuclear Engineering Division at the Centro Atomico Bariloche, the errors that typical approximations introduce into the final results are being investigated. For MTR-type research reactors, two approximations are examined for both high and low enrichment: the treatment of the geometry and the method used to calculate few-group cell cross sections, particularly in the resonance energy region. Commonly, the cell constants used for the whole-reactor calculation are obtained by homogenizing the full fuel elements by means of one-dimensional calculations. An improvement is made that explicitly includes the fuel element frames in the core calculation geometry. In addition, a detailed treatment in energy and space is used to find the resonance few-group cross sections, and the results of the detailed and approximate calculations are compared. The minimum number and best mesh of energy groups needed for the cell calculations are also determined. (Author)

  7. Improvements in the model of neutron calculations for research reactors

    International Nuclear Information System (INIS)

    Calzetta, Osvaldo; Leszczynski, Francisco

    1987-01-01

    Within the research program on neutron physics calculations being carried out in the Nuclear Engineering Division at the Centro Atomico Bariloche, the errors that typical approximations introduce into the final results are investigated. For MTR-type research reactors, two approximations are examined for both high and low enrichment: the treatment of the geometry and the method used to calculate few-group cell cross sections, particularly in the resonance energy region. Commonly, the cell constants used for the whole-reactor calculation are obtained by homogenizing the full fuel elements with one-dimensional calculations. An improvement is made that explicitly includes the fuel element frames in the core calculation geometry. In addition, a detailed treatment in energy and space is used to find the resonance few-group cross sections, and the results of the detailed and approximate calculations are compared. The minimum number and best mesh of energy groups needed for the cell calculations are also determined. (Author) [es

  8. 40 CFR 600.207-93 - Calculation of fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ... Values § 600.207-93 Calculation of fuel economy values for a model type. (a) Fuel economy values for a... update sales projections at the time any model type value is calculated for a label value. (iii) The... those intended for sale in other states, he will calculate fuel economy values for each model type for...

  9. Calculational advance in the modeling of fuel-coolant interactions

    International Nuclear Information System (INIS)

    Bohl, W.R.

    1982-01-01

    A new technique is applied to numerically simulate a fuel-coolant interaction. The technique is based on the ability to calculate separate space- and time-dependent velocities for each of the participating components. In the limiting case of a vapor explosion, this framework allows calculation of the pre-mixing phase of film boiling and interpenetration of the working fluid by hot liquid, which is required for extrapolating from experiments to a hypothetical reactor accident. Qualitative results compare favorably with published experimental data in which an iron-alumina mixture was poured into water. Differing results are predicted with LMFBR materials

  10. Comparison of Calculation Models for Bucket Foundation in Sand

    DEFF Research Database (Denmark)

    Vaitkunaite, Evelina; Molina, Salvador Devant; Ibsen, Lars Bo

    The possibility of fast and rather precise preliminary offshore foundation design is desirable. The ultimate limit state of a bucket foundation is investigated using three different geotechnical calculation tools: an analytical method [Ibsen 2001], LimitState:GEO and Plaxis 3D. The study has focuse...

  11. National Stormwater Calculator - Version 1.1 (Model)

    Science.gov (United States)

    EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...

  12. An Optimized Design of Single-Channel Beta-Gamma Coincidence Phoswich Detector by Geant4 Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Weihua Zhang

    2011-01-01

    An optimized single-channel phoswich well detector design has been proposed and assessed in order to improve the beta-gamma coincidence measurement sensitivity for xenon radioisotopes. This newly designed phoswich well detector consists of a plastic beta counting cell (BC404) embedded in a CsI(Tl) crystal coupled to a photomultiplier tube. The BC404 is configured in a cylindrical pipe shape to minimise light collection deterioration. The CsI(Tl) crystal consists of a rectangular part and a semicylindrical scintillation part acting as a light reflector to increase light gathering. Compared with a PhosWatch detector, the final optimized detector geometry showed a 15% improvement in the energy resolution of the 131mXe 129.4 keV conversion electron peak. The predicted beta-gamma coincidence efficiencies for xenon radioisotopes have also been improved accordingly.

  13. Joint synthetic aperture radar plus ground moving target indicator from single-channel radar using compressive sensing

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas; Hallquist, Aaron; Anderson, Hyrum

    2017-10-17

    The various embodiments presented herein relate to utilizing an operational single-channel radar to collect and process synthetic aperture radar (SAR) and ground moving target indicator (GMTI) imagery from a same set of radar returns. In an embodiment, data is collected by randomly staggering a slow-time pulse repetition interval (PRI) over a SAR aperture such that a number of transmitted pulses in the SAR aperture is preserved with respect to standard SAR, but many of the pulses are spaced very closely enabling movers (e.g., targets) to be resolved, wherein a relative velocity of the movers places them outside of the SAR ground patch. The various embodiments of image reconstruction can be based on compressed sensing inversion from undersampled data, which can be solved efficiently using such techniques as Bregman iteration. The various embodiments enable high-quality SAR reconstruction, and high-quality GMTI reconstruction from the same set of radar returns.

  14. Design and Construction of an Autonomous Low-Cost Pulse Height Analyzer and a Single Channel Analyzer for Moessbauer Spectroscopy

    International Nuclear Information System (INIS)

    Velasquez, A.A.; Trujillo, J.M.; Morales, A.L.; Tobon, J.E.; Gancedo, J.R.; Reyes, L.

    2005-01-01

    A multichannel analyzer (MCA) and a single-channel analyzer (SCA) for Moessbauer spectrometry applications have been designed and built. Both systems include low-cost digital and analog components. A microcontroller manages, in either PHA or MCS mode, the data acquisition, data storage and setting of the pulse discriminator limits. The user can monitor the system from an external PC through the serial port using the RS232 communication protocol. A graphic interface made with the LabVIEW software allows the user to digitally adjust the lower and upper limits of the pulse discriminator, and to visualize as well as save the PHA spectra in a file. The system has been tested using a 57Co radioactive source and several iron compounds, yielding satisfactory results. The low cost of its design, construction and maintenance makes this equipment an attractive choice when assembling a Moessbauer spectrometer

  15. 3-lead acquisition using single channel ECG device developed on AD8232 analog front end for wireless ECG application

    Science.gov (United States)

    Agung, Mochammad Anugrah; Basari

    2017-02-01

    Electrocardiogram (ECG) devices measure the electrical activity of the heart muscle to determine heart conditions. ECG signal quality is the key factor in determining diseases of the heart. This paper presents the design of 3-lead acquisition on a single-channel wireless ECG device developed on the AD8232 chip platform using a microcontroller. To distinguish the system from others, a 2.4 GHz monopole antenna is used to send and receive the ECG signal. The results show that the system can still receive the ECG signal at up to 15 meters under line-of-sight (LOS) conditions. The shape of the ECG signal closely matches the expected signal, although some delays occur between two consecutive pulses. As a further step, the system will be fitted with an on-body antenna in order to investigate body-to-body communication, which will introduce connectivity conditions different from those tested here.

  16. Absolute determination of zero-energy phase shifts for multiparticle single-channel scattering: Generalized Levinson theorem

    International Nuclear Information System (INIS)

    Rosenberg, L.; Spruch, L.

    1996-01-01

    Levinson's theorem relates the zero-energy phase shift δ for potential scattering in a given partial wave l, by a spherically symmetric potential that falls off sufficiently rapidly, to the number of bound states of that l supported by the potential. An extension of this theorem is presented that applies to single-channel scattering by a compound system initially in its ground state. As suggested by Swan [Proc. R. Soc. London Ser. A 228, 10 (1955)], the extended theorem differs from that derived for potential scattering; even in the absence of composite bound states δ may differ from zero as a consequence of the Pauli principle. The derivation given here is based on the introduction of a continuous auxiliary 'length phase' η, defined modulo π for l=0 by expressing the scattering length as A = a cot η, where a is a characteristic length of the target. Application of the minimum principle for the scattering length determines the branch of the cotangent curve on which η lies and, by relating η to δ, an absolute determination of δ is made. The theorem is applicable, in principle, to single-channel scattering in any partial wave for e±-atom and nucleon-nucleus systems. In addition to a knowledge of the number of composite bound states, information (which can be rather incomplete) concerning the structure of the target ground-state wave function is required for an explicit, absolute determination of the phase shift δ. As for Levinson's original theorem for potential scattering, no additional information concerning the scattering wave function or scattering dynamics is required. copyright 1996 The American Physical Society

  17. Perturbation theory calculations of model pair potential systems

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Jianwu [Iowa State Univ., Ames, IA (United States)

    2016-01-01

    Helmholtz free energy is one of the most important thermodynamic properties for condensed matter systems. It is closely related to other thermodynamic properties such as chemical potential and compressibility. It is also the starting point for studies of interfacial properties and phase coexistence if the free energies of different phases can be obtained. In this thesis, we use an approach based on the Weeks-Chandler-Andersen (WCA) perturbation theory to calculate the free energy of both the solid and liquid phases of Lennard-Jones pair potential systems and the free energy of liquid states of Yukawa pair potentials. Our results indicate that perturbation theory provides an accurate approach to free energy calculations of liquid and solid phases, based upon comparisons with results from molecular dynamics (MD) and Monte Carlo (MC) simulations.
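
    As a rough illustration of the WCA idea in the record above, the sketch below splits a Lennard-Jones pair potential at its minimum into the purely repulsive WCA reference part and the attractive perturbation, and estimates the first-order free-energy correction as the average of the perturbation over reference-system pair distances. The split at the potential minimum is standard WCA; the sampled distances and parameter values are placeholders, not data from the thesis.

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def wca_split(r, eps=1.0, sigma=1.0):
    """WCA decomposition: repulsive reference u0 plus attractive perturbation u1."""
    rmin = 2.0 ** (1.0 / 6.0) * sigma           # location of the LJ minimum
    u = lj(r, eps, sigma)
    u0 = np.where(r < rmin, u + eps, 0.0)       # shifted, purely repulsive core
    u1 = np.where(r < rmin, -eps, u)            # smooth attractive tail
    return u0, u1

# First-order perturbation theory: A ~ A0 + <U1>_0, with the average taken over
# configurations of the reference (repulsive-only) system.  Here a stand-in set of
# pair distances replaces an actual reference-system simulation.
rng = np.random.default_rng(0)
pair_distances = rng.uniform(0.95, 2.5, size=1000)
_, u1 = wca_split(pair_distances)
print("estimated first-order correction <U1>_0 per pair:", u1.mean())
```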

  18. Performance Calculations - and Appendix I - Model XC-120 (M-107)

    Science.gov (United States)

    1950-09-25

    Take off with pack and cargo and fly to the halfway point; drop pack and return to base. Take off with cargo and return to the halfway point. Gross weight defined at base without pack. ... Special conditions or Standard Aircraft Characteristics: the performance presented herein is that required by reference ( ) for Standard Aircraft Characteristics. ... Thrust horsepower available as used in the performance calculations of this report is defined as THP = ..., where BHP = engine brake horsepower from engine ...

  19. Calculation of single chain cellulose elasticity using fully atomistic modeling

    Science.gov (United States)

    Xiawa Wu; Robert J. Moon; Ashlie Martini

    2011-01-01

    Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...

  20. A modified calculation model for groundwater flowing to horizontal ...

    Indian Academy of Sciences (India)

    The simulation models for groundwater flowing to horizontal seepage wells proposed by Wang and Zhang (2007) are based on the theory of coupled seepage-pipe flow model which treats the well pipe as a highly permeable medium. However, the limitations of the existing model were found during applications. Specifically ...

  1. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  2. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  3. Comparison of the performance of net radiation calculation models

    DEFF Research Database (Denmark)

    Kjærsgaard, Jeppe Hvelplund; Cuenca, R.H.; Martinez-Cob, A.

    2009-01-01

    Daily values of net radiation are used in many applications of crop-growth modeling and agricultural water management. Measurements of net radiation are not part of the routine measurement program at many weather stations and are commonly estimated based on other meteorological parameters. ... The performance of the empirical models was nearly identical at all sites. Since the empirical models were easier to use and simpler to calibrate than the physically based models, the results indicate that the empirical models can be used as a good substitute for the physically based ones when available...

  4. CLEAR (Calculates Logical Evacuation And Response): A generic transportation network model for the calculation of evacuation time estimates

    International Nuclear Information System (INIS)

    Moeller, M.P.; Desrosiers, A.E.; Urbanik, T. II

    1982-03-01

    This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies. (author)
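
    The record notes that CLEAR computes the travel velocity on a road segment as a function of its vehicle density. The abstract does not give the exact relation, so the sketch below uses the classical Greenshields linear speed-density model as a stand-in to show how a segment travel time would follow from the current density; the speed and density values are illustrative.

```python
def greenshields_speed(density, free_flow_speed=90.0, jam_density=120.0):
    """Speed (km/h) as a linear function of density (veh/km), Greenshields model."""
    density = min(max(density, 0.0), jam_density)
    return free_flow_speed * (1.0 - density / jam_density)

def segment_travel_time(length_km, density):
    """Travel time in minutes over a segment, given its current vehicle density."""
    speed = greenshields_speed(density)
    if speed <= 0.0:
        return float("inf")      # jam density: the queue does not move this time step
    return 60.0 * length_km / speed

for d in (10, 60, 110):
    print(f"density {d:>3} veh/km -> {segment_travel_time(2.0, d):6.1f} min over a 2 km segment")
```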

  5. Gothic simulation of single-channel fuel heatup following a loss of forced flow

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X-Q; Tahir, A. [NSS, Dept. of Thermal Hydraulics Analysis, Toronto, Ontario (Canada); Parlatan, Y. [Ontario Power Generation, NSATD, Pickering, Ontario (Canada); Kwee, M. [Bruce Power, NSASD, Toronto, Ontario (Canada)

    2011-07-01

    GOTHIC v7.2 was used to develop a computer model for the simulation of 28- and 37-element fuel heat-up at a loss of forced flow. The model has accounted for the non-uniformity of both axial and radial power distributions along the fuel channel for a typical CANDU reactor. In addition, the model has also accounted for the fuel rods, end-fittings, feeders and headers. Experimental test conditions for both 28- and 37-element bundles at either low or high powers were used for model validation. GOTHIC predictions of the rod and/or pressure-tube temperatures at a variety of test locations were compared with the corresponding experimental measurements. It is found that the numerical results agree well with the experimental measurements for most of the test locations. Results have also shown that the channel venting time is sensitive to the initial temperature distribution in the feeders and headers. An imposed temperature asymmetry at the beginning will cause the channel flow to vent earlier. (author)

  6. Extraproximal approach to calculating equilibriums in pure exchange models

    Science.gov (United States)

    Antipin, A. S.

    2006-10-01

    Models of economic equilibrium are a powerful tool of mathematical modeling of various markets. However, according to many publications, there are as yet no universal techniques for finding equilibrium prices that are solutions to such models. A technique of this kind that is a natural implementation of the Walras idea of tatonnements (i.e., groping for equilibrium prices) is proposed, and its convergence is proved.

  7. Expanding of reactor power calculation model of RELAP5 code

    International Nuclear Information System (INIS)

    Lin Meng; Yang Yanhua; Chen Yuqing; Zhang Hong; Liu Dingming

    2007-01-01

    To better analyze nuclear power transients in a rod-controlled reactor core with the RELAP5 code, a best-estimate nuclear reactor thermal-hydraulic system code, it is desirable to obtain the nuclear power using not only the point neutron kinetics model but also a one-dimensional neutron kinetics model. Thus, an existing one-dimensional nuclear reactor physics code was modified to couple its neutron kinetics model with the RELAP5 thermal-hydraulic model. A detailed example test proves that the coupling is valid and correct. (authors)

  8. A Monte Carlo model of complex spectra of opacity calculations

    International Nuclear Information System (INIS)

    Klapisch, M.; Duffy, P.; Goldstein, W.H.

    1991-01-01

    We are developing a Monte Carlo method for calculating opacities of complex spectra. It should be faster than atomic structure codes and is more accurate than the UTA method. We use the idea that wavelength-averaged opacities depend on the overall properties, but not the details, of the spectrum; our spectra have the same statistical properties as real ones but the strength and energy of each line is random. In preliminary tests we can get Rosseland mean opacities within 20% of actual values. (orig.)

  9. Carbon dioxide fluid-flow modeling and injectivity calculations

    Science.gov (United States)

    Burke, Lauri

    2011-01-01

    At present, the literature lacks a geologic-based assessment methodology for numerically estimating injectivity, lateral migration, and subsequent long-term containment of supercritical carbon dioxide that has undergone geologic sequestration into subsurface formations. This study provides a method for and quantification of first-order approximations for the time scale of supercritical carbon dioxide lateral migration over a one-kilometer distance through a representative volume of rock. These calculations provide a quantified foundation for estimating injectivity and geologic storage of carbon dioxide.
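
    The record above reports first-order estimates of the time scale for lateral CO2 migration over one kilometre. A back-of-the-envelope version of such an estimate, assuming single-phase Darcy flow and illustrative rock and fluid properties (the study's actual parameter values are not given in the abstract), is sketched below.

```python
# Order-of-magnitude lateral migration time from Darcy's law:
#   q = (k / mu) * dP/dx     Darcy flux, m/s
#   v = q / phi              interstitial velocity, m/s
#   t = L / v                migration time, s
# All parameter values are illustrative assumptions, not values from the study.
k = 1e-13            # permeability, m^2 (about 100 mD)
mu = 5e-5            # supercritical CO2 viscosity, Pa*s
phi = 0.2            # porosity
dP_dx = 1e4 / 1e3    # pressure gradient, Pa/m (10 kPa over 1 km)
L = 1000.0           # migration distance, m

q = (k / mu) * dP_dx
v = q / phi
t_seconds = L / v
print(f"Darcy flux {q:.2e} m/s, interstitial velocity {v:.2e} m/s")
print(f"time to migrate 1 km: {t_seconds / 3.15e7:.0f} years")
```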

  10. Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency, we factorize a spectrogram representation of music into components corresponding to individual instruments. Based on this factorization we separate the instruments using spectrogram masking. The proposed algorithm has applications in computational auditory scene analysis, music information retrieval, and automatic music transcription.
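
    The separation method above builds on non-negative matrix factorization extended to a two-dimensional convolutive model. As background only, a minimal sketch of the plain (non-convolutive) NMF multiplicative updates that NMF2D generalizes is shown below; it is not the authors' algorithm, and the random matrix stands in for a real magnitude spectrogram.

```python
import numpy as np

def nmf(V, rank, iterations=200, seed=0):
    """Plain NMF with Euclidean multiplicative updates: V ~ W @ H, all entries >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iterations):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

V = np.random.default_rng(1).random((40, 100))   # stand-in for a magnitude spectrogram
W, H = nmf(V, rank=2)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```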

  11. Automated high-dose rate brachytherapy treatment planning for a single-channel vaginal cylinder applicator

    Science.gov (United States)

    Zhou, Yuhong; Klages, Peter; Tan, Jun; Chi, Yujie; Stojadinovic, Strahinja; Yang, Ming; Hrycushko, Brian; Medin, Paul; Pompos, Arnold; Jiang, Steve; Albuquerque, Kevin; Jia, Xun

    2017-06-01

    High dose rate (HDR) brachytherapy treatment planning is conventionally performed manually and/or with aids of preplanned templates. In general, the standard of care would be elevated by conducting an automated process to improve treatment planning efficiency, eliminate human error, and reduce plan quality variations. Thus, our group is developing AutoBrachy, an automated HDR brachytherapy planning suite of modules used to augment a clinical treatment planning system. This paper describes our proof-of-concept module for vaginal cylinder HDR planning that has been fully developed. After a patient CT scan is acquired, the cylinder applicator is automatically segmented using image-processing techniques. The target CTV is generated based on physician-specified treatment depth and length. Locations of the dose calculation point, apex point and vaginal surface point, as well as the central applicator channel coordinates, and the corresponding dwell positions are determined according to their geometric relationship with the applicator and written to a structure file. Dwell times are computed through iterative quadratic optimization techniques. The planning information is then transferred to the treatment planning system through a DICOM-RT interface. The entire process was tested for nine patients. The AutoBrachy cylindrical applicator module was able to generate treatment plans for these cases with clinical grade quality. Computation times varied between 1 and 3 min on an Intel Xeon CPU E3-1226 v3 processor. All geometric components in the automated treatment plans were generated accurately. The applicator channel tip positions agreed with the manually identified positions with submillimeter deviations and the channel orientations between the plans agreed within less than 1 degree. The automatically generated plans obtained clinically acceptable quality.
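
    The module described above computes dwell times through iterative quadratic optimization, but the abstract does not spell out the objective. The sketch below shows one common formulation of that step: a non-negative least-squares fit of dwell times so that the dose at a set of calculation points matches the prescription, using a purely hypothetical dose-rate matrix.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical dose-rate kernel: A[i, j] = dose at calculation point i per unit
# dwell time at dwell position j (in practice obtained from a TG-43 style calculation).
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(20, 8))     # 20 dose points, 8 dwell positions
prescribed = np.full(20, 600.0)             # prescribed dose at each point (cGy)

# Quadratic objective ||A t - d||^2 subject to t >= 0, solved by non-negative least squares.
dwell_times, residual = nnls(A, prescribed)
print("dwell times (s):", np.round(dwell_times, 2))
print("residual norm:", round(residual, 2))
```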

  12. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  13. Numerical calculation of path integrals : The small-polaron model

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1983-01-01

    The thermodynamic properties of the small-polaron model are studied by means of a discrete version of the Feynman path-integral representation of the partition function. This lattice model describes a fermion interacting with a boson field. The bosons are treated analytically, the fermion

  14. A review of Higgs mass calculations in supersymmetric models

    DEFF Research Database (Denmark)

    Draper, P.; Rzehak, H.

    2016-01-01

    The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass...

  15. What do business models do? Narratives, calculation and market exploration

    OpenAIRE

    Liliana Doganova; Marie Renault

    2008-01-01

    http://www.csi.ensmp.fr/Items/WorkingPapers/Download/DLWP.php?wp=WP_CSI_012.pdf; CSI WORKING PAPERS SERIES 012; International audience; Building on a case study of an entrepreneurial venture, we investigate the role played by business models in the innovation process. Rather than debating their accuracy and efficiency, we adopt a pragmatic approach to business models -- we examine them as market devices, focusing on their materiality, use and dynamics. Taking into account the variety of its f...

  16. A simple model for calculating air pollution within street canyons

    Science.gov (United States)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. Dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full scale data measured in street canyons at four European cities. The sensitivity of SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is a good agreement between estimated and observed hourly concentrations (e.g. fractional bias are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows a better performance for wind speeds >2 m s-1 than for lower wind speeds and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and that of a widely used operational dispersion model (OSPM), both using the same input information.
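
    The SEUS model above scales the in-canyon concentration with the emission rate, the canyon width, a dispersive velocity scale and the background concentration. The abstract does not give the exact functional form or the values of its two empirical parameters, so the sketch below only illustrates a scaling of that general shape; every parameter value is assumed.

```python
import math

def canyon_concentration(emission_rate, canyon_width, wind_speed,
                         traffic_turbulence=0.1, a=0.1, b=1.0, background=20.0):
    """Illustrative street-canyon scaling (not the published SEUS parameterisation):
    C = C_b + Q / (W * u_d), with u_d combining wind- and traffic-induced turbulence.
    Q in ug m^-1 s^-1 per unit canyon length, W in m, speeds in m/s, C in ug/m^3."""
    u_d = math.hypot(a * wind_speed, b * traffic_turbulence)
    return background + emission_rate / (canyon_width * u_d)

for u in (1.0, 2.0, 5.0):
    print(f"wind {u} m/s -> NOx ~ {canyon_concentration(50.0, 20.0, u):.0f} ug/m3")
```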

  17. Linear Regression on Sparse Features for Single-Channel Speech Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Olsson, Rasmus Kongsgaard

    2007-01-01

    In this work we address the problem of separating multiple speakers from a single microphone recording. We formulate a linear regression model for estimating each speaker based on features derived from the mixture. The employed feature representation is a sparse, non-negative encoding of the speech mixture in terms of pre-learned speaker-dependent dictionaries. Previous work has shown that this feature representation by itself provides some degree of separation. We show that the performance is significantly improved when regression analysis is performed on the sparse, non-negative features, both compared to linear regression on spectral features and compared to separation based directly on the non-negative sparse features.

  18. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method for quantifying the uncertainty of VCs with a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
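
    The study above analyses the truncation error of the trapezoidal double rule (TDR) and Simpson's double rule (SDR) for volume calculation on a regular grid DEM. For reference, a minimal version of the TDR step, applying composite trapezoidal weights along both grid axes, is sketched below on a synthetic Gauss surface; the grid size and surface are illustrative, not those of the paper.

```python
import numpy as np

def volume_tdr(z, dx, dy):
    """Volume under a regular-grid surface z[i, j] using the trapezoidal double rule."""
    wx = np.ones(z.shape[1]); wx[0] = wx[-1] = 0.5   # weights 1/2 at boundary columns
    wy = np.ones(z.shape[0]); wy[0] = wy[-1] = 0.5   # weights 1/2 at boundary rows
    return dx * dy * float(wy @ z @ wx)

# Synthetic Gauss surface on a regular grid, a common test surface in DEM accuracy studies.
x = np.linspace(-3.0, 3.0, 121)
y = np.linspace(-3.0, 3.0, 121)
X, Y = np.meshgrid(x, y)
Z = np.exp(-(X**2 + Y**2))

# The analytic volume over the whole plane is pi; the gridded value is slightly smaller.
print("TDR volume:", volume_tdr(Z, x[1] - x[0], y[1] - y[0]))
```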

  19. An hydrodynamic model for the calculation of oil spills trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Paladino, Emilio Ernesto; Maliska, Clovis Raimundo [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica. Lab. de Dinamica dos Fluidos Computacionais]. E-mails: emilio@sinmec.ufsc.br; maliska@sinmec.ufsc.br

    2000-07-01

    The aim of this paper is to present a mathematical model, and its numerical treatment, for forecasting oil spill trajectories at sea. Knowledge of the trajectory followed by an oil slick spilled on the sea is of fundamental importance for estimating potential risks in pipeline and tanker route selection, and for combating the pollution using floating barriers, detergents, etc. In order to estimate these slick trajectories, a new model based on the mass and momentum conservation equations is presented. The model considers the spreading in the regimes where inertial and viscous forces counterbalance gravity, and takes into account the effects of winds and water currents. The inertial forces are considered both for the spreading and for the displacement of the oil slick, i.e., their effect on the movement of the mass center of the slick is considered. The mass loss caused by oil evaporation is also taken into account. The numerical model is developed in generalized coordinates, making it easily applicable to complex coastal geographies. (author)

  20. Uncertain hybrid model for the response calculation of an alternator

    International Nuclear Information System (INIS)

    Kuczkowiak, Antoine

    2014-01-01

    The complex structural dynamic behavior of alternators must be well understood in order to ensure their reliable and safe operation. The numerical model is, however, difficult to construct, mainly because of a high level of uncertainty. The objective of this work is to provide decision-support tools for assessing the vibratory levels in operation before the alternator is restarted. Based on info-gap theory, a first decision-support tool is proposed; the objective here is to assess the robustness of the dynamical response to the uncertain modal model. Based on real data, the calibration of an info-gap model of uncertainty is also proposed in order to enhance its fidelity to reality. Then, the extended constitutive relation error is used to expand identified mode shapes, which are used to assess the vibratory levels. A robust expansion process is proposed in order to obtain expanded mode shapes that are robust to parametric uncertainties. In the presence of lack of knowledge, the trade-off between fidelity to data and robustness to uncertainties, which expresses that robustness improves as fidelity deteriorates, is illustrated on an industrial structure by using both reduced-order model and surrogate model techniques. (author)

  1. 40 CFR 600.207-86 - Calculation of fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ... Values § 600.207-86 Calculation of fuel economy values for a model type. (a) Fuel economy values for a... update sales projections at the time any model type value is calculated for a label value. (iii) The... the projected sales and fuel economy values for each base level within the model type. (1) If the...

  2. Model for calculation of concentration and load on behalf of accidents with radioactive materials

    International Nuclear Information System (INIS)

    Janssen, L.A.M.; Heugten, W.H.H. van

    1987-04-01

    Within the project 'Information and calculation system for disaster combatment', commissioned by the Dutch government, a demonstration model has been developed for an accident diagnosis system. In this demonstration, a model is used to calculate the concentration and dose distributions caused by incidental emissions of limited duration. This model is described in this report. 4 refs.; 2 figs.; 3 tabs

  3. ddpcRquant: threshold determination for single channel droplet digital PCR experiments.

    Science.gov (United States)

    Trypsteen, Wim; Vynck, Matthijs; De Neve, Jan; Bonczkowski, Pawel; Kiselinova, Maja; Malatinkova, Eva; Vervisch, Karen; Thas, Olivier; Vandekerckhove, Linos; De Spiegelaere, Ward

    2015-07-01

    Digital PCR is rapidly gaining interest in the field of molecular biology for absolute quantification of nucleic acids. However, the first generation of platforms still needs careful validation and requires a specific methodology for data analysis to distinguish negative from positive signals by defining a threshold value. The currently described methods to assess droplet digital PCR (ddPCR) are based on an underlying assumption that the fluorescent signal of droplets is normally distributed. We show that this normality assumption does not likely hold true for most ddPCR runs, resulting in an erroneous threshold. We suggest a methodology that does not make any assumptions about the distribution of the fluorescence readouts. A threshold is estimated by modelling the extreme values in the negative droplet population using extreme value theory. Furthermore, the method takes shifts in baseline fluorescence between samples into account. An R implementation of our method is available, allowing automated threshold determination for absolute ddPCR quantification using a single fluorescent reporter.
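
    The method above estimates a threshold by modelling the extreme values of the negative-droplet fluorescence population instead of assuming normality. A simplified sketch of that idea (block maxima of the putative negative droplets fitted with a generalized extreme value distribution, threshold taken as a high quantile) is given below; the published ddpcRquant procedure also handles baseline shifts and resampling, and all data here are synthetic.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic fluorescence readouts: a skewed negative-droplet cloud plus a few positives.
negatives = rng.gamma(shape=20.0, scale=50.0, size=15000)   # clearly non-normal
positives = rng.normal(loc=6000.0, scale=150.0, size=100)
droplets = np.concatenate([negatives, positives])

# Crude guard against obvious positives, then block maxima of the remaining droplets.
bulk = droplets[droplets < np.quantile(droplets, 0.99)]
blocks = bulk[: bulk.size - bulk.size % 100].reshape(-1, 100)
block_maxima = blocks.max(axis=1)

# Fit a generalized extreme value distribution and take a high quantile as the threshold.
shape, loc, scale = genextreme.fit(block_maxima)
threshold = genextreme.ppf(0.999, shape, loc=loc, scale=scale)

print(f"estimated threshold: {threshold:.0f}")
print("droplets called positive:", int((droplets > threshold).sum()))
```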

  4. Resolving Difficulties of a Single-Channel Partial-Wave Analysis

    Science.gov (United States)

    Hunt, Brian; Manley, D. Mark

    2016-03-01

    The goal of our research is to determine better the properties of nucleon resonances using techniques of a global multichannel partial-wave analysis. Currently, many predicted resonances have not been found, while the properties of several known resonances are relatively uncertain. To resolve these issues, one must analyze many different reactions in a multichannel fit. Other groups generally approach this problem by generating an energy-dependent fit from the start. This is a fit where all channels are analyzed together. The method is powerful, but due to the complex nature of resonances, certain model-dependent assumptions have to be introduced from the start. The current work tries to resolve these issues by first generating single-energy solutions in which experimental data are analyzed in narrow energy bins. The single-energy solutions can then be used to constrain the energy-dependent solution in a comparatively unbiased manner. Our work focuses on adding three new single-energy solutions into the global fit. These reactions are γp --> ηp , γn --> ηn , and γp -->K+ Λ . During this talk, I will discuss the difficulties of this approach, our methods to overcome these difficulties, and a few preliminary results. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Medium Energy Nuclear Physics, under Award Nos. DE-FG02-01ER41194 and DE-SC0014323 and by the Kent State University Department of Physics.

  5. A modified calculation model for groundwater flowing to horizontal ...

    Indian Academy of Sciences (India)

    well pipe and aquifer couples the turbulent flow inside the horizontal seepage well with laminar flow in the aquifer. ... In the well pipe, the relationship between hydraulic head loss and flow velocity ... the steady-state mathematical model is developed for groundwater flowing to the horizontal seepage well under a river valley.

  6. Source data for modeling of thermal engineering calculations

    Directory of Open Access Journals (Sweden)

    Charvátová Pavlína

    2018-01-01

    Demands on thermal insulation are increasing. Their more accurate assessment by computers leads to increasingly large differences between computational models and reality. The result is an increasingly problematic optimization of building design. One of the key initial parameters is climatological data.

  7. A calculation model for a HTR core seismic response

    International Nuclear Information System (INIS)

    Buland, P.; Berriaud, C.; Cebe, E.; Livolant, M.

    1975-01-01

    The paper presents the experimental results obtained at Saclay on a HTGR core model and comparisons with analytical results. Two series of horizontal tests have been performed on the shaking table VESUVE: sinusoidal test and time history response. Acceleration of graphite blocks, forces on the boundaries, relative displacement of the core and PCRB model, impact velocity of the blocks on the boundaries were recorded. These tests have shown the strongly non-linear dynamic behaviour of the core. The resonant frequency of the core is dependent on the level of the excitation. These phenomena have been explained by a computer code, which is a lumped mass non-linear model. Good correlation between experimental and analytical results was obtained for impact velocities and forces on the boundaries. This comparison has shown that the damping of the core is a critical parameter for the estimation of forces and velocities. Time history displacement at the level of PCRV was reproduced on the shaking table. The analytical model was applied to this excitation and good agreement was obtained for forces and velocities. (orig./HP) [de

  8. Calculation of benchmarks with a shear beam model

    NARCIS (Netherlands)

    Hendriks, M.A.N.; Boer, A.; Rots, J.G.; Ferreira, D.

    2015-01-01

    Fiber models for beam and shell elements allow for relatively rapid finite element analysis of concrete structures and structural elements. This project aims at the development of the formulation of such elements and a pilot implementation. Standard nonlinear fiber beam formulations do not account

  9. Reactor accident calculation models in use in the Nordic countries

    International Nuclear Information System (INIS)

    Tveten, U.

    1984-01-01

    The report relates to a subproject under a Nordic project called ''Large reactor accidents - consequences and mitigating actions''. In the first part of the report short descriptions of the various models are given. A systematic list by subject is then given. In the main body of the report chapter and subchapter headings are by subject. (Auth.)

  10. Semiclassical calculation for collision induced dissociation. II. Morse oscillator model

    International Nuclear Information System (INIS)

    Rusinek, I.; Roberts, R.E.

    1978-01-01

    A recently developed semiclassical procedure for calculating collision-induced dissociation probabilities P^diss is applied to the collinear collision between a particle and a Morse oscillator diatomic. The particle-diatom interaction is described with a repulsive exponential potential function. P^diss is reported for a system of three identical particles, as a function of collision energy E_t and initial vibrational state of the diatomic n_1. The results are compared with the previously reported values for the collision between a particle and a truncated harmonic oscillator. The two studies show similar features, namely: (a) there is an oscillatory structure in the P^diss energy profiles, which is directly related to n_1; (b) P^diss becomes noticeable (≳ 10^-3) for E_t values appreciably higher than the energetic threshold; (c) vibrational enhancement (inhibition) of collision-induced dissociation persists at low (high) energies; and (d) good agreement between the classical and semiclassical results is found above the classical dynamic threshold. Finally, the convergence of P^diss with increasing box length is shown to be rapid and satisfactory

  11. Approximate models for neutral particle transport calculations in ducts

    International Nuclear Information System (INIS)

    Ono, Shizuca

    2000-01-01

    The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)

  12. Generic model for calculating carbon footprint of milk using four different LCA modelling approaches

    DEFF Research Database (Denmark)

    Dalgaard, Randi; Schmidt, Jannick Højrup; Flysjö, Anna

    2014-01-01

    The aim of the study is to develop a tool which can be used for calculation of the carbon footprint (using a life cycle assessment (LCA) approach) of milk, both at a farm level and at a national level. The functional unit is '1 kg energy corrected milk (ECM) at farm gate' and the applied methodology is LCA. The model includes switches that enable, within the same scope, transformation of the results to comply with 1) consequential LCA, 2) allocation/average modelling (or 'attributional LCA'), 3) PAS 2050 and 4) the International Dairy Federation's (IDF) guide to standard life cycle assessment.

  13. The curvature calculation mechanism based on simple cell model.

    Science.gov (United States)

    Yu, Haiyang; Fan, Xingyu; Song, Aiqi

    2017-07-20

    A conclusion has not yet been reached on how exactly the human visual system detects curvature. This paper demonstrates how orientation-selective simple cells can be used to construct curvature-detecting neural units. Through fixed arrangements, multiple plurality cells were constructed to simulate curvature cells whose output is proportional to curvature. In addition, this paper offers a solution to the problem of a narrow detection range under fixed resolution by selecting an output value across multiple resolutions. Curvature cells can be treated as concrete models of an end-stopped mechanism, and they can be used to further understand "curvature-selective" characteristics and to explain basic psychophysical findings and perceptual phenomena in current studies.

  14. Accurate modeling of defects in graphene transport calculations

    Science.gov (United States)

    Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian

    2018-01-01

    We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.

  15. Road-Aided Ground Slowly Moving Target 2D Motion Estimation for Single-Channel Synthetic Aperture Radar

    Directory of Open Access Journals (Sweden)

    Zhirui Wang

    2016-03-01

    To detect and estimate ground slowly moving targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target’s position on the road as well as its radial velocity can be determined according to the target’s offset distance and traffic rules. Furthermore, the target’s azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with the traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in a stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.

  16. Design of a Single Channel Modulated Wideband Converter for Wideband Spectrum Sensing: Theory, Architecture and Hardware Implementation.

    Science.gov (United States)

    Liu, Weisong; Huang, Zhitao; Wang, Xiang; Sun, Weichao

    2017-05-04

    In a cognitive radio sensor network (CRSN), wideband spectrum sensing devices, which aim to exploit temporarily vacant spectrum intervals as soon as possible, are of great importance. However, the challenge of increasingly high signal frequencies and wide bandwidths requires an extremely high sampling rate, which may exceed the front-end bandwidth of today's best analog-to-digital converters (ADCs). The recently proposed architecture called the modulated wideband converter (MWC) is an attractive analog compressed sensing technique that can greatly reduce the sampling rate. However, the MWC has high hardware complexity owing to its parallel channel structure, especially when the number of signals increases. In this paper, we propose a single-channel modulated wideband converter (SCMWC) scheme for spectrum sensing of band-limited wide-sense stationary (WSS) signals. With one antenna or sensor, this scheme saves not only sampling rate but also hardware complexity. We then present a new SCMWC-based single-node CR prototype system, on which the spectrum sensing algorithm was tested. Experiments on our hardware prototype show that the proposed architecture leads to successful spectrum sensing, with a total sampling rate as well as hardware size of only one MWC channel.

  17. Single-channel 40 Gbit/s digital coherent QAM quantum noise stream cipher transmission over 480 km.

    Science.gov (United States)

    Yoshida, Masato; Hirooka, Toshihiko; Kasai, Keisuke; Nakazawa, Masataka

    2016-01-11

    We demonstrate the first 40 Gbit/s single-channel polarization-multiplexed, 5 Gsymbol/s, 16 QAM quantum noise stream cipher (QNSC) transmission over 480 km by incorporating ASE quantum noise from EDFAs as well as the quantum shot noise of the coherent state with multiple photons for the random masking of data. By using a multi-bit encoded scheme and digital coherent transmission techniques, secure optical communication with a record data capacity and transmission distance has been successfully realized. In this system, the signal level received by Eve is hidden by both the amplitude and the phase noise. The highest number of masked signals, 7.5 × 10^4, was achieved by using a QAM scheme with FEC, which makes it possible to reduce the output power from the transmitter while maintaining an error-free condition for Bob. We have newly measured the noise distribution around the I and Q encrypted data and shown experimentally, with a data size as large as 2^25, that the noise has a Gaussian distribution with no correlations. This distribution is suitable for the random masking of data.

  18. An ultrasensitive squamous cell carcinoma antigen biosensing platform utilizing double-antibody single-channel amplification strategy.

    Science.gov (United States)

    Ren, Xiang; Wu, Dan; Wang, Yuhuan; Zhang, Yunhui; Fan, Dawei; Pang, Xuehui; Li, Yueyun; Du, Bin; Wei, Qin

    2015-10-15

    A novel electrochemical immunosensor was developed for ultrasensitive detection of squamous cell carcinoma antigen (SCCA), based on a double-antibody single-channel amplification strategy. For the first time, human immunoglobulin antibody (anti-HIgG) was used as the supporting framework to amplify the loading quantity of SCCA antibody (anti-SCCA). In this strategy, SCCA can be detected without using mesoporous nanomaterials to amplify the signal. In addition, Pd icosahedrons were used for the first time as the connector to immobilize the antibodies and enhance the sensitivity. In geometry, a sphere touches another shape at only a single point in the limiting case, which makes the Pd icosahedron an excellent candidate for the role of connector. Gold nanoparticle-decorated, mercapto-functionalized graphene sheets (Au@GS) were synthesized as the transducing materials. The fabricated immunosensor exhibited an excellent detection limit of 2.8 pg/mL and a wide linear range of 0.01-5 ng/mL. This kind of immunosensor could find potential application in clinical diagnosis. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Initial Results of Accelerated Stress Testing on Single-Channel and Multichannel Drivers: Solid-State Lighting Technology Area

    Energy Technology Data Exchange (ETDEWEB)

    None

    2018-02-28

    This report is the first in a series of studies on accelerated stress testing (AST) of drivers used for SSL luminaires, such as downlights, troffers, and streetlights. A representative group of two-stage commercial driver products was exposed to an AST environment consisting of 75°C and 75% relative humidity (7575). These drivers were a mix of single-channel drivers (i.e., a single output current for one LED primary) and multichannel drivers (i.e., separate output currents for multiple LED primaries). This AST environment was chosen because previous testing on downlights with integrated drivers demonstrated that 38% of the sample population failed in less than 2,500 hours of testing using this method. In addition to the AST test results, the performance of an SSL downlight product incorporating an integrated, multichannel driver during extended room temperature operational life (RTOL) testing is also reported. A battery of measurements was used to evaluate these products during accelerated testing, including full electrical characterization (i.e., power consumption, power factor [PF], total harmonic distortion [THD], and inrush current) and photometric characterization of external LED loads attached to the drivers (i.e., flicker performance and lumen maintenance).

  20. Single-channel EEG sleep stage classification based on a streamlined set of statistical features in wavelet domain.

    Science.gov (United States)

    da Silveira, Thiago L T; Kozakevicius, Alice J; Rodrigues, Cesar R

    2017-02-01

    The main objective of this study was to enhance the performance of sleep stage classification using single-channel electroencephalograms (EEGs), which are highly desirable for many emerging technologies, such as telemedicine and home care. The proposed method consists of decomposing EEGs by a discrete wavelet transform and computing the kurtosis, skewness and variance of its coefficients at selected levels. A random forest predictor is trained to classify each epoch into one of the Rechtschaffen and Kales' stages. By performing a comprehensive set of tests on 106,376 epochs available from the Physionet public database, it is demonstrated that the use of these three statistical moments enhances performance when compared to their application in the time domain. Furthermore, the chosen set of features has the advantage of exhibiting a stable classification performance for all scoring systems, i.e., from 2- to 6-state sleep stages. The stability of the feature set is confirmed with ReliefF tests, which show a performance reduction when any individual feature is removed, suggesting that this group of features cannot be further reduced. The accuracies and kappa coefficients are higher than 90% and 0.8, respectively, for all of the 2- to 6-state sleep stage classification cases.
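
    A minimal sketch of the feature pipeline described above (discrete wavelet decomposition of a single-channel EEG epoch, then kurtosis, skewness and variance of the coefficients at each level, fed to a random forest) is shown below on synthetic data. The wavelet family, decomposition depth, epoch length and labels are assumptions for illustration; the study's exact settings and the real Physionet recordings are not reproduced here.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier

def epoch_features(epoch, wavelet="db4", level=5):
    """Kurtosis, skewness and variance of the DWT coefficients at each level."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    feats = []
    for c in coeffs:                 # approximation plus detail coefficients
        feats += [kurtosis(c), skew(c), np.var(c)]
    return np.array(feats)

# Toy data: 200 epochs of 30 s at 100 Hz, with integer sleep-stage labels
rng = np.random.default_rng(1)
X = np.vstack([epoch_features(rng.standard_normal(3000)) for _ in range(200)])
y = rng.integers(0, 5, size=200)     # 5-state staging as an example

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:150], y[:150])
print("toy accuracy:", clf.score(X[150:], y[150:]))
```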

  1. User Guide for GoldSim Model to Calculate PA/CA Doses and Limits

    International Nuclear Information System (INIS)

    Smith, F.

    2016-01-01

    A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0, "Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site".

  2. User Guide for GoldSim Model to Calculate PA/CA Doses and Limits

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-10-31

    A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0 “Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site”.

  3. Investigation of a model to verify software for 3-D static force calculation

    OpenAIRE

    Takahashi, Norio; Nakata, Takayoshi; Morishige, H.

    1994-01-01

    Requirements for a model to verify software for 3-D static force calculation are examined, and a 3-D model for static force calculation is proposed. Some factors affecting the analysis and experiments are investigated in order to obtain accurate and reproducible results.

  4. A model for calculating expected performance of the Apollo unified S-band (USB) communication system

    Science.gov (United States)

    Schroeder, N. W.

    1971-01-01

    A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.

  5. Cost calculation model concerning small-scale production of chips and split firewood

    International Nuclear Information System (INIS)

    Ryynaenen, S.; Naett, H.; Valkonen, J.

    1995-01-01

    The TTS-Institute's Forestry Department has developed a computer-based cost calculation model for the production of wood chips and split firewood. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. The calculation model eases and speeds up the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit costs of the system as a whole. The undertaking was composed of the following parts: clarification and modification of productivity bases for application in the model as mathematical models, clarification of machine and device cost bases, design of the structure and functions of the calculation model, construction and testing of the model's 0-version, model calculations concerning typical chains, review of calculation bases, and charting of development needs focusing on the model. The calculation model was developed to serve research needs, but with further development it could be useful as a tool in forestry and agricultural extension work, related schools and colleges, and in the hands of firewood producers. (author)

  6. Inverse calculation of biochemical oxygen demand models based on time domain for the tidal Foshan River.

    Science.gov (United States)

    Er, Li; Xiangying, Zeng

    2014-01-01

    To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on the time domain are applied to the longitudinal dispersion coefficient (E(x)) and the BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The derivatives of the inverse calculation have been established separately for the different flow directions in the tidal river. The results of this paper indicate that the values of BOD calculated with the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, the models are more sensitive to K(x) than to E(x), and different data sets of E(x) and K(x) hardly affect the precision of the models.

  7. Reference Models of Information Systems Constructed with the use of Technologies of Cloud Calculations

    Directory of Open Access Journals (Sweden)

    Darya Sergeevna Simonenkova

    2013-09-01

    Full Text Available The subject of the research is the analysis of various models of information systems constructed with the use of cloud computing technologies. The analysis of these models is required for constructing a new reference model, which will then be used to develop a security threat model.

  8. Non-Stationary Single-Channel Queuing System Features Research in Context of Number of Served Queries

    Directory of Open Access Journals (Sweden)

    Porshnev Sergey

    2017-01-01

    Full Text Available This work is devoted to the study of a mathematical model of a non-stationary queuing system (NQS). The arrival rate λ(t) in the studied NQS is similar to the rate observed in practice in a real access control system for mass events. The dependence of the number of served requests on time was calculated. It is proven that the ratio of the number of requests served by the beginning of the event to the total number of served requests is described by a deterministic function that depends on the average service rate μ̄ and the maximum value of the arrival rate function λ(t).
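
    As an illustration of the kind of system the abstract describes, the sketch below simulates a single-channel queue with a time-varying arrival rate λ(t) (a linear ramp standing in for the crowd build-up before an event) and a constant service rate, using thinning to generate the non-stationary arrivals, and counts how many requests have been served by the start of the event. The rates and the shape of λ(t) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def lam(t, t_event=3600.0, peak=2.0):
    """Illustrative arrival rate: ramps up linearly until the event starts."""
    return peak * min(t / t_event, 1.0)

def simulate(t_end=3600.0, mu=1.5, lam_max=2.0):
    """Single-server queue with non-stationary Poisson arrivals (thinning method)."""
    t, server_free_at, served = 0.0, 0.0, []
    while t < t_end:
        t += rng.exponential(1.0 / lam_max)                 # candidate arrival
        if rng.random() < lam(t) / lam_max and t < t_end:   # accept with prob. lam(t)/lam_max
            start = max(t, server_free_at)
            server_free_at = start + rng.exponential(1.0 / mu)
            served.append(server_free_at)                   # completion time
    return np.array(served)

done = simulate()
print("served by event start:", np.sum(done <= 3600.0), "of", done.size)
```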

  9. Efficacy of home single-channel nasal pressure for recommending continuous positive airway pressure treatment in sleep apnea.

    Science.gov (United States)

    Masa, Juan F; Duran-Cantolla, Joaquin; Capote, Francisco; Cabello, Marta; Abad, Jorge; Garcia-Rio, Francisco; Ferrer, Antoni; Fortuna, Ana M; Gonzalez-Mangado, Nicolas; de la Peña, Monica; Aizpuru, Felipe; Barbe, Ferran; Montserrat, Jose M

    2015-01-01

    Unlike other prevalent diseases, obstructive sleep apnea (OSA) has no simple tool for diagnosis and therapeutic decision-making in primary healthcare. Home single-channel nasal pressure (HNP) may be an alternative to polysomnography for diagnosis, but its use in therapeutic decisions has yet to be explored. We aimed to ascertain whether an automatically scored HNP apnea-hypopnea index (AHI), used alone to recommend continuous positive airway pressure (CPAP) treatment, agrees with decisions made by a specialist using polysomnography and several clinical variables. Patients referred by primary care physicians for OSA suspicion underwent randomized polysomnography and HNP. We analyzed the total sample and the more and less symptomatic subgroups using Bland-Altman plots to explore AHI agreement, receiver operating characteristic curves to establish area under the curve (AUC) measurements for CPAP recommendation, and therapeutic decision efficacy for several HNP AHI cutoff points. Of the 787 randomized patients, 35 (4%) were lost, 378 (48%) formed the more symptomatic and 374 (48%) the less symptomatic subgroups. AHI bias and agreement limits were 5.8 ± 39.6 for the total sample, 5.3 ± 38.7 for the more symptomatic, and 6 ± 40.2 for the less symptomatic subgroups. The AUC were 0.826 for the total sample, 0.903 for the more symptomatic, and 0.772 for the less symptomatic subgroups. In the more symptomatic subgroup, 70% of patients could be correctly treated with CPAP. Automatic HNP scoring can correctly recommend CPAP treatment in most of the more symptomatic patients with OSA suspicion. Our results suggest that this device may be an interesting tool in initial OSA management for primary care physicians, although future studies in a primary care setting are necessary. Clinicaltrial.gov identifier: NCT01347398. © 2014 Associated Professional Sleep Societies, LLC.

  10. Epileptic seizure classifications of single-channel scalp EEG data using wavelet-based features and SVM.

    Science.gov (United States)

    Janjarasjitt, Suparerk

    2017-10-01

    In this study, wavelet-based features of single-channel scalp EEGs recorded from subjects with intractable seizures are examined for epileptic seizure classification. The wavelet-based features extracted from scalp EEGs are simply based on the detail and approximation coefficients obtained from the discrete wavelet transform. A support vector machine (SVM), one of the most commonly used classifiers, is applied to classify vectors of wavelet-based features of scalp EEGs into either the seizure or the non-seizure class. In patient-based epileptic seizure classification, the training data set used to train the SVM classifiers is composed of wavelet-based features of scalp EEGs corresponding to the first epileptic seizure event. Overall, excellent performance on patient-dependent epileptic seizure classification is obtained, with an average accuracy, sensitivity, and specificity of, respectively, 0.9687, 0.7299, and 0.9813. A vector composed of two wavelet-based features of scalp EEGs provides the best performance on patient-dependent epileptic seizure classification in most cases, i.e., 19 cases out of 24. The wavelet-based features corresponding to the 32-64, 8-16, and 4-8 Hz subbands of scalp EEGs are the features most frequently providing the best performance on patient-dependent classification. Furthermore, the performance on both patient-dependent and patient-independent epileptic seizure classification is also validated using tenfold cross-validation. From the patient-independent epileptic seizure classification validated using tenfold cross-validation, it is shown that the best classification performance is achieved using the wavelet-based features corresponding to the 64-128 and 4-8 Hz subbands of scalp EEGs.

  11. Extraction of fetal ECG signal by an improved method using extended Kalman smoother framework from single channel abdominal ECG signal.

    Science.gov (United States)

    Panigrahy, D; Sahu, P K

    2017-03-01

    This paper proposes a five-stage methodology to extract the fetal electrocardiogram (FECG) from a single-channel abdominal ECG using a differential evolution (DE) algorithm, an extended Kalman smoother (EKS) and an adaptive neuro-fuzzy inference system (ANFIS) framework. The heart rate of the fetus can easily be detected after estimation of the fetal ECG signal. The abdominal ECG signal contains the fetal ECG signal, the maternal ECG component, and noise. To estimate the fetal ECG signal from the abdominal ECG signal, removal of the noise and the maternal ECG component present in it is necessary. The pre-processing stage is used to remove the noise from the abdominal ECG signal. The EKS framework is used to estimate the maternal ECG signal from the abdominal ECG signal. The optimized parameters of the maternal ECG components are required to develop the state and measurement equations of the EKS framework. These optimized maternal ECG parameters are selected by the differential evolution algorithm. The relationship between the maternal ECG signal and the maternal ECG component available in the abdominal ECG signal is nonlinear. To estimate the actual maternal ECG component present in the abdominal ECG signal and to capture this nonlinear relationship, the ANFIS is used. Inputs to the ANFIS framework are the output of the EKS and the pre-processed abdominal ECG signal. The fetal ECG signal is computed by subtracting the output of the ANFIS from the pre-processed abdominal ECG signal. The non-invasive fetal ECG database and set A of the 2013 PhysioNet/Computing in Cardiology Challenge database (PCDB) are used for validation of the proposed methodology. The proposed methodology shows a sensitivity of 94.21%, accuracy of 90.66%, and positive predictive value of 96.05% on the non-invasive fetal ECG database. The proposed methodology also shows a sensitivity of 91.47%, accuracy of 84.89%, and positive predictive value of 92.18% on set A of the PCDB.

  12. Dynamic Phenylalanine Clamp Interactions Define Single-Channel Polypeptide Translocation through the Anthrax Toxin Protective Antigen Channel.

    Science.gov (United States)

    Ghosal, Koyel; Colby, Jennifer M; Das, Debasis; Joy, Stephen T; Arora, Paramjit S; Krantz, Bryan A

    2017-03-24

    Anthrax toxin is an intracellularly acting toxin for which sufficient detail is known about the structure of its channel, allowing for molecular investigations of translocation. The toxin is composed of three proteins, protective antigen (PA), lethal factor (LF), and edema factor (EF). The toxin's translocon, PA, translocates the large enzymes, LF and EF, across the endosomal membrane into the host cell's cytosol. Polypeptide clamps located throughout the PA channel catalyze the translocation of LF and EF. Here, we show that the central peptide clamp, the ϕ clamp, is a dynamic site that governs the overall peptide translocation pathway. Single-channel translocations of a 10-residue, guest-host peptide revealed that there were four states when the peptide interacted with the channel. Two of the states had intermediate conductances of 10% and 50% of full conductance. With aromatic guest-host peptides, the 50% conducting intermediate oscillated with the fully blocked state. A Trp guest-host peptide was studied by manipulating its stereochemistry and by prenucleating helix formation with a covalent linkage in place of a hydrogen bond (a hydrogen-bond surrogate, HBS). The Trp peptide synthesized with L-amino acids translocated more efficiently than peptides synthesized with D- or alternating D,L-amino acids. The HBS-stapled Trp peptide exhibited signs of steric hindrance and difficulty translocating. However, when mutant ϕ clamp (F427A) channels were tested, the HBS peptide translocated normally. Overall, peptide translocation is defined by dynamic interactions between the peptide and the ϕ clamp. These dynamics require conformational flexibility, such that the peptide productively forms both extended-chain and helical states during translocation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Implementation of the neutronics model of HEXTRAN/HEXBU-3D into APROS for WWER calculations

    International Nuclear Information System (INIS)

    Rintala, J.

    2008-01-01

    A new three-dimensional nodal model for neutronics calculation is currently being implemented into APROS - Advanced PROcess Simulation environment - to meet the increasing accuracy requirements. The new model is based on the advanced nodal code HEXTRAN and its static version HEXBU-3D by VTT, Technical Research Centre of Finland. Currently the new APROS is undergoing a testing programme; later a systematic validation will be performed. In the first phase, the goal is to obtain a fully validated model for VVER-440 calculations. Thus, all the current test calculations are performed using the Loviisa NPP's VVER-440 model of APROS. In the future, the model is planned to be applied to calculations of VVER-1000 type reactors as well as to rectangular fuel geometry. The paper first outlines the general aspects of the method, and then the current status of the implementation. Because the model is identical to those of HEXTRAN and HEXBU-3D, the results of the test calculations are compared to the results of those codes. In the paper, results of two static test calculations are shown. The model already works well in static analyses. Only minor problems with the control assemblies of the VVER-440 type reactor still exist, but the reasons are known and will be corrected in the near future. The dynamic characteristics of the model have so far been tested only by some empirical tests. (author)

  14. Calculation of DC Arc Plasma Torch Voltage-Current Characteristics Based on Steenbeck Model

    International Nuclear Information System (INIS)

    Gnedenko, V.G.; Ivanov, A.A.; Pereslavtsev, A.V.; Tresviatsky, S.S.

    2006-01-01

    The work is devoted to the problem of determining plasma torch parameters and power source parameters (working voltage and current of the plasma torch) at the pre-design stage. A sequence for calculating the voltage-current characteristics of a DC arc plasma torch is proposed. It is shown that the simple Steenbeck model of an arc discharge in a cylindrical channel makes it possible to carry out this calculation. The results of the calculation are confirmed by experiments.

  15. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros into the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a database. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, so new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the database. (Author) 4 refs
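
    For orientation, one widely used 210Pb dating model of the kind such spreadsheets automate is the constant rate of supply (CRS) model, in which the age of a layer follows from the inventory of unsupported 210Pb remaining below it: t(z) = (1/λ)·ln(A(0)/A(z)). The sketch below is a generic CRS implementation on made-up data; it is not the macros described in the report, and the profile values are purely illustrative.

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3      # 210Pb decay constant, 1/yr

# Made-up profile: layer depths (cm), unsupported 210Pb activity (Bq/kg),
# and dry mass per unit area of each layer (g/cm^2)
depth = np.array([1, 2, 3, 4, 5, 6, 8, 10])
activity = np.array([220, 180, 150, 110, 80, 55, 30, 12.0])
dry_mass = np.full_like(activity, 0.4)

inventory = activity * dry_mass                       # Bq/cm^2 contributed by each layer
cum_below = np.cumsum(inventory[::-1])[::-1]          # inventory from each depth downwards
total = cum_below[0]

# CRS age: t(z) = (1/lambda) * ln( A_total / A_below(z) )
age = np.log(total / cum_below) / LAMBDA_PB210
for z, t in zip(depth, age):
    print(f"depth {z:4.1f} cm  ->  age {t:6.1f} yr")
```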

  16. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2017-08-01

    Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by the subdomain technique. • The magnetic scalar potential on the rotor surface is modeled as a trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of the rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is therefore helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.

  17. Calculation Method of Kinetic Constants for the Mathematical Model Peat Pyrolysis

    Directory of Open Access Journals (Sweden)

    Plakhova Tatyana

    2014-01-01

    Full Text Available The relevance of this work lies in the need to simplify the calculation of kinetic constants for the mathematical model of peat pyrolysis. Transformations of the Arrhenius law formula are carried out, and the degree of conversion is expressed in terms of the mass change of the sample. The obtained formulas make it possible to calculate the kinetic constants for any type of solid organic fuel.
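
    The usual route from thermogravimetric mass-loss data to Arrhenius constants is to express the degree of conversion through the sample mass, α = (m0 − m)/(m0 − m∞), form rate constants, and extract the activation energy and pre-exponential factor from a linear fit of ln k against 1/T. The sketch below performs that fit on synthetic rate data; the kinetic values and temperature range are assumptions for illustration, not results for peat.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Synthetic "measured" rate constants at several temperatures,
# generated from assumed E = 75 kJ/mol and A = 1e6 1/s plus noise
E_true, A_true = 75e3, 1e6
T = np.linspace(500, 800, 8)                      # K
rng = np.random.default_rng(0)
k_meas = A_true * np.exp(-E_true / (R * T)) * rng.normal(1.0, 0.02, T.size)

# Arrhenius linearization: ln k = ln A - E/(R*T)
slope, intercept = np.polyfit(1.0 / T, np.log(k_meas), 1)
E_fit = -slope * R
A_fit = np.exp(intercept)
print(f"E = {E_fit/1e3:.1f} kJ/mol, A = {A_fit:.2e} 1/s")
```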

  18. Modeling for Dose Rate Calculation of the External Exposure to Gamma Emitters in Soil

    International Nuclear Information System (INIS)

    Allam, K. A.; El-Mongy, S. A.; El-Tahawy, M. S.; Mohsen, M. A.

    2004-01-01

    Based on the model proposed and developed in the Ph.D. thesis of the first author of this work, the dose rate conversion factors (absorbed dose rate in air per unit specific activity of soil, in nGy·h⁻¹ per Bq·kg⁻¹) are calculated 1 m above the ground for photon emitters of natural radionuclides uniformly distributed in the soil. This new and simple dose rate calculation software was used for calculation of the dose rate in air 1 m above the ground, and the results were compared with those obtained by five different groups. Although the developed model is extremely simple, the results of calculations based on this model show excellent agreement with those obtained by the above-mentioned models, especially the one adopted by UNSCEAR. (authors)
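
    In practice this kind of calculation reduces to multiplying the specific activities of the natural radionuclides by dose rate conversion factors of the sort derived in the thesis model. The sketch below uses UNSCEAR-style coefficients for the 226Ra and 232Th series and for 40K as illustrative values; the factors obtained from the model described above may differ slightly.

```python
# Absorbed dose rate in air at 1 m above ground from natural radionuclides
# uniformly distributed in soil. Conversion factors are UNSCEAR-style
# illustrative values (nGy/h per Bq/kg); the model's own factors may differ.
DCF = {"Ra-226": 0.462, "Th-232": 0.604, "K-40": 0.0417}

def dose_rate_nGy_per_h(activity_Bq_per_kg):
    """Sum the contribution of each radionuclide (series) to the air kerma rate."""
    return sum(DCF[nuc] * a for nuc, a in activity_Bq_per_kg.items())

# Example soil: roughly world-average activity concentrations (Bq/kg)
soil = {"Ra-226": 35.0, "Th-232": 30.0, "K-40": 400.0}
print(f"{dose_rate_nGy_per_h(soil):.1f} nGy/h")   # ~51 nGy/h
```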

  19. Model to Calculate the Effectiveness of an Airborne Jammer on Analog Communications

    National Research Council Canada - National Science Library

    Vingson, Narciso A., Jr; Muhammad, Vaqar

    2005-01-01

    The objective of this study is to develop a statistical model to calculate the effectiveness of an airborne jammer on analog communication and broadcast receivers, such as AM and FM Broadcast Radio...

  20. On thermal vibration effects in diffusion model calculations of blocking dips

    International Nuclear Information System (INIS)

    Fuschini, E.; Ugozzoni, A.

    1983-01-01

    In the framework of the diffusion model, a method for calculating blocking dips is suggested that takes into account thermal vibrations of the crystal lattice. Results of calculations of the diffusion factor and the transverse energy distribution, taking into account scattering of the channeled particles by thermal vibrations of the lattice nuclei, are presented. Calculations are performed for α-particles with an energy of 2.12 MeV at 300 K scattered by an Al crystal. It is shown that calculations performed according to the above method prove the necessity of taking into account the effects of multiple scattering under blocking conditions.

  1. Using Single-Channel Blind Deconvolution to Choose the Most Realistic Pharmacokinetic Model in Dynamic Contrast-Enhanced MR Imaging

    Czech Academy of Sciences Publication Activity Database

    Taxt, T.; Pavlin, T.; Reed, R. K.; Curry, F. R.; Andersen, E.; Jiřík, Radovan

    2015-01-01

    Roč. 46, č. 6 (2015), s. 643-659 ISSN 0937-9347 R&D Projects: GA ČR GAP102/12/2380; GA MŠk(CZ) LO1212 Institutional support: RVO:68081731 Keywords : blood-flow * magnetic resonance * kinetic parameters Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.884, year: 2015

  2. A real-time integrator of storage-area contents for SA 40B or DIDAC 800 analyzers. Use in the digital single-channel mode

    International Nuclear Information System (INIS)

    Rigaudiere, Roger; Daburon, M.-L.

    1976-09-01

    An apparatus was developed in order to sum up, during counting, the channel contents from several storage areas of SA 40 B or DIDAC 800 multichannel analyzers. The number of pulses stored in the energy bands of interest to the operator is thus known and, if necessary, subsequent operation can be modified accordingly. Coupled with an autonomous amplitude encoder, this apparatus can be operated in the digital single-channel mode [fr

  3. Declination Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

  4. Calculation of delayed-neutron energy spectra in a QRPA-Hauser-Feshbach model

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Los Alamos National Laboratory; Moller, Peter [Los Alamos National Laboratory; Wilson, William B [Los Alamos National Laboratory

    2008-01-01

    Theoretical β-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emissions from an excited daughter nucleus after β decay to the granddaughter residual are more accurately calculated than in previous evaluations, including all the microscopic nuclear structure information, such as a Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with those evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.

  5. Shell model calculations for the mass 18 nuclei in the sd-shell

    International Nuclear Information System (INIS)

    Hamoudi, A.

    1997-01-01

    A simple effective nucleon-nucleon interaction for shell model calculations in the sd-shell is derived from the Reid soft-core potential folded with two-body correlation functions which take account of the strong short-range repulsion and the large tensor component in the Reid force. Calculations of binding energies and low-lying spectra are performed for the mass A=18, T=0 and 1 nuclei using this interaction. The results of these shell model calculations show reasonable agreement with experiment.

  6. Nuclear model calculations below 200 MeV and evaluation prospects

    International Nuclear Information System (INIS)

    Koning, A.J.; Bersillon, O.; Delaroche, J.P.

    1994-08-01

    A computational method is outlined for the quantum-mechanical prediction of the whole double-differential energy spectrum. Cross sections as calculated with the code system MINGUS are presented for (n,xn) and (p,xn) reactions on 208Pb and 209Bi. Our approach involves a dispersive optical model, comprehensive discrete state calculations, renormalized particle-hole state densities, a combined MSD/MSC model for pre-equilibrium reactions and compound nucleus calculations. The relation with the evaluation of nuclear data files is discussed. (orig.)

  7. Thermal-hydraulic feedback model to calculate the neutronic cross-section in PWR reactions

    International Nuclear Information System (INIS)

    Santiago, Daniela Maiolino Norberto

    2011-01-01

    In neutronic codes, it is important to have a thermal-hydraulic feedback module. This module calculates the thermal-hydraulic feedback of the fuel, which feeds the neutronic cross sections. In the neutronic code developed at PEN/COPPE/UFRJ, the fuel temperature is obtained through an empirical model. This work presents a physical model to calculate this temperature. We used the finite volume technique to discretize the temperature distribution equation, while the calculation of the moderator heat transfer coefficient was carried out using the ASME tables, incorporating some of their routines into our program. The model allows one to calculate an average radial temperature per node, since the thermal-hydraulic feedback must follow the conditions imposed by the neutronic code. The results were compared with the empirical model. Our results show that for the fuel elements near the periphery, the empirical model overestimates the fuel temperature compared to our model, which may indicate that the physical model is more appropriate for calculating the thermal-hydraulic feedback temperatures. The proposed model was validated with the neutronic simulator developed at PEN/COPPE/UFRJ for the analysis of PWR reactors. (author)
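
    To illustrate the kind of calculation such a feedback module performs, the sketch below solves steady one-dimensional radial heat conduction in a fuel pellet with uniform volumetric heat generation using a simple finite-volume discretization, and returns the volume-averaged pellet temperature of the sort fed back to the cross-section model. The geometry, power density and conductivity are illustrative assumptions, gap and cladding resistances are ignored, and this is not the discretization used in the work above.

```python
import numpy as np

def pellet_temperature(q_vol=3.0e8, k=3.0, R=4.1e-3, T_surf=600.0, N=50):
    """Steady 1D radial heat conduction in a fuel pellet (finite-volume discretization).

    q_vol: volumetric heat generation (W/m^3), k: conductivity (W/m/K),
    R: pellet radius (m), T_surf: imposed pellet surface temperature (K).
    Returns cell-centre temperatures and the volume-averaged temperature.
    """
    dr = R / N
    r_face = np.arange(N + 1) * dr            # cell faces, r_face[0] = 0 (symmetry axis)
    A = np.zeros((N, N))
    b = -q_vol * 0.5 * (r_face[1:]**2 - r_face[:-1]**2)   # heat source integrated over each cell

    for i in range(N):
        if i > 0:                                          # conductance through inner face
            w = k * r_face[i] / dr
            A[i, i - 1] += w
            A[i, i] -= w
        if i < N - 1:                                      # conductance through outer face
            e = k * r_face[i + 1] / dr
            A[i, i + 1] += e
            A[i, i] -= e
        else:                                              # outer face held at T_surf
            e = k * r_face[N] / (0.5 * dr)
            A[i, i] -= e
            b[i] -= e * T_surf

    T = np.linalg.solve(A, b)
    T_avg = np.sum(T * (r_face[1:]**2 - r_face[:-1]**2)) / R**2
    return T, T_avg

T, T_avg = pellet_temperature()
# Analytic check: centre temperature = T_surf + q_vol * R^2 / (4 k), about 1020 K here
print(f"centre {T[0]:.0f} K, outermost cell {T[-1]:.0f} K, volume average {T_avg:.0f} K")
```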

  8. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco

    2017-01-01

    in FDS version 6.0, a widely used fire-specific, open source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with the one given by the use of multiple processor calculation. This was done using a 1200m long tunnel with a rectangular cross...... processor calculation (97% faster when using a single mesh and multiscale modelling; only 46% faster when using the full tunnel and multiple meshes). In summary, it was found that multiscale modelling with FDS v.6.0 is feasible, and the combination of multiple meshes and multiscale modelling was established...

  9. The Risoe model for calculating the consequences of the release of radioactive material to the atmosphere

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.

    1980-07-01

    A brief description is given of the model used at Risoe for calculating the consequences of releases of radioactive material to the atmosphere. The model is based on the Gaussian plume model, and it provides possibilities for calculation of: doses to individuals, collective doses, contamination of the ground, probability distribution of doses, and the consequences of doses for given dose-risk relationships. The model is implemented as a computer program PLUCON2, written in ALGOL for the Burroughs B6700 computer at Risoe. A short description of PLUCON2 is given. (author)
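
    The core of such a model is the Gaussian plume expression for the ground-level air concentration downwind of a continuous release, with reflection at the ground. The sketch below evaluates it for a single stability class using generic power-law dispersion parameters; the σy/σz coefficients and release data are illustrative assumptions and not the PLUCON2 parameterization, which additionally handles dose conversion and probability distributions.

```python
import numpy as np

def gaussian_plume_conc(x, y, Q, u, H, ay=0.08, by=0.90, az=0.06, bz=0.92):
    """Ground-level concentration (Bq/m^3) from a continuous point release.

    x, y : downwind and crosswind distance (m); Q : release rate (Bq/s);
    u : wind speed (m/s); H : effective release height (m).
    Sigma parameters follow a generic power law sigma = a * x**b (assumed values).
    """
    sig_y = ay * x**by
    sig_z = az * x**bz
    return (Q / (2 * np.pi * u * sig_y * sig_z)
            * np.exp(-y**2 / (2 * sig_y**2))
            * 2 * np.exp(-H**2 / (2 * sig_z**2)))   # ground reflection doubles the z = 0 term

# Example: 1e9 Bq/s released at 50 m height, 5 m/s wind, receptor 1 km downwind on the axis
print(f"{gaussian_plume_conc(1000.0, 0.0, 1e9, 5.0, 50.0):.3e} Bq/m^3")
```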

  10. Bilinear slack span calculation model. Slack span calculations for high-temperature cables; Bilineares Berechnungsmodell fuer Durchhangberechnungen. Durchhangberechnungen bei Hochtemperaturleitern

    Energy Technology Data Exchange (ETDEWEB)

    Scheel, Joerg; Dib, Ramzi [Fachhochschule Giessen-Friedberg, Friedberg (Germany); Sassmannshausen, Achim [DB Energie GmbH, Frankfurt (Main) (Germany). Arbeitsgebiet Bahnstromleitungen Energieerzeugungs- und Uebertragungssysteme; Riedl, Markus [Eon Netz GmbH, Bayreuth (Germany). Systemtechnik Leitungen

    2010-12-13

    Increasingly, high-temperature cables are used in high-voltage grids. Beyond a given temperature level, their slack span cannot be calculated accurately by conventional simple linear methods. The contribution investigates the behaviour of composite cables at high operating temperatures and its influence on the slack span, and presents a more accurate bilinear calculation method. (orig.)
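
    As background for what such a calculation produces, the sketch below computes mid-span sag from the conductor length using the parabolic approximation, with the thermal elongation described by a bilinear expansion coefficient that changes at an assumed knee-point temperature (above which, for a composite conductor, the core alone governs the expansion). The elastic change of tension with temperature is ignored and all conductor data are illustrative assumptions, so this only mimics the idea of a bilinear model, not the method of the paper.

```python
import math

def sag_from_length(span, length):
    """Parabolic relation between conductor length and mid-span sag:
    length ~ span + 8*sag^2/(3*span)  =>  sag = sqrt(3*span*(length - span)/8)."""
    return math.sqrt(3.0 * span * (length - span) / 8.0)

def conductor_length(length_ref, T, T_ref=20.0, T_knee=80.0,
                     alpha_low=19e-6, alpha_high=12e-6):
    """Thermal elongation with a bilinear expansion coefficient: above the (assumed)
    knee temperature the composite conductor expands at the lower rate of its core."""
    if T <= T_knee:
        strain = alpha_low * (T - T_ref)
    else:
        strain = alpha_low * (T_knee - T_ref) + alpha_high * (T - T_knee)
    return length_ref * (1.0 + strain)

span = 300.0                                  # m
L_ref = span + 8 * 2.5**2 / (3 * span)        # reference length giving 2.5 m sag at 20 degC
for T in (20, 60, 80, 120, 180):
    sag = sag_from_length(span, conductor_length(L_ref, T))
    print(f"{T:4d} degC  sag = {sag:5.2f} m")
```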

  11. A heterogeneous model for burnup calculation in high temperature gas-cooled reactors

    International Nuclear Information System (INIS)

    Perfetti, C. M.; Angahie, S.; Baxter, A.; Ellis, C.

    2008-01-01

    A high resolution MCNPX model is developed to simulate the nuclear design characteristics and fuel cycle features of High Temperature Gas-Cooled Reactors. Contrary to the conventional approach, fuel regions containing TRISO particles are not homogenized in the MCNPX model. A cube corner distribution approximation is used to directly model randomly dispersed TRISO fuel particles in a graphite matrix. The universe filling technique is used to cover the entire range of fuel particles in the core. The heterogeneous MCNPX model is applied to simulate and analyze the complete fuel cycle of the General Atomics Plutonium-Consumption Modular Helium Reactor (PC-MHR). The PC-MHR design is a variation of the General Atomics MHR design and is intended for the consumption, or burning, of excess Russian weapons plutonium. The MCNPX burnup calculation of the PC-MHR includes the simulation of a 260 effective full-power day fuel cycle at 600 MWt. Results of the MCNPX calculations suggest that, during the 260 effective full-power day cycle, a 40% reduction in the whole-core Pu-239 inventory could be achieved. Results of the heterogeneous MCNPX burnup calculations for the PC-MHR are compared with deterministically calculated values obtained from the DIF3D code. For the 260 effective full-power day cycle, the difference in the Pu-239 mass reduction calculated using the heterogeneous MCNPX and homogeneous DIF3D models is 6%. The differences between the MCNPX and DIF3D results for the higher actinides are mostly larger than 6%. (authors)

  12. Formation of decontamination cost calculation model for severe accident consequence assessment

    International Nuclear Information System (INIS)

    Silva, Kampanart; Promping, Jiraporn; Okamoto, Koji; Ishiwatari, Yuki

    2014-01-01

    In previous studies, the authors developed an index “cost per severe accident” to perform a severe accident consequence assessment that can cover various kinds of accident consequences, namely health effects and economic, social and environmental impacts. Though decontamination cost was identified as a major component, it was taken into account using simple and conservative assumptions, which made further discussion difficult. The decontamination cost calculation model was therefore reconsidered. 99 parameters were selected to take into account all decontamination-related issues, and the decontamination cost calculation model was formed. The distributions of all parameters were determined. A sensitivity analysis using the Morris method was performed in order to identify important parameters that have a large influence on the cost per severe accident and a large extent of interaction with other parameters. We identified 25 important parameters and fixed most of the negligible parameters at the medians of their distributions to form a simplified decontamination cost calculation model. Calculations of the cost per severe accident with the full model (all parameters distributed) and with the simplified model were performed and compared. The differences in the cost per severe accident and its components were not significant, which ensures the validity of the simplified model. The simplified model is used to perform a full-scope calculation of the cost per severe accident, and the results are compared with the previous study. The decontamination cost increased its importance significantly. (author)

  13. Characterization of model errors in the calculation of tangent heights for atmospheric infrared limb measurements

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2014-12-01

    Full Text Available We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent height errors of up to ±200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.

  14. Power Loss Calculation and Thermal Modelling for a Three Phase Inverter Drive System

    Directory of Open Access Journals (Sweden)

    Z. Zhou

    2005-12-01

    Full Text Available Power loss calculation and thermal modelling for a three-phase inverter power system are presented in this paper. Aiming at long real-time thermal simulations, an accurate average power loss calculation based on a PWM reconstruction technique is proposed. For carrying out the thermal simulation, a compact thermal model for a three-phase inverter power module is built. The thermal interference of adjacent heat sources is analysed using 3D thermal simulation. The proposed model can provide accurate power losses with a large simulation time-step and is suitable for long real-time thermal simulation of a three-phase inverter drive system for hybrid vehicle applications.
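
    Average losses in such models are typically split into conduction and switching terms per device. The sketch below evaluates the commonly used averaged expressions for an IGBT and its freewheeling diode in a sinusoidally modulated leg, using datasheet-style parameters; these are the standard analytical averages rather than the PWM reconstruction technique of the paper, and all numerical values are illustrative assumptions.

```python
import math

def igbt_conduction(Ipk, m, cosphi, Vce0=0.9, rce=1.6e-3):
    """Average IGBT conduction loss for sinusoidal PWM (standard averaged formula)."""
    return (Vce0 * Ipk * (1 / (2 * math.pi) + m * cosphi / 8)
            + rce * Ipk**2 * (1 / 8 + m * cosphi / (3 * math.pi)))

def diode_conduction(Ipk, m, cosphi, Vf0=1.0, rd=1.2e-3):
    """Average freewheeling-diode conduction loss for sinusoidal PWM."""
    return (Vf0 * Ipk * (1 / (2 * math.pi) - m * cosphi / 8)
            + rd * Ipk**2 * (1 / 8 - m * cosphi / (3 * math.pi)))

def switching(Ipk, Vdc, fsw, Esw, Iref=300.0, Vref=600.0):
    """Average switching loss, scaling the datasheet energy linearly with current and voltage."""
    return fsw * Esw * (Ipk / (math.pi * Iref)) * (Vdc / Vref)

# Illustrative operating point: 200 A peak, m = 0.9, cos(phi) = 0.85, 400 V DC link, 10 kHz
Ipk, m, cosphi, Vdc, fsw = 200.0, 0.9, 0.85, 400.0, 10e3
P_igbt = igbt_conduction(Ipk, m, cosphi) + switching(Ipk, Vdc, fsw, Esw=25e-3)
P_diode = diode_conduction(Ipk, m, cosphi) + switching(Ipk, Vdc, fsw, Esw=8e-3)
print(f"per IGBT: {P_igbt:.0f} W, per diode: {P_diode:.0f} W, module: {6*(P_igbt+P_diode):.0f} W")
```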

  15. 3D Printing of Molecular Models with Calculated Geometries and p Orbital Isosurfaces

    Science.gov (United States)

    Carroll, Felix A.; Blauch, David N.

    2017-01-01

    3D printing was used to prepare models of the calculated geometries of unsaturated organic structures. Incorporation of p orbital isosurfaces into the models enables students in introductory organic chemistry courses to have hands-on experience with the concept of orbital alignment in strained and unstrained p systems.

  16. A model for bootstrap current calculations with bounce averaged Fokker-Planck codes

    NARCIS (Netherlands)

    Westerhof, E.; Peeters, A.G.

    1996-01-01

    A model is presented that allows the calculation of the neoclassical bootstrap current originating from the radial electron density and pressure gradients in standard (2+1)D bounce averaged Fokker-Planck codes. The model leads to an electron momentum source located almost exclusively at the

  17. Development of a risk-based mine closure cost calculation model

    CSIR Research Space (South Africa)

    Du

    2006-06-01

    Full Text Available The study summarised in this paper focused on expanding existing South African mine closure cost calculation models to provide a new model that incorporates risks, which could have an effect on the closure costs during the life cycle of the mine...

  18. On the applicability of nearly free electron model for resistivity calculations in liquid metals

    International Nuclear Information System (INIS)

    Gorecki, J.; Popielawski, J.

    1982-09-01

    Calculations of resistivity based on the nearly free electron model are presented for many noble and transition liquid metals. The triple ion correlation is included in the resistivity formula according to the SCQCA approximation. Two different methods for describing the conduction band are used. The problem of the applicability of the nearly free electron model to different metals is discussed. (author)

  19. Diameter structure modeling and the calculation of plantation volume of black poplar clones

    Directory of Open Access Journals (Sweden)

    Andrašev Siniša

    2004-01-01

    Full Text Available A method of diameter structure modeling was applied in the calculation of the plantation (stand) volume of two black poplar clones in the section Aigeiros (Duby): 618 (Lux) and S1-8. Diameter structure modeling by the Weibull function makes it possible to calculate the plantation volume by the volume line. Based on the comparison of the proposed method with the existing methods, the obtained error of plantation volume was less than 2%. Diameter structure modeling and the calculation of plantation volume by the diameter structure model, owing to the regularity of the diameter distribution, enable a better analysis of the production level and assortment structure, and can be used in the construction of yield and increment tables.
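
    In outline, the method fits a Weibull distribution to the breast-height diameters and then integrates a volume line (single-tree volume as a function of diameter) over that distribution, multiplying by the stem number to obtain the plantation volume. The sketch below does this on synthetic data; the Weibull parameters, volume-line coefficients and stand density are illustrative assumptions, not the clone-specific regressions of the study.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import quad

# Synthetic breast-height diameters (cm) for a plantation of 400 stems/ha
diameters = weibull_min.rvs(c=3.2, scale=28.0, size=400, random_state=0)

# Fit a two-parameter Weibull (location fixed at zero) to the measured diameters
shape, loc, scale = weibull_min.fit(diameters, floc=0.0)

def tree_volume(d):
    """Volume line: single-tree volume (m^3) vs diameter d (cm); coefficients assumed."""
    return 2.5e-4 * d**2.35

# Plantation volume per hectare = stem number x E[v(D)] under the fitted distribution
n_stems = diameters.size
expected_v, _ = quad(lambda d: tree_volume(d) * weibull_min.pdf(d, shape, loc=loc, scale=scale),
                     0.0, 150.0)
print(f"fitted Weibull: shape = {shape:.2f}, scale = {scale:.1f} cm")
print(f"plantation volume ~ {n_stems * expected_v:.0f} m^3/ha")
```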

  20. Calculation method of water injection forward modeling and inversion process in oilfield water injection network

    Science.gov (United States)

    Liu, Long; Liu, Wei

    2018-04-01

    A forward modeling and inversion algorithm is adopted in order to determine the water injection plan in the oilfield water injection network. The main idea of the algorithm is shown as follows: firstly, the oilfield water injection network is inversely calculated. The pumping station demand flow is calculated. Then, forward modeling calculation is carried out for judging whether all water injection wells meet the requirements of injection allocation or not. If all water injection wells meet the requirements of injection allocation, calculation is stopped, otherwise the demand injection allocation flow rate of certain step size is reduced aiming at water injection wells which do not meet requirements, and next iterative operation is started. It is not necessary to list the algorithm into water injection network system algorithm, which can be realized easily. Iterative method is used, which is suitable for computer programming. Experimental result shows that the algorithm is fast and accurate.

  1. Modeling of water lighting process and calculation of the reactor-clarifier to improve energy efficiency

    Science.gov (United States)

    Skolubovich, Yuriy; Skolubovich, Aleksandr; Voitov, Evgeniy; Soppa, Mikhail; Chirkunov, Yuriy

    2017-10-01

    The article considers current questions of the technological modeling and calculation of a new facility for the treatment of natural waters, the clarifier reactor, for its optimal operating mode; the facility was developed at the Novosibirsk State University of Architecture and Civil Engineering (SibSTRIN). A calculation technique based on well-known hydraulic relationships is presented, and a calculation example based on experimental data is considered. The maximum possible upward flow velocity of the clarified water was determined for a 24-hour clarification cycle. The fractional composition of the contact mass was determined for minimal expansion of the contact mass layer, which ensures the elimination of stagnant zones. The duration of the clarification cycle was refined from the technological modeling parameters by recalculating the maximum possible upward flow velocity of the clarified water, and the thickness of the contact mass layer was determined. Clarifier reactors for any other clarification conditions can be calculated in the same way.

  2. Efficient matrix-vector products for large-scale nuclear Shell-Model calculations

    OpenAIRE

    Toivanen, J.

    2006-01-01

    A method to accelerate the matrix-vector products of j-scheme nuclear Shell-Model Configuration Interaction (SMCI) calculations is presented. The method takes advantage of the matrix product form of the j-scheme proton-neutron Hamiltonian matrix. It is shown that the method can speed up unrestricted large-scale pf-shell calculations by up to two orders of magnitude compared to previously existing related j-scheme method. The new method allows unrestricted SMCI calculations up to j-scheme dime...

  3. SITE-94. Adaptation of mechanistic sorption models for performance assessment calculations

    International Nuclear Information System (INIS)

    Arthur, R.C.

    1996-10-01

    Sorption is considered in most predictive models of radionuclide transport in geologic systems. Most models simulate the effects of sorption in terms of empirical parameters, which however can be criticized because the data are only strictly valid under the experimental conditions at which they were measured. An alternative is to adopt a more mechanistic modeling framework based on recent advances in understanding the electrical properties of oxide mineral-water interfaces. It has recently been proposed that these 'surface-complexation' models may be directly applicable to natural systems. A possible approach for adapting mechanistic sorption models for use in performance assessments, using this 'surface-film' concept, is described in this report. Surface-acidity parameters in the Generalized Two-Layer surface complexation model are combined with surface-complexation constants for Np(V) sorption on hydrous ferric oxide to derive an analytical model enabling direct calculation of corresponding intrinsic distribution coefficients as a function of pH and the Ca2+, Cl-, and HCO3- concentrations. The surface-film concept is then used to calculate whole-rock distribution coefficients for Np(V) sorption by altered granitic rocks coexisting with a hypothetical, oxidized Aespoe groundwater. The calculated results suggest that the distribution coefficients for Np adsorption on these rocks could range from 10 to 100 ml/g. Independent estimates of Kd for Np sorption in similar systems, based on an extensive review of experimental data, are consistent, though slightly conservative, with respect to the calculated values. 31 refs

  4. Assessment model validity document. NAMMU: A program for calculating groundwater flow and transport through porous media

    International Nuclear Information System (INIS)

    Cliffe, K.A.; Morris, S.T.; Porter, J.D.

    1998-05-01

    NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. In order to achieve this the following topics are discussed. The basic premises of the assessment approach and the purpose of and nature of the calculations that can be undertaken using NAMMU are outlined. The concepts of the validation of models and the considerations that can lead to increased confidence in models are described. The physical processes that can be modelled using NAMMU and the mathematical models and numerical techniques that are used to represent them are discussed in some detail. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised

  5. Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma

    Directory of Open Access Journals (Sweden)

    DONG Jing

    2015-06-01

    Full Text Available Hepatitis B virus (HBV-related hepatocellular carcinoma (HCC is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC by studying its natural history, which, to some extent, predicts the possibility of HCC development. Generally, risk factors of each model involve age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significance of currently used predictive models for HBV-associated HCC and assesses the advantages and limits of risk calculators. Updated REACH-B model and LSM-HCC model show better negative predictive values and have better performance in predicting the outcomes of patients with chronic hepatitis B (CHB. These models can be applied to stratified screening of HCC and, meanwhile, become an assessment tool for the management of CHB patients.

  6. A revised oceanographic model to calculate the limiting capacity of the ocean to accept radioactive waste

    International Nuclear Information System (INIS)

    Webb, G.A.M.; Grimwood, P.D.

    1976-12-01

    This report describes an oceanographic model which has been developed for use in calculating the capacity of the oceans to accept radioactive wastes. One component is a relatively short-term diffusion model which is based on that described in an earlier report (Webb et al., NRPB-R14 (1973)), but which has been generalised to some extent. Another component is a compartment model which is used to calculate long-term widespread water concentrations. This addition overcomes some of the shortcomings of the earlier diffusion model. Incorporation of radioactivity into deep ocean sediments is included in this long-term model as a removal mechanism. The combined model is used to provide a conservative (safe) estimate of the maximum concentrations of radioactivity in water as a function of time after the start of a continuous disposal operation. These results can then be used to assess the limiting capacity of an ocean to accept radioactive waste. (author)

  7. Comparison of Steady-State SVC Models in Load Flow Calculations

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Chen, Zhe; Bak-Jensen, Birgitte

    2008-01-01

    This paper compares, in a load flow calculation, three existing steady-state models of the static var compensator (SVC), i.e. the generator model, the total susceptance model and the firing angle model. The comparison is made in terms of the voltage at the SVC-regulated bus, the equivalent SVC susceptance at the fundamental frequency and the load flow convergence rate, both when the SVC is operating within and on its limits. The latter two models give inaccurate results for the equivalent SVC susceptance as compared to the generator model due to the assumption of constant voltage when the SVC... of the calculated SVC susceptance while retaining an acceptable load flow convergence rate....

  8. Calculation of atmospheric neutrino flux using the interaction model calibrated with atmospheric muon data

    International Nuclear Information System (INIS)

    Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.; Sanuki, T.

    2007-01-01

    Using the 'modified DPMJET-III' model explained in the previous paper [T. Sanuki et al., preceding Article, Phys. Rev. D 75, 043005 (2007).], we calculate the atmospheric neutrino flux. The calculation scheme is almost the same as HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).], but the usage of the 'virtual detector' is improved to reduce the error due to it. Then we study the uncertainty of the calculated atmospheric neutrino flux by summarizing the uncertainties of the individual components of the simulation. The uncertainty of K-production in the interaction model is estimated using other interaction models, FLUKA'97 and FRITIOF 7.02, modified so that they also reproduce the atmospheric muon flux data correctly. The uncertainties of the flux ratio and the zenith angle dependence of the atmospheric neutrino flux are also studied.

  9. New model for mines and transportation tunnels external dose calculation using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Allam, Kh. A.

    2017-01-01

    In this work, a new methodology is developed, based on Monte Carlo simulation, for calculating the external dose in tunnels and mines. The model evaluates the external dose in a tunnel of cylindrical shape and finite thickness, with an entrance and with or without an exit. A photon transport model was applied for the exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed using the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel, building material density and composition was studied. The new model is more flexible for calculating the real external dose in any cylindrical tunnel structure. (authors)

  10. A model for calculating the quantum potential for time-varying multi-slit systems

    CERN Document Server

    Bracken, P

    2003-01-01

    A model is proposed and applied to the single and double slit experiments. The model is designed to take into account a change in the experimental setup. This includes opening and closing the slits in some way, or by introducing some object which can be thought of as having a perturbing effect on the space-time background. The single and double slits could be closed simultaneously or one after the other in such a way as to transform from one arrangement to the other. The model consists in using modified free particle propagators in such a way that the required integrals for calculating the overall wave function can be calculated. It is supposed that these constants reflect the ambient structure as the experimental situation is modified, and might be calculable with regard to a more fundamental theory.

  11. AMORPHOUS SILICON ELECTRONIC STRUCTURE MODELING AND BASIC ELECTRO-PHYSICAL PARAMETERS CALCULATION

    Directory of Open Access Journals (Sweden)

    B. A. Golodenko

    2014-01-01

    Full Text Available Summary. Amorphous semiconductors have unique processing characteristics and are promising materials for electronic engineering. However, reliable information about their atomic structure, which is essential for calculating their electronic states and electro-physical properties, is lacking. The author's method addresses this problem: it allows the Cartesian atomic coordinates of a model cluster of amorphous silicon to be calculated, the spectrum and density of its electronic states to be determined, and the basic electro-physical properties of the model cluster to be evaluated. In particular, numerical values of the energy gap, the Fermi energy, and the electron concentrations in the valence and conduction bands were obtained for the model cluster. The results provide a realistic means of purposefully controlling the type and concentration of charge carriers in amorphous semiconductors, and also relate the atomic structure to other physical properties of amorphous substances, for example heat capacity, magnetic susceptibility and other thermodynamic quantities.

  12. Development of a model for the primary system CAREM reactor's stationary thermohydraulic calculation

    International Nuclear Information System (INIS)

    Gaspar, C.; Abbate, P.

    1990-01-01

    The ESCAREM program, oriented to the stationary thermohydraulic calculation of CAREM reactors, is presented. Since CAREM departs from the models used for BWR (Boiling Water Reactor) and PWR (Pressurized Water Reactor) designs, it was decided to develop a suitable model which allows calculating: a) whether the steam generator design is adequate to transfer the required power; b) the circulation flow that occurs in the primary system; c) the temperature at the entrance (cold branch); and d) the contribution of each component to the pressure drop in the circulation loop. Results were verified against manual calculations and alternative numerical models. An experimental validation at the Thermohydraulic Tests Laboratory is suggested. A series of parametric analyses of the CAREM 25 reactor is presented, covering operating conditions at different power levels as well as the influence of different design aspects. (Author) [es
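    The circulation flow of item b above is driven by natural circulation of the primary coolant, so a stationary model of this kind ultimately balances the buoyancy driving head of the loop against its friction and form losses. The relation below is only a generic sketch of that closure, not the actual ESCAREM equations (which are not given in the record):

        $ g\,(\rho_{\rm cold}-\rho_{\rm hot})\,H_{\rm th} \;=\; \sum_i \Big( f_i\,\frac{L_i}{D_i} + K_i \Big)\,\frac{\dot m^{2}}{2\,\rho_i A_i^{2}} $

    where H_th is the elevation difference between the thermal centres of the core and the steam generator. Solving this nonlinear balance for the mass flow rate gives the circulation flow, and the individual terms on the right-hand side give the contribution of each component to the pressure drop.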

  13. Tabulation of Mie scattering calculation results for microwave radiative transfer modeling

    Science.gov (United States)

    Yeh, Hwa-Young M.; Prasad, N.

    1988-01-01

    In microwave radiative transfer model simulations, the Mie calculations usually consume the majority of the computer time necessary for the calculations (70 to 86 percent for frequencies ranging from 6.6 to 183 GHz). For a large array of atmospheric profiles, the repeated calculations of the Mie codes make the radiative transfer computations not only expensive, but sometimes impossible. It is desirable, therefore, to develop a set of Mie tables to replace the Mie codes for the designated ranges of temperature and frequency in the microwave radiative transfer calculation. Results of using the Mie tables in the transfer calculations show that the total CPU time (IBM 3081) used for the modeling simulation is reduced by a factor of 7 to 16, depending on the frequency. The tables are tested by computing the upwelling radiance of 144 atmospheric profiles generated by a 3-D cloud model (Tao, 1986). Results are compared with those using Mie quantities computed from the Mie codes. The bias and root-mean-square deviation (RMSD) of the model results using the Mie tables, in general, are less than 1 K except for 37 and 90 GHz. Overall, neither the bias nor RMSD is worse than 1.7 K for any frequency and any viewing angle.
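    The speed-up reported above comes from replacing repeated Mie-code calls with a table lookup. The Python fragment below is only an illustrative sketch of that idea; the function names, grids and the stand-in Mie routine are hypothetical, not those of the original implementation:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        freqs_ghz = np.array([6.6, 10.7, 18.0, 37.0, 90.0, 183.0])  # channel frequencies
        temps_k = np.arange(230.0, 311.0, 5.0)                      # tabulated temperatures

        def mie_extinction(freq_ghz, temp_k):
            # Stand-in for the expensive Mie code; in practice this would sum the Mie
            # series over the drop-size distribution for each frequency/temperature.
            return 1.0e-3 * freq_ghz / temp_k

        # Build the lookup table once (the expensive step).
        table = np.array([[mie_extinction(f, t) for t in temps_k] for f in freqs_ghz])
        lookup = RegularGridInterpolator((freqs_ghz, temps_k), table)

        # Inside the transfer calculation, a cheap interpolation replaces the Mie call.
        k_ext = lookup((37.0, 264.5))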

  14. Model representations of kerogen structures: An insight from density functional theory calculations and spectroscopic measurements.

    Science.gov (United States)

    Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M

    2017-08-01

    Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.

  15. Application of the mathematical modelling and human phantoms for calculation of the organ doses

    International Nuclear Information System (INIS)

    Kluson, J.; Cechak, T.

    2005-01-01

    The increasing power of computer hardware and new versions of software for radiation transport simulation and for modelling complex experimental setups and geometrical arrangements make it possible to dramatically improve the calculation of organ or target-volume doses (dose distributions) in a wide field of medical physics and radiation protection applications. Larger computer memory and new software features allow not only analytical (mathematical) phantoms to be used, but also voxel models of humans or phantoms with voxels fine enough (e.g. 1 x 1 x 1 mm) to represent all required details; CT data can be used to describe the geometry of such voxel models. Advanced scoring methods are available in the new software versions. This contribution gives an overview of these new possibilities in modelling and dose calculation, discusses the simulation/approximation of dosimetric quantities (especially dose) and the interpretation of the calculated data. Some examples of application are shown, compared and discussed. Present computational tools enable organ or target-volume doses to be calculated with a new quality using large voxel models/phantoms (including CT-based patient-specific models) that approximate the human body with high precision, and they are therefore of growing importance and use in the fields of medical and radiological physics, radiation protection, etc. (authors)

  16. Calculation of the band structure of 2d conducting polymers using the network model

    International Nuclear Information System (INIS)

    Sabra, M. K.; Suman, H.

    2007-01-01

    The network model has been used to calculate the band structure, the gap energy and the Fermi level of conducting polymers in two dimensions. For this purpose, a geometrical classification of the possible polymer chain configurations in two dimensions has been introduced, leading to a classification of the unit cells based on the number of bonds in them. The model has been applied to graphite in 2D, represented by a three-bond unit cell, and, as a new case, to anti-parallel polyacetylene (PA) chains in two dimensions, represented by a unit cell with four bonds. The results are in good agreement with first-principles calculations. (author)

  17. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    DEFF Research Database (Denmark)

    Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper

    2010-01-01

    scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program...... CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, Torus...

  18. A calculation of the ZH → γ H decay in the Littlest Higgs Model

    International Nuclear Information System (INIS)

    Aranda, J I; Ramirez-Zavaleta, F; Tututi, E S; Cortés-Maldonado, I

    2016-01-01

    New heavy neutral gauge bosons are predicted in many extensions of the Standard Model; these new bosons are associated with additional gauge symmetries. We present a preliminary calculation of the branching ratio for the decay of heavy neutral gauge bosons (Z_H) into γ H in the most popular version of the Little Higgs models. The calculation involves the main contributions at the one-loop level, induced by fermions, scalars and gauge bosons. Preliminary results show a very suppressed branching ratio, of the order of 10^-6. (paper)

  19. Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow

    Science.gov (United States)

    Kemerink, G. J.; Pleiter, F.

    1986-08-01

    The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.

  20. Numerical calculation of flashing from long pipes using a two-field model

    International Nuclear Information System (INIS)

    Rivard, W.C.; Torrey, M.D.

    1976-05-01

    A two-field model for two-phase flows, in which the vapor and liquid phases have different densities, velocities, and temperatures, has been used to calculate the flashing of water from long pipes. The IMF (Implicit Multifield) technique is used to numerically solve the transient equations that govern the dynamics of each phase. The flow physics is described with finite rate phase transitions, interfacial friction, heat transfer, pipe wall friction, and appropriate state equations. The results of the calculations are compared with measured histories of pressure, temperature, and void fraction. A parameter study indicates the relative sensitivity of the results to the various physical models that are used

  1. Dayside ionosphere of Titan: Impact on calculated plasma densities due to variations in the model parameters

    Science.gov (United States)

    Mukundan, Vrinda; Bhardwaj, Anil

    2018-01-01

    A one-dimensional photochemical model for the dayside ionosphere of Titan has been developed for calculating the density profiles of ions and electrons under steady-state photochemical equilibrium conditions. We concentrated on the T40 flyby of the Cassini orbiter and used the in-situ measurements from instruments onboard Cassini as input to the model. An energy deposition model is employed for calculating the attenuated photon flux and photoelectron flux at different altitudes in Titan's ionosphere. We used the Analytical Yield Spectrum approach for calculating the photoelectron fluxes. Volume production rates of major primary ions, like N2+, N+, CH4+, CH3+, etc., due to photon and photoelectron impact are calculated and used as input to the model. The modeled profiles are compared with the Cassini Ion Neutral Mass Spectrometer (INMS) and Langmuir Probe (LP) measurements. The calculated electron density is higher than the observation by a factor of 2 to 3 around the peak. We studied the impact of different model parameters, viz. photoelectron flux, ion production rates, electron temperature, dissociative recombination rate coefficients, neutral densities of minor species, and solar flux on the calculated electron density to understand the possible reasons for this discrepancy. Recent studies have shown that there is an overestimation in the modeled photoelectron flux and N2+ ion production rates which may contribute towards this disagreement. But decreasing the photoelectron flux (by a factor of 3) and the N2+ ion production rate (by a factor of 2) decreases the electron density only by 10 to 20%. Reduction in the measured electron temperature by a factor of 5 provides a good agreement between the modeled and observed electron density. The change in HCN and NH3 densities affects the calculated densities of the major ions (HCNH+, C2H5+, and CH5+); however, the overall impact on electron density is not appreciable (< 20%). Even though increasing the dissociative
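    The weak sensitivity to the production terms and the stronger sensitivity to the electron temperature reported above follow from the photochemical-equilibrium balance assumed in such models; schematically (the paper's actual rate coefficients are not given in the record):

        $ P_{\rm ion} \;=\; \alpha(T_e)\,n_e^{2} \quad\Rightarrow\quad n_e \;=\; \sqrt{P_{\rm ion}/\alpha(T_e)}, \qquad \alpha(T_e) \propto T_e^{-\beta},\ \ \beta \approx 0.5\text{--}0.7 $

    so halving the ion production lowers n_e by only about 30%, whereas a factor-of-5 reduction in T_e increases the effective recombination coefficient and lowers n_e by roughly a factor of 1.5 to 2.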

  2. Quantum-mechanical calculation of H on Ni(001) using a model potential based on first-principles calculations

    DEFF Research Database (Denmark)

    Mattsson, T.R.; Wahnström, G.; Bengtsson, L.

    1997-01-01

    First-principles density-functional calculations of hydrogen adsorption on the Ni (001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance...

  3. A steady-state target calculation method based on "point" model for integrating processes.

    Science.gov (United States)

    Pang, Qiang; Zou, Tao; Zhang, Yanyan; Cong, Qiumei

    2015-05-01

    Aiming to eliminate the influence of model uncertainty on the steady-state target calculation for integrating processes, this paper presents an optimization method based on a "point" model, together with a method for determining whether a feasible steady-state target exists. The optimization method solves the steady-state optimization problem of integrating processes within a two-stage framework: it builds a simple "point" model for the steady-state prediction and compensates the error between the "point" model and the real process in each sampling interval. Simulation results illustrate that the outputs of the integrating variables can be kept within the constraints, and the calculation errors between actual outputs and optimal set-points are small, which indicates that the steady-state prediction model can predict the future outputs of the integrating variables accurately. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Calculations of thermophysical properties of cubic carbides and nitrides using the Debye-Grueneisen model

    Energy Technology Data Exchange (ETDEWEB)

    Lu Xiaogang [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden)]. E-mail: xiaogang@thermocalc.se; Selleby, Malin [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden); Sundman, Bo [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden)

    2007-02-15

    The thermal expansivities and heat capacities of MX (M = Ti, Zr, Hf, V, Nb, Ta; X = C, N) carbides and nitrides with NaCl structure were calculated using the Debye-Grueneisen model combined with ab initio calculations. Two different approximations for the Grueneisen parameter γ were used in the Debye-Grueneisen model, i.e. the expressions proposed by Slater and by Dugdale and MacDonald. The thermal electronic contribution was evaluated from ab initio calculations of the electronic density of states. The calculated results were compared with CALPHAD assessments and experimental data. It was found that the calculations using the Dugdale-MacDonald γ can account for most of the experimental data. By fitting experimental heat capacity and thermal expansivity data below the Debye temperatures, an estimation of Poisson's ratio was obtained and Young's and shear moduli were evaluated. In order to reach a reasonable agreement with experimental data, it was necessary to use the logarithmic averaged mass of the constituent atoms. The agreements between the calculated and the experimental values for the bulk and Young's moduli are generally better than the agreement for shear modulus.

  5. Calculations of thermophysical properties of cubic carbides and nitrides using the Debye-Grueneisen model

    International Nuclear Information System (INIS)

    Lu Xiaogang; Selleby, Malin; Sundman, Bo

    2007-01-01

    The thermal expansivities and heat capacities of MX (M = Ti, Zr, Hf, V, Nb, Ta; X = C, N) carbides and nitrides with NaCl structure were calculated using the Debye-Grueneisen model combined with ab initio calculations. Two different approximations for the Grueneisen parameter γ were used in the Debye-Grueneisen model, i.e. the expressions proposed by Slater and by Dugdale and MacDonald. The thermal electronic contribution was evaluated from ab initio calculations of the electronic density of states. The calculated results were compared with CALPHAD assessments and experimental data. It was found that the calculations using the Dugdale-MacDonald γ can account for most of the experimental data. By fitting experimental heat capacity and thermal expansivity data below the Debye temperatures, an estimation of Poisson's ratio was obtained and Young's and shear moduli were evaluated. In order to reach a reasonable agreement with experimental data, it was necessary to use the logarithmic averaged mass of the constituent atoms. The agreements between the calculated and the experimental values for the bulk and Young's moduli are generally better than the agreement for shear modulus
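    For reference, the two approximations for the Grueneisen parameter mentioned in the two records above are commonly written in terms of the static pressure-volume curve P(V) as follows (one standard form; sign and normalisation conventions differ between papers):

        $ \gamma_{\rm Slater} \;=\; -\frac{2}{3} \;-\; \frac{V}{2}\,\frac{\partial^{2}P/\partial V^{2}}{\partial P/\partial V}, \qquad \gamma_{\rm DM} \;=\; -\frac{1}{2} \;-\; \frac{V}{2}\,\frac{\partial^{2}(PV^{2/3})/\partial V^{2}}{\partial (PV^{2/3})/\partial V} $

    with γ relating volume changes to the Debye temperature through γ = −∂ ln Θ_D / ∂ ln V.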

  6. Random geometry model in criticality calculations of solutions containing Raschig rings

    International Nuclear Information System (INIS)

    Teng, S.P.; Lindstrom, D.G.

    1979-01-01

    The criticality constants of fissile solutions containing borated Raschig rings are evaluated using the Monte Carlo code KENO IV with various geometry models. In addition to those used by other investigators, a new geometry model, the random geometry model, is presented to simulate the system of randomly oriented Raschig rings in solution. A technique to obtain the material thickness distribution functions of solution and rings for use in the random geometry model is also presented. Comparison between the experimental data and the calculated results using Monte Carlo method with various geometry models indicates that the random geometry model is a reasonable alternative to models previously used in describing the system of Raschig-ring-filled solution. The random geometry model also provides a solution to the problem of describing an array containing Raschig-ring-filled tanks that is not available to techniques using other models

  7. Propagation of Uncertainty in System Parameters of a LWR Model by Sampling MCNPX Calculations - Burnup Analysis

    Science.gov (United States)

    Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.

    2014-06-01

    For all the physical components that comprise a nuclear system there is an associated uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. The results demonstrate the capability of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
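    The sample-size optimization mentioned above uses the first-order Wilks criterion. A minimal Python check of the smallest number of code runs for a two-sided 95%/95% tolerance interval (a sketch, not the authors' script) is:

        def wilks_two_sided(n, coverage=0.95):
            # Probability that the sample minimum and maximum enclose at least
            # `coverage` of the output distribution (first-order, two-sided Wilks).
            return 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1)

        n = 2
        while wilks_two_sided(n) < 0.95:
            n += 1
        print(n)  # -> 93 runs for the two-sided 95%/95% case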

  8. Hydroelastic model of PWR reactor internals SAFRAN 1 - Validation of a vibration calculation method

    International Nuclear Information System (INIS)

    Epstein, A.; Gibert, R.J.; Jeanpierre, F.; Livolant, M.

    1978-01-01

    The SAFRAN 1 test loop is a hydroelastic similitude of a 1/8-scale model of a 3-loop PWR. Vibrations of the main internals (thermal shield and core barrel) and pressure fluctuations in the thin water sections between vessel and internals, and in the inlet and outlet pipes, have been measured. The calculation method consists of: an evaluation of the main vibration and acoustic sources due to the flow (unsteady jet impingement on the core barrel, turbulent flow in a thin water section); a calculation of the internal modal parameters taking into account the inertial effects of the fluid (the computer codes AQUAMODE and TRISTANA have been used); and a calculation of the acoustic response of the circuit (the computer code VIBRAPHONE has been used). The good agreement between the calculation and the experimental results allows this method to be used with greater confidence for predicting the vibration levels of full-scale PWR internals

  9. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1998-03-01

    Attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization proposed by the authors is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis, so a projection operator is applied to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies in the resulting basis are evaluated and the basis states selectively adopted. The symmetry is discussed, and a method was devised for decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces. The calculation process is illustrated with the example of 50Mn nuclei. The level structure of 48Cr, for which the exact energies are known, can be calculated with an accuracy of the absolute energy eigenvalues within 200 keV. 56Ni is the self-conjugate nucleus with Z=N=28; results of shell model calculations of the structure of 56Ni using the interactions of nuclear models are reported. (K.I.)

  10. Preliminary integrated calculation of radionuclide cation and anion transport at Yucca Mountain using a geochemical model

    International Nuclear Information System (INIS)

    Birdsell, K.H.; Campbell, K.; Eggert, K.G.; Travis, B.J.

    1989-01-01

    This paper presents preliminary transport calculations for radionuclide movement at Yucca Mountain using preliminary data for mineral distributions, retardation parameter distributions, and hypothetical recharge scenarios. These calculations are not performance assessments, but are used to study the effectiveness of the geochemical barriers at the site at a mechanistic level. The preliminary calculations presented have many shortcomings and should be viewed only as a demonstration of the modeling methodology. The simulations were run with TRACRN, a finite-difference porous flow and radionuclide transport code developed for the Yucca Mountain Project. Approximately 30,000 finite-difference nodes are used to represent the unsaturated and saturated zones underlying the repository in three dimensions. Sorption ratios for the radionuclides modeled are assumed to be functions of the mineralogic assemblages of the underlying rock. These transport calculations consider a representative radionuclide cation, 135 Cs, and anion, 99 Tc. The effects on transport of many of the processes thought to be active at Yucca Mountain may be examined using this approach. The model provides a method for examining the integration of flow scenarios, transport, and retardation processes as currently understood for the site. It will also form the basis for estimates of the sensitivity of transport calculations to retardation processes. 11 refs., 17 figs., 1 tab
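    For orientation, sorption ratios (Kd) typically enter transport calculations of this kind through a retardation factor; the standard porous-medium relation is shown below (whether TRACRN uses exactly this closure is not stated in the record):

        $ R_f \;=\; 1 + \frac{\rho_b\,K_d}{\theta}, \qquad v_{\rm nuclide} \;=\; \frac{v_{\rm water}}{R_f} $

    where ρ_b is the bulk rock density and θ the volumetric water content; an anion such as 99 Tc with Kd near zero travels essentially with the water, while a sorbing cation such as 135 Cs is strongly retarded.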

  11. Program realization of mathematical model of kinetostatical calculation of flat lever mechanisms

    Directory of Open Access Journals (Sweden)

    M. A. Vasechkin

    2016-01-01

    Full Text Available Global computerization has given analytical methods a dominant position in the study of mechanisms. As a result, kinetostatic analysis of mechanisms using software packages is an important part of the scientific and practical activities of engineers and designers, so the software implementation of mathematical models for the kinetostatic calculation of mechanisms is of practical interest. The mathematical model was obtained in [1]. A computer procedure was developed in the Turbo Pascal language that calculates the forces in the kinematic pairs of Assur groups (AG) and the balancing force at the primary link. Before the corresponding computational procedures can be used, all external forces and moments acting on the AG must be known, and the inertia forces and moments of inertia forces must be determined. The sequence of calculations and constructions of the mechanism positions can be summarized as follows. A cycle is organized in which the position of the initial link of the mechanism is calculated; the positions of the remaining links are calculated by calling the relevant procedures of the DIADA module for the AG [2,3]; using the graphics mode of the computer, the position of the mechanism is displayed; the inertia forces and moments of inertia forces are computed; and, by calling the corresponding procedures of the module, all forces in the kinematic pairs and the balancing force at the primary link are calculated. In each kinematic pair the forces and their directions are drawn with the help of simple graphical procedures, and their magnitudes and directions are displayed in a special text-mode window. This work contains listings of the test program MyTest, an example of using the computing capabilities of the developed module. As a check on the calculation procedures of the module, the program reproduces an example of calculating the balancing force according to Zhukovsky's method (the Zhukovsky lever).

  12. Monitoring driver fatigue using a single-channel electroencephalographic device: A validation study by gaze-based, driving performance, and subjective data.

    Science.gov (United States)

    Morales, José M; Díaz-Piedra, Carolina; Rieiro, Héctor; Roca-González, Joaquín; Romero, Samuel; Catena, Andrés; Fuentes, Luis J; Di Stasi, Leandro L

    2017-12-01

    Driver fatigue can impair performance as much as alcohol does. It is the most important road safety concern, causing thousands of accidents and fatalities every year. Thanks to technological developments, wearable, single-channel EEG devices are now getting considerable attention as fatigue monitors, as they could help drivers to assess their own levels of fatigue and, therefore, prevent the deterioration of performance. However, the few studies that have used single-channel EEG devices to investigate the physiological effects of driver fatigue have had inconsistent results, and the question of whether we can monitor driver fatigue reliably with these EEG devices remains open. Here, we assessed the validity of a single-channel EEG device (TGAM-based chip) to monitor changes in mental state (from alertness to fatigue). Fifteen drivers performed a 2-h simulated driving task while we recorded, simultaneously, their prefrontal brain activity and saccadic velocity. We used saccadic velocity as the reference index of fatigue. We also collected subjective ratings of alertness and fatigue, as well as driving performance. We found that the power spectra of the delta EEG band showed an inverted U-shaped quadratic trend (EEG power spectra increased for the first hour and half, and decreased during the last thirty minutes), while the power spectra of the beta band linearly increased as the driving session progressed. Coherently, saccadic velocity linearly decreased and speeding time increased, suggesting a clear effect of fatigue. Subjective data corroborated these conclusions. Overall, our results suggest that the TGAM-based chip EEG device is able to detect changes in mental state while performing a complex and dynamic everyday task as driving. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approx. 10^29. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies.) A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162 Dy and found it to agree well with experiments

  14. Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.

    Science.gov (United States)

    Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong

    2012-10-17

    We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic-orbital (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved to calculate the spin splitting. The Hamiltonian of 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion in k at the Γ point. The spin-splitting energies in bulk zincblende semiconductors, GaAs and InSb, are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.

  15. Calculational model for condensation of water vapor during an underground nuclear detonation

    International Nuclear Information System (INIS)

    Knox, R.J.

    1975-01-01

    An empirically derived mathematical model was developed to calculate the pressure and temperature history during condensation of water vapor in an underground-nuclear-explosion cavity. The condensation process is non-isothermal. Use has been made of the Clapeyron-Clausius equation as a basis for development of the model. Analytic fits to the vapor pressure and the latent heat of vaporization for saturated-water vapor, together with an estimated value for the heat-transfer coefficient, have been used to describe the phenomena. The calculated pressure-history during condensation has been determined to be exponential, with a time constant somewhat less than that observed during the cooling of the superheated steam from the explosion. The behavior of the calculated condensation-pressure compares well with the observed-pressure record (until just prior to cavity collapse) for a particular nuclear-detonation event for which data is available
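    The Clapeyron-Clausius relation on which the record says the model is based reads, with the usual ideal-gas approximation for the vapor and neglect of the liquid specific volume,

        $ \frac{dP_{\rm sat}}{dT} \;=\; \frac{L(T)}{T\,\Delta v} \;\approx\; \frac{L(T)\,P_{\rm sat}}{R_v\,T^{2}} $

    Combined with the analytic fits for P_sat(T) and L(T) and the estimated heat-transfer coefficient, this is what produces the roughly exponential pressure decay during condensation described above.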

  16. Influence of delayed neutron parameter calculation accuracy on results of modeled WWER scram experiments

    International Nuclear Information System (INIS)

    Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.

    2007-01-01

    Using modeled WWER scram rod drop experiments, performed at the Rostov NPP, as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95 and WWER program package. Parameters of delayed neutrons were acquired from ENDF/B-VI and BNAB-78 validated data files. It was demonstrated that using delayed neutron fraction data from different databases in reactivity meters led to significantly different reactivity results. Based on the results of numerically modeled experiments, delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations. (Authors)

  17. Investigation of the influence of the open cell foam models geometry on hydrodynamic calculation

    Science.gov (United States)

    Soloveva, O. V.; Solovev, S. A.; Khusainov, R. R.; Popkova, O. S.; Panenko, D. O.

    2018-01-01

    A geometrical model of the open cell foam was created as an ordered set of intersecting spheres. The proposed model closely describes a real porous cellular structure. The hydrodynamics of the flow was calculated on the basis of the simple model in the ANSYS Fluent software package. The pressure drop was determined and compared with the experimental data of other authors. The studies showed that a porous structure with smoothed faces provides the smallest pressure drop at the same porosity of the packing. Analysis of the calculated data demonstrated that approximating an elementary porous cell substantially distorts the flow field, which is undesirable in detailed modeling of the open cell foam.

  18. Calculation model for 16N transit time in the secondary side of steam generators

    International Nuclear Information System (INIS)

    Liu Songyu; Xu Jijun; Xu Ming

    1998-01-01

    The 16 N transit time is essential for determining the leak rate of steam generator tube leaks with a 16 N monitoring system, which is a new technique. A model was developed for calculating the 16 N transit time in the secondary side of steam generators. According to the flow characteristics of the secondary-side fluid, the transit time is divided into four sectors from the tube sheet to the sensor on the steam line. The model assumes that 16 N moves with the vapor phase in the secondary side, so a model for the vapor velocity distribution in the tube bundle is presented in detail. The 16 N transit times calculated with this model are compared with those of EDF for a steam generator of the Qinshan NPP
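    Splitting the path into flow sectors means the total transit time is the sum of the sector residence times computed from the local vapor velocity; a generic form (the record does not give the model's correlations) is

        $ t_{\rm transit} \;=\; \sum_{i=1}^{4} t_i \;=\; \sum_{i=1}^{4} \int_{{\rm sector}\,i} \frac{dz}{u_{g,i}(z)} \;\approx\; \sum_{i=1}^{4} \frac{L_i}{\bar u_{g,i}} $

    where L_i is the path length of sector i and u_g,i the vapor velocity along it, from the tube sheet to the 16 N sensor on the steam line.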

  19. Calculations of Inflaton Decays and Reheating: with Applications to No-Scale Inflation Models

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A

    2015-01-01

    We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, $w$, during the epoch of inflaton decay, the reheating temperature, $T_{\rm reh}$, and the number of inflationary e-folds, $N_*$, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index $n_s$ and the tensor-to-scalar perturbation ratio $r$, converting them into constraints on $N_*$, the inflaton decay rate and other parameters of specific no-scale inflationary models.

  20. A new simulation model for calculating the internal exposure of some radionuclides

    Directory of Open Access Journals (Sweden)

    Mahrous Ayman

    2009-01-01

    Full Text Available A new model based on a series of mathematical functions for estimating excretion rates following the intake of nine different radionuclides is presented in this work. The radionuclides under investigation are: cobalt, iodine, cesium, strontium, ruthenium, radium, thorium, plutonium, and uranium. The committed effective dose has been calculated with our model in order to obtain the urinary and faecal excretion rates for each radionuclide. The model is further validated by comparison with the widely used Mondal software and a simulation program. The results obtained show good agreement between the Mondal package and the model we have constructed.

  1. A computer code for calculations in the algebraic collective model of the atomic nucleus

    OpenAIRE

    Welsh, T. A.; Rowe, D. J.

    2014-01-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functi...

  2. Calculation of spherical models of lead with a source of 14 MeV-neutrons

    International Nuclear Information System (INIS)

    Markovskij, D.V.; Borisov, A.A.

    1989-01-01

    Neutron transport calculations for spherical models of lead have been performed with the one-dimensional code BLANK, which implements the direct Monte Carlo method over the whole range of neutron energies, and the results are compared with experiment. 6 refs, 10 figs, 3 tabs

  3. Fast and accurate calculations for cumulative first-passage time distributions in Wiener diffusion models

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, M.; Gondan, Matthias

    2012-01-01

    We propose an improved method for calculating the cumulative first-passage time distribution in Wiener diffusion models with two absorbing barriers. This distribution function is frequently used to describe responses and error probabilities in choice reaction time tasks. The present work extends ...

  4. Black Hole Entropy Calculation in a Modified Thin Film Model Jingyi ...

    Indian Academy of Sciences (India)

    Abstract. The thin film model is modified to calculate the black hole entropy. The difference from the original method is that the Parikh–Wilczek tunnelling framework is introduced and the self-gravitation of the emission particles is taken into account. In terms of our improvement, if the entropy is still proportional to the area, ...

  5. Model-Independent Calculation of Radiative Neutron Capture on Lithium-7

    NARCIS (Netherlands)

    Rupak, Gautam; Higa, Renato

    2011-01-01

    The radiative neutron capture on lithium-7 is calculated model independently using a low-energy halo effective field theory. The cross section is expressed in terms of scattering parameters directly related to the S-matrix elements. It depends on the poorly known p-wave effective range parameter

  6. Scheme for calculation of multi-layer cloudiness and precipitation for climate models of intermediate complexity

    NARCIS (Netherlands)

    Eliseev, A. V.; Coumou, D.; Chernokulsky, A. V.; Petoukhov, V.; Petri, S.

    2013-01-01

    In this study we present a scheme for calculating the characteristics of multi-layer cloudiness and precipitation for Earth system models of intermediate complexity (EMICs). This scheme considers three-layer stratiform cloudiness and single-column convective clouds. It distinguishes between ice and

  7. SHARC, a model for calculating atmospheric infrared radiation under non-equilibrium conditions

    Science.gov (United States)

    Sundberg, R. L.; Duff, J. W.; Gruninger, J. H.; Bernstein, L. S.; Matthew, M. W.; Adler-Golden, S. M.; Robertson, D. C.; Sharma, R. D.; Brown, J. H.; Healey, R. J.

    A new computer model, SHARC, has been developed by the U.S. Air Force for calculating high-altitude atmospheric IR radiance and transmittance spectra with a resolution of better than 1 cm-1. Comprehensive coverage of the 2 to 40 μm (250 to 5,000 cm-1) wavelength region is provided for arbitrary lines of sight in the 50-300 km altitude regime. SHARC accounts for the deviation from local thermodynamic equilibrium (LTE) in state populations by explicitly modeling the detailed production, loss, and energy transfer processes among the contributing molecular vibrational states. The calculated vibrational populations are found to be similar to those obtained from other non-LTE codes. The radiation transport algorithm is based on a single-line equivalent width approximation along with a statistical correction for line overlap. This approach calculates LOS radiance values which are accurate to ±10% and is roughly two orders of magnitude faster than the traditional LBL methods which explicitly integrate over individual line shapes. In addition to quiescent atmospheric processes, this model calculates the auroral production and excitation of CO2, NO, and NO+ in localized regions of the atmosphere. Illustrative comparisons of SHARC predictions to other models and to data from the CIRRIS, SPIRE and FWI field experiments are presented.

  8. Recursive calculation of matrix elements for the generalized seniority shell model

    International Nuclear Information System (INIS)

    Luo, F.Q.; Caprio, M.A.

    2011-01-01

    A recursive calculational scheme is developed for matrix elements in the generalized seniority scheme for the nuclear shell model. Recurrence relations are derived which permit straightforward and efficient computation of matrix elements of one-body and two-body operators and basis state overlaps.

  9. Ab initio calculation of the sound velocity of dense hydrogen: implications for models of Jupiter

    NARCIS (Netherlands)

    Alavi, A.; Parrinello, M.; Frenkel, D.

    1995-01-01

    First-principles molecular dynamics simulations were used to calculate the sound velocity of dense hydrogen, and the results were compared with extrapolations of experimental data that currently conflict with either astrophysical models or data obtained from recent global oscillation measurements of

  10. Improved method for the cutting coefficients calculation in micromilling force modeling

    NARCIS (Netherlands)

    Li, P.; Oosterling, J.A.J.; Hoogstrate, A.M.; Langen, H.H.

    2008-01-01

    This paper discusses the influence of runout on the calculation of the coefficients of mechanistic force models in micromilling. A runout model is used to study the change of chip thickness, tool angles, and immersion period of two cutting edges of micro endmills due to runout. A new method to find

  11. A new timing model for calculating the intrinsic timing resolution of a scintillator detector

    International Nuclear Information System (INIS)

    Shao Yiping

    2007-01-01

    The coincidence timing resolution is a critical parameter which to a large extent determines the system performance of positron emission tomography (PET). This is particularly true for time-of-flight (TOF) PET that requires an excellent coincidence timing resolution (<<1 ns) in order to significantly improve the image quality. The intrinsic timing resolution is conventionally calculated with a single-exponential timing model that includes two parameters of a scintillator detector: scintillation decay time and total photoelectron yield from the photon-electron conversion. However, this calculation has led to significant errors when the coincidence timing resolution reaches 1 ns or less. In this paper, a bi-exponential timing model is derived and evaluated. The new timing model includes an additional parameter of a scintillator detector: scintillation rise time. The effect of rise time on the timing resolution has been investigated analytically, and the results reveal that the rise time can significantly change the timing resolution of fast scintillators that have short decay time constants. Compared with measured data, the calculations have shown that the new timing model significantly improves the accuracy in the calculation of timing resolutions
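    For context, the conventional single-exponential pulse and the bi-exponential shape that adds a rise time are usually written as below; whether the paper uses exactly these normalisations is not stated in the record:

        $ I_{\rm single}(t) \;\propto\; \frac{N_{pe}}{\tau_d}\,e^{-t/\tau_d}, \qquad I_{\rm bi}(t) \;\propto\; \frac{N_{pe}}{\tau_d-\tau_r}\left( e^{-t/\tau_d} - e^{-t/\tau_r} \right) $

    In the single-exponential picture the spread of the first-photoelectron arrival time scales roughly as τ_d/N_pe, which is why a non-zero rise time τ_r matters most for the fast, bright scintillators used in TOF PET.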

  12. On large-scale shell-model calculations in 4He

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, R.F.; Flynn, M.F. (Manchester Univ. (UK). Inst. of Science and Technology); Bosca, M.C.; Buendia, E.; Guardiola, R. (Granada Univ. (Spain). Dept. de Fisica Moderna)

    1990-03-01

    Most shell-model calculations of 4He require very large basis spaces for the energy spectrum to stabilise. Coupled cluster methods and an exact treatment of the centre-of-mass motion dramatically reduce the number of configurations. We thereby obtain almost exact results with small bases, but which include states of very high excitation energy. (author).

  13. Covariance matrices for nuclear cross sections derived from nuclear model calculations

    International Nuclear Information System (INIS)

    Smith, D. L.

    2005-01-01

    The growing need for covariance information to accompany the evaluated cross section data libraries utilized in contemporary nuclear applications is spurring the development of new methods to provide this information. Many of the current general purpose libraries of evaluated nuclear data used in applications are derived either almost entirely from nuclear model calculations or from nuclear model calculations benchmarked by available experimental data. Consequently, a consistent method for generating covariance information under these circumstances is required. This report discusses a new approach to producing covariance matrices for cross sections calculated using nuclear models. The present method involves establishing uncertainty information for the underlying parameters of nuclear models used in the calculations and then propagating these uncertainties through to the derived cross sections and related nuclear quantities by means of a Monte Carlo technique rather than the more conventional matrix error propagation approach used in some alternative methods. The formalism to be used in such analyses is discussed in this report along with various issues and caveats that need to be considered in order to proceed with a practical implementation of the methodology

  14. MODEL OF TAKEOFF AND LANDING OPERATIONS FOR CALCULATING AERODROME CAPACITY

    Directory of Open Access Journals (Sweden)

    I. Yu. Agafonova

    2014-01-01

    Full Text Available The takeoff and landing procedures for a flow of aircraft are discussed. An approach to the construction of a model for calculating aerodrome capacity is proposed. The model is decomposed and one of its elements, the approach mode, is investigated. The estimation of the time interval for this mode and the limitations on the minimum distances between aircraft in the stream are presented.

  15. Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma

    OpenAIRE

    DONG Jing

    2015-01-01

    Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC by studying its natural history, which, to some extent, predicts the possibility of HCC development. Generally, risk factors of each model involve age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significa...

  16. A three-dimensional model for calculating the micro disk laser resonant-modes

    International Nuclear Information System (INIS)

    Sabetjoo, H.; Bahrampor, A.; Farrahi-Moghaddam, R.

    2006-01-01

    In this article, a semi-analytical model for the theoretical analysis of micro disk lasers is presented. Using this model, the necessary conditions for the existence of lossless and low-loss modes of micro-resonators are obtained. The resonance frequencies of the resonant modes and the attenuation of the low-loss modes are calculated. The validity of the results is confirmed by comparison with the results of the finite-difference method.

  17. A mathematical model of the nine-month pregnant woman for calculating specific absorbed fractions

    International Nuclear Information System (INIS)

    Watson, E.E.; Stabin, M.G.

    1986-01-01

    Existing models that allow calculation of internal doses from radionuclide intakes by both men and women are based on a mathematical model of Reference Man. No attempt has been made to allow for the changing geometric relationships that occur during pregnancy which would affect the doses to the mother's organs and to the fetus. As pregnancy progresses, many of the mother's abdominal organs are repositioned, and their shapes may be somewhat changed. Estimation of specific absorbed fractions requires that existing mathematical models be modified to accommodate these changes. Specific absorbed fractions for Reference Woman at three, six, and nine months of pregnancy should be sufficient for estimating the doses to the pregnant woman and the fetus. This report describes a model for the pregnant woman at nine months. An enlarged uterus was incorporated into a model for Reference Woman. Several abdominal organs as well as the exterior of the trunk were modified to accommodate the new uterus. This model will allow calculation of specific absorbed fractions for the fetus from photon emitters in maternal organs. Specific absorbed fractions for the repositioned maternal organs from other organs can also be calculated. 14 refs., 2 figs

  18. Calculational analysis of errors for various models of an experiment on measuring leakage neutron spectra

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.; Deeva, V.V.; Prokof'eva, Z.A.

    1990-01-01

    An analysis is made of the effect of the accuracy of the mathematical model of the system concerned on the calculation results obtained with the BRAND program system. Consideration is given to the impact of the following factors: the accuracy of the description of the energy-angular characteristics of the neutron source, various degrees of approximation of the system geometry, and the adequacy of the Monte Carlo estimator to a real physical neutron detector. The analysis of the calculation results is based on experiments measuring leakage neutron spectra in spherical lead assemblies with a 14 MeV neutron source at the centre. 4 refs.; 2 figs.; 10 tabs.

  19. Review of calculational models and computer codes for environmental dose assessment of radioactive releases

    International Nuclear Information System (INIS)

    Strenge, D.L.; Watson, E.C.; Droppo, J.G.

    1976-06-01

    The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given

  20. Review of calculational models and computer codes for environmental dose assessment of radioactive releases

    Energy Technology Data Exchange (ETDEWEB)

    Strenge, D.L.; Watson, E.C.; Droppo, J.G.

    1976-06-01

    The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given.

  1. Shell model calculation for Te and Sn isotopes in the vicinity of {sup 100}Sn

    Energy Technology Data Exchange (ETDEWEB)

    Yakhelef, A.; Bouldjedri, A. [Physics Department, Farhat abbas University, Setif (Algeria); Physics Department, Hadj Lakhdar University, Batna (Algeria)

    2012-06-27

    New shell model calculations for the even-even isotopes 104-108Sn and 106,108Te, in the vicinity of 100Sn, have been performed. The calculations have been carried out using the Windows version of NuShell-MSU. The two-body matrix elements (TBMEs) of the effective interaction between valence nucleons are obtained from the renormalized two-body effective interaction based on the G-matrix derived from the CD-Bonn nucleon-nucleon potential. The single-particle energies of the proton and neutron valence-space orbitals are taken from the available spectra of the lightest odd isotopes of Sb and Sn, respectively.

  2. A brief look at model-based dose calculation principles, practicalities, and promise.

    Science.gov (United States)

    Sloboda, Ron S; Morrison, Hali; Cawston-Grant, Brie; Menon, Geetha V

    2017-02-01

    Model-based dose calculation algorithms (MBDCAs) have recently emerged as potential successors to the highly practical, but sometimes inaccurate TG-43 formalism for brachytherapy treatment planning. So named for their capacity to more accurately calculate dose deposition in a patient using information from medical images, these approaches to solve the linear Boltzmann radiation transport equation include point kernel superposition, the discrete ordinates method, and Monte Carlo simulation. In this overview, we describe three MBDCAs that are commercially available at the present time, and identify guidance from professional societies and the broader peer-reviewed literature intended to facilitate their safe and appropriate use. We also highlight several important considerations to keep in mind when introducing an MBDCA into clinical practice, and look briefly at early applications reported in the literature and selected from our own ongoing work. The enhanced dose calculation accuracy offered by a MBDCA comes at the additional cost of modelling the geometry and material composition of the patient in treatment position (as determined from imaging), and the treatment applicator (as characterized by the vendor). The adequacy of these inputs and of the radiation source model, which needs to be assessed for each treatment site, treatment technique, and radiation source type, determines the accuracy of the resultant dose calculations. Although new challenges associated with their familiarization, commissioning, clinical implementation, and quality assurance exist, MBDCAs clearly afford an opportunity to improve brachytherapy practice, particularly for low-energy sources.

  3. A model for the calculation of the radiation dose from natural radionuclides in The Netherlands

    International Nuclear Information System (INIS)

    Ackers, J.G.

    1986-02-01

A model has been developed to calculate the radiation dose incurred from natural radioactivity indoors and outdoors, expressed as effective dose equivalent per year. The model is applied to a three-room dwelling characterized by interconnecting air flows and to a dwelling with a crawlspace. In this model the individual parameters can be varied in order to investigate their relative influence. The effective dose equivalent for an adult in the dwelling was calculated to be about 1.7 mSv/year, composed of 15% from cosmic radiation, 35% from terrestrial radioactivity, 20% from radioactivity in the body and 30% from natural radionuclides in building materials. The calculations show an enhancement of about a factor of two in the radon concentration in the air of a room which is ventilated by air from an adjacent room. It is also shown that the attachment rate of radon decay products to aerosols and the plate-out effect are relatively important parameters influencing the magnitude of the dose rate. (Auth.)

  4. The development of early pediatric models and their application to radiation absorbed dose calculations

    International Nuclear Information System (INIS)

    Poston, J.W.

    1989-01-01

This presentation will review and describe the development of pediatric phantoms for use in radiation dose calculations. The development of pediatric models for dose calculations essentially paralleled that of the adult. In fact, Snyder and Fisher at the Oak Ridge National Laboratory reported on a series of phantoms for such calculations in 1966, about two years before the first MIRD publication on the adult human phantom. These phantoms, for a newborn, one-, five-, ten-, and fifteen-year-old, were derived from the adult phantom. The ''pediatric'' models were obtained through a series of transformations applied to the major dimensions of the adult, which were specified in a Cartesian coordinate system. These phantoms suffered from the fact that no real consideration was given to the influence of these mathematical transformations on the actual organ sizes in the derived models, nor to the relation of the resulting organ masses to those in humans of the particular age. Later, an extensive effort was invested in designing ''individual'' pediatric phantoms for each age based upon a careful review of the literature. Unfortunately, these phantoms had limited use and only a small number of calculations were made available to the user community. Examples of the phantoms, their typical dimensions, common weaknesses, etc. will be discussed.

  5. A new model for the accurate calculation of natural gas viscosity

    Directory of Open Access Journals (Sweden)

    Xiaohong Yang

    2017-03-01

Full Text Available Viscosity of natural gas is a basic and important parameter, of theoretical and practical significance in the domains of natural gas recovery, transmission and processing. In order to obtain accurate viscosity data efficiently and at low cost, a new model and its corresponding functional relation are derived on the basis of the relationship among viscosity, temperature and density given by the kinetic theory of gases. After the model parameters were optimized against a large body of experimental data, a diagram showing the variation of viscosity with temperature and density was prepared, showing that: ① the gas viscosity increases with increasing density, and also with increasing temperature in the low-density region; ② the gas viscosity increases with decreasing temperature in the high-density region. With this new model, the viscosity of 9 natural gas samples was calculated precisely. The average relative deviation between these calculated values and 1539 experimental data points measured at 250–450 K and 0.10–140.0 MPa is less than 1.9%. Compared with the 793 experimental data points with a measurement error of less than 0.5%, the maximum relative deviation is less than 0.98%. It is concluded that this new model is more advantageous than the previous 8 models in terms of simplicity, accuracy, fast calculation, and direct applicability to CO2-bearing gas samples.
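
The deviation figures quoted above follow from the usual relative-deviation definitions. The Python sketch below shows that calculation, with invented viscosity values standing in for the model output and the experimental data set; nothing here reproduces the paper's actual viscosity model.

```python
# A minimal sketch of the deviation metrics quoted in the abstract: average and
# maximum relative deviation between model-calculated and experimental viscosities.
# The arrays below are illustrative placeholders, not the 1539 measured points.

def relative_deviations(calculated, measured):
    """Return (average, maximum) relative deviation in percent."""
    devs = [abs(c - m) / m * 100.0 for c, m in zip(calculated, measured)]
    return sum(devs) / len(devs), max(devs)

calculated = [11.2e-6, 13.5e-6, 18.1e-6]   # Pa*s, hypothetical model output
measured   = [11.0e-6, 13.7e-6, 18.0e-6]   # Pa*s, hypothetical experimental data

avg_dev, max_dev = relative_deviations(calculated, measured)
print(f"average relative deviation: {avg_dev:.2f}%  maximum: {max_dev:.2f}%")
```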

  6. Development of a model to calculate the economic implications of improving the indoor climate

    DEFF Research Database (Denmark)

    Jensen, Kasper Lynge

    in the indoor environment. Office workers exposed to the same indoor environment conditions will in many cases wear different clothing, have different metabolic rates, experience micro environment differences etc. all factors that make it difficult to estimate the effects of the indoor environment...... have been developed; one model estimating the effects of indoor temperature on mental performance and one model estimating the effects of air quality on mental performance. Combined with dynamic building simulations and dose-response relationships, the derived models were used to calculate the total...... on performance. The Bayesian Network uses a probabilistic approach by which a probability distribution can take this variation of the different indoor variables into account. The result from total building economy calculations indicated that depending on the indoor environmental change (improvement...

  7. OPT13B and OPTIM4 - computer codes for optical model calculations

    International Nuclear Information System (INIS)

    Pal, S.; Srivastava, D.K.; Mukhopadhyay, S.; Ganguly, N.K.

    1975-01-01

OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of the different formulae used for computation is given. Numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters which produces the best fit to experimental data in a least-squares sense is also discussed. The different subroutines of the program are briefly described. Input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)

  8. Improved Tensor-Based Singular Spectrum Analysis Based on Single Channel Blind Source Separation Algorithm and Its Application to Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Dan Yang

    2017-04-01

Full Text Available To solve the problem of multi-fault blind source separation (BSS) in the case where the observed signals are under-determined, a novel approach for single channel blind source separation (SCBSS) based on an improved tensor-based singular spectrum analysis (TSSA) is proposed. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, the TSSA method can be employed to extract the multi-fault features from the measured single-channel vibration signal. However, SCBSS based on TSSA still has some limitations, chiefly the unsatisfactory convergence of TSSA in many cases and the difficulty of accurately estimating the number of source signals. Therefore, an improved TSSA algorithm based on canonical decomposition and parallel factors (CANDECOMP/PARAFAC) weighted optimization, namely CP-WOPT, is proposed in this paper. The CP-WOPT algorithm processes the factor matrix using a first-order optimization approach instead of the original least-squares method in TSSA, so as to improve the convergence of the algorithm. In order to accurately estimate the number of source signals in BSS, the EMD-SVD-BIC (empirical mode decomposition—singular value decomposition—Bayesian information criterion) method, instead of the SVD in the conventional TSSA, is introduced. To validate the proposed method, we applied it to the analysis of a numerical simulation signal and multi-fault rolling bearing signals.

  9. A Novel Partial Discharge Ultra-High Frequency Signal De-Noising Method Based on a Single-Channel Blind Source Separation Algorithm

    Directory of Open Access Journals (Sweden)

    Liangliang Wei

    2018-02-01

Full Text Available To effectively de-noise the Gaussian white noise and periodic narrow-band interference in the background noise of partial discharge ultra-high frequency (PD UHF) signals in field tests, a novel de-noising method based on a single-channel blind source separation algorithm is proposed. Compared with traditional methods, the proposed method can effectively suppress the noise interference, and the distortion of the de-noised PD signal is smaller. Firstly, the PD UHF signal is time-frequency analyzed by the S-transform to obtain the number of source signals. Then, the single-channel detected PD signal is converted into multi-channel signals by singular value decomposition (SVD), and the background noise is separated from the multi-channel PD UHF signals by the joint approximate diagonalization of eigen-matrices method. At last, the source PD signal is estimated and recovered by the l1-norm minimization method. The proposed de-noising method was applied to simulation tests and field-test detected signals, and the de-noising performance of the different methods was compared. The simulation and field test results demonstrate the effectiveness and correctness of the proposed method.
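
The abstract does not spell out how the single-channel signal becomes multi-channel; a common choice, assumed in the sketch below, is to embed the signal in a Hankel (trajectory) matrix and treat leading SVD components as pseudo-channels. The signal and all parameters are toy values, not the paper's construction.

```python
# Sketch of a single-channel -> multi-channel step via SVD, assuming a Hankel
# (trajectory) embedding; the paper's exact construction may differ. Each leading
# singular component is treated as one pseudo-channel of the original signal.
import numpy as np

def svd_pseudo_channels(x, window=64, n_channels=4):
    n = len(x) - window + 1
    hankel = np.stack([x[i:i + window] for i in range(n)])   # trajectory matrix
    u, s, vt = np.linalg.svd(hankel, full_matrices=False)
    # Rank-one reconstructions: each is a smoothed "virtual channel" of the signal.
    return [s[k] * np.outer(u[:, k], vt[k]).mean(axis=1) for k in range(n_channels)]

t = np.linspace(0.0, 1.0, 2048)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)  # toy noisy signal
channels = svd_pseudo_channels(signal)
print(len(channels), channels[0].shape)
```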

  10. A four-equation friction model for water hammer calculation in quasi-rigid pipelines

    International Nuclear Information System (INIS)

    Ghodhbani, Abdelaziz; Haj Taïeb, Ezzeddine

    2017-01-01

Friction coupling affects water hammer evolution in pipelines according to the initial flow regime. Unsteady friction models have only been validated with the uncoupled formulation. On the other hand, coupled models such as the four-equation model provide a more accurate prediction of water hammer since fluid-structure interaction (FSI) is taken into account, but they are limited to a steady-state friction formulation. This paper deals with the creation of a “four-equation friction model”, which is based on the incorporation of the unsteady head loss given by an unsteady friction model into the four-equation model. For transient laminar flow cases, the Zielke model is considered. The proposed model is applied to a quasi-rigid pipe with an axially moving valve, and then solved by the method of characteristics (MOC). The damping and shape of the numerical solution are in good agreement with experimental data. Thus, the proposed model can be incorporated into a new computer code. - Highlights: • Both the Zielke model and the four-equation model are insufficient to predict water hammer. • The proposed four-equation friction model is obtained by incorporating the unsteady head loss into the four-equation model. • The solution obtained by the proposed model is in good agreement with experimental data. • The wave-speed adjustment scheme is more efficient than interpolation schemes.

  11. The High Level Mathematical Models in Calculating Aircraft Gas Turbine Engine Parameters

    Directory of Open Access Journals (Sweden)

    Yu. A. Ezrokhi

    2017-01-01

Full Text Available The article describes high-level mathematical models developed to solve special problems arising at the later stages of design, concerning calculation of aircraft gas turbine engines (GTEs) under real operating conditions. The use of blade-row mathematical models, as well as mathematical models of a higher level including 2D and 3D descriptions of the working process in the engine units and components, makes it possible to determine the parameters and characteristics of an aircraft engine under conditions significantly different from the design ones. The paper considers the application of mathematical modelling methods (MMM) for solving a wide range of practical problems, such as forcing the engine by injection of water into the flow path, estimating the effect of thermal instability on the GTE characteristics, and simulating engine start-up and windmill starting conditions. It shows that the use of MMM, when optimizing the laws of compressor stator control as well as the supply of cooling air to the hot turbine components, can significantly improve the integral thrust and economic characteristics of the engine in terms of its gas-dynamic stability, reliability and service life. It should be borne in mind that blade-row mathematical models of the engine are designed to solve purely "engine" problems and do not replace the existing models of various complexity levels used in the calculation and design of compressors and turbines, because their description of the working processes in these units is inevitably inferior in quality to that of such specialized models. It is shown that the choice of the mathematical modelling level of an aircraft engine for solving a particular problem arising in its design and computational study is to a large extent a compromise. Despite the significantly higher "resolution" and information content, the engine mathematical models containing 2D and 3D approaches to the calculation of flow in blade machine

  12. Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model.

    Science.gov (United States)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-01-01

    The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
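
As a rough illustration of the double-Gaussian beam model described above, the sketch below evaluates a lateral dose profile as a weighted sum of a narrow primary Gaussian and a wide halo Gaussian. The widths and halo weight are invented numbers, not the parameterizations fitted in the paper.

```python
# Illustrative double-Gaussian lateral beam model: narrow primary component plus
# a wide low-dose "halo" component. All parameter values below are made up.
import math

def lateral_dose(r_mm, sigma1_mm=4.0, sigma2_mm=15.0, halo_weight=0.1):
    g1 = math.exp(-r_mm**2 / (2 * sigma1_mm**2)) / (2 * math.pi * sigma1_mm**2)
    g2 = math.exp(-r_mm**2 / (2 * sigma2_mm**2)) / (2 * math.pi * sigma2_mm**2)
    return (1.0 - halo_weight) * g1 + halo_weight * g2

for r in (0.0, 5.0, 20.0):
    print(f"r = {r:5.1f} mm  relative dose = {lateral_dose(r):.3e}")
```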

  13. Fast pencil beam dose calculation for proton therapy using a double-Gaussian beam model

    Directory of Open Access Journals (Sweden)

    Joakim eda Silva

    2015-12-01

    Full Text Available The highly conformal dose distributions produced by scanned proton pencil beams are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a pencil beam algorithm running on graphics processing units (GPUs intended specifically for online dose calculation. Here we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such pencil beam algorithm for proton therapy running on a GPU. We employ two different parametrizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of pencil beams in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included whilst prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Further, the calculation time is relatively unaffected by the parametrization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.

  14. Measurement-based aerosol forcing calculations: The influence of model complexity

    Directory of Open Access Journals (Sweden)

    Manfred Wendisch

    2001-03-01

Full Text Available On the basis of ground-based microphysical and chemical aerosol measurements, a simple 'two-layer-single-wavelength' and a complex 'multiple-layer-multiple-wavelength' radiative transfer model are used to calculate the local solar radiative forcing of black carbon (BC) and (NH4)2SO4 (ammonium sulfate) particles and mixtures (external and internal) of both materials. The focal points of our approach are (a) that the radiative forcing calculations are based on detailed aerosol measurements with special emphasis on particle absorption, and (b) that the results of the radiative forcing calculations with two different types of models (with regard to model complexity) are compared using identical input data. The sensitivity of the radiative forcing to key input parameters (type of particle mixture, particle growth due to humidity, surface albedo, solar zenith angle, boundary layer height) is investigated. It is shown that the model results for external particle mixtures (wet and dry) only slightly differ from those of the corresponding internal mixture. This conclusion is valid for the results of both model types and for both surface albedo scenarios considered (grass and snow). Furthermore, it is concluded that the results of the two model types approximately agree if it is assumed that the aerosol particles are composed of pure BC. As soon as a mainly scattering substance is included, alone or in (internal or external) mixture with BC, the differences between the radiative forcings of both models become significant. This discrepancy results from neglecting multiple scattering effects in the simple radiative transfer model.

  15. Study on the Calculation Models of Bus Delay at Bays Using Queueing Theory and Markov Chain

    Directory of Open Access Journals (Sweden)

    Feng Sun

    2015-01-01

Full Text Available Traffic congestion at bus bays has seriously decreased the service efficiency of public transit in China, so it is crucial to study its theory and methods systematically. However, the existing studies lack a theoretical model for computing the delay. Therefore, calculation models of bus delay at bays are studied. Firstly, the process by which buses are delayed at bays is analyzed, and it is found that the delay can be divided into entering delay and exiting delay. Secondly, queueing models of bus bays are formulated, and the equilibrium distribution functions are obtained by applying an embedded Markov chain to the traditional queueing-theory model in the steady state; the calculation models of entering delay at bays are then derived. Thirdly, the exiting delay is studied by using queueing theory and gap acceptance theory. Finally, the proposed models are validated using field-measured data, and the influencing factors are discussed. With these models the delay is easily assessed knowing the characteristics of the dwell-time distribution and the traffic volume in the curb lane at different locations and during different periods. This can provide a basis for the efficiency evaluation of bus bays.
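
The closed-form delay expressions are not given in the abstract; purely as an illustration of the queueing-theory ingredient, the sketch below computes the mean wait in a textbook M/M/1 queue, a stand-in for the embedded-Markov-chain model actually derived in the paper. The arrival and service rates are invented.

```python
# Textbook M/M/1 mean waiting time, used here only as a stand-in for the
# entering-delay model derived in the paper (which uses an embedded Markov chain).
def mm1_mean_wait(arrival_rate, service_rate):
    """Mean time spent waiting in queue (not in service), in the same time unit."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable (utilization >= 1)")
    return rho / (service_rate - arrival_rate)

# e.g. 40 buses/h arriving, a berth serving 60 buses/h -> mean queueing delay
print(mm1_mean_wait(40.0, 60.0) * 3600, "seconds")
```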

  16. Generalized Born and Explicit Solvent Models for Free Energy Calculations in Organic Solvents: Cyclodextrin Dimerization.

    Science.gov (United States)

    Zhang, Haiyang; Tan, Tianwei; van der Spoel, David

    2015-11-10

Evaluation of solvation (binding) free energies with implicit solvent models in different dielectric environments, for biological simulations as well as high-throughput ligand screening, remains a challenging endeavor. In order to address how well implicit solvent models approximate explicit ones, we examined four generalized Born models (GB(Still), GB(HCT), GB(OBC)I, and GB(OBC)II) for determining the dimerization free energy (ΔG(0)) of β-cyclodextrin monomers in 17 implicit solvents with dielectric constants (D) ranging from 5 to 80, and compared the results to previous free energy calculations with explicit solvents (Zhang et al., J. Phys. Chem. B 2012, 116, 12684-12693). The comparison indicates that neglecting the environmental dependence of the Born radii appears acceptable for such calculations involving cyclodextrin, and that the GB(Still) and GB(OBC)I models yield a reasonable estimation of ΔG(0), although the details of binding are quite different from explicit solvents. Large discrepancies between implicit and explicit solvent models occur in high-dielectric media with strong hydrogen bond (HB) interruption properties. ΔG(0) with the GB models is shown to correlate strongly with 2(D-1)/(2D+1) (R(2) ∼ 0.90), in line with the Onsager reaction field (Onsager, J. Am. Chem. Soc. 1936, 58, 1486-1493), but to be very sensitive to D, whereas the explicit solvent calculations (J. Chem. Inf. Model. 2015, 55, 1192-1201) reproduce the weak experimental correlations with 2(D-1)/(2D+1) very well.
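
The correlation variable mentioned above is the Onsager-type factor 2(D-1)/(2D+1). The sketch below simply evaluates it over the dielectric range studied (5-80); no fitted coefficients from the paper are implied.

```python
# The Onsager-type factor 2(D-1)/(2D+1) that the dimerization free energies are
# reported to correlate with; the dielectric constants are example values spanning
# the studied range.
def onsager_factor(dielectric):
    return 2.0 * (dielectric - 1.0) / (2.0 * dielectric + 1.0)

for d in (5, 20, 40, 80):
    print(f"D = {d:3d}  2(D-1)/(2D+1) = {onsager_factor(d):.3f}")
```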

  17. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    Science.gov (United States)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administration, companies and population need efficient indicators of the possible effects given by a change in decision, strategy or habit. The monetary quantification of health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decision and information at all levels. The development of modelling tools for the calculation of external costs can provide support to analysts in the development of consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta-external costs of air pollution comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, represented by a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper. A comparison with other existing models worldwide is reported.

  18. Iron-chromium alloys and free surfaces: from ab initio calculations to thermodynamic modeling

    International Nuclear Information System (INIS)

    Levesque, M.

    2010-11-01

Ferritic steels, possibly strengthened by oxide dispersion, are candidates as structural materials for generation IV and fusion nuclear reactors. Their use is limited by incomplete knowledge of the iron-chromium phase diagram at low temperatures and of the phenomena inducing preferential segregation of one element at grain boundaries or at surfaces. In this context, this work contributes to the multi-scale study of the model iron-chromium alloy and its free surfaces by numerical simulations. The study begins with ab initio calculations of properties related to the mixing of iron and chromium atoms. We highlight a complex dependency of the magnetic moments of the chromium atoms on their local chemical environment. Surface properties also prove sensitive to magnetism. This is the case for the segregation of chromium impurities in iron and for their interactions near the surface. In a second step, we construct a simple energy model with high numerical efficiency. It is based on pair interactions on a rigid lattice, to which local chemical environment and temperature dependencies are given. With this model, we reproduce the ab initio results at zero temperature and experimental results at high temperature. We also deduce the solubility limits at all intermediate temperatures with mean field approximations that we compare to Monte Carlo simulations. The last step of our work is to introduce free surfaces into our model. We then study the effect of ab initio calculated bulk and surface properties on surface segregation. Finally, we calculate segregation isotherms. We therefore propose a model for the evolution of the surface composition of iron-chromium alloys as a function of bulk composition.

  19. Optimal Calculation of Residuals for ARMAX Models with Applications to Model Verification

    DEFF Research Database (Denmark)

    Knudsen, Torben

    1997-01-01

    Residual tests for sufficient model orders are based on the assumption that prediction errors are white when the model is correct. If an ARMAX system has zeros in the MA part which are close to the unit circle, then the standard predictor can have large transients. Even when the correct model...

  20. Development and application of the PBMR fission product release calculation model

    International Nuclear Information System (INIS)

    Merwe, J.J. van der; Clifford, I.

    2008-01-01

    At PBMR, long-lived fission product release from spherical fuel spheres is calculated using the German legacy software product GETTER. GETTER is a good tool when performing calculations for fuel spheres under controlled operating conditions, including irradiation tests and post-irradiation heat-up experiments. It has proved itself as a versatile reactor analysis tool, but is rather cumbersome when used for accident and sensitivity analysis. Developments in depressurized loss of forced cooling (DLOFC) accident analysis using GETTER led to the creation of FIssion Product RElease under accident (X) conditions (FIPREX), and later FIPREX-GETTER. FIPREX-GETTER is designed as a wrapper around GETTER so that calculations can be carried out for large numbers of fuel spheres with design and operating parameters that can be stochastically varied. This allows full Monte Carlo sensitivity analyses to be performed for representative cores containing many fuel spheres. The development process and application of FIPREX-GETTER in reactor analysis at PBMR is explained and the requirements for future developments of the code are discussed. Results are presented for a sample PBMR core design under normal operating conditions as well as a suite of design-base accident events, illustrating the functionality of FIPREX-GETTER. Monte Carlo sensitivity analysis principles are explained and presented for each calculation type. The plan and current status of verification and validation (V and V) is described. This is an important and necessary process for all software and calculation model development at PBMR

  1. MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM

    Directory of Open Access Journals (Sweden)

    A. G. Korobeynikov

    2015-05-01

Full Text Available Subject of research. The paper deals with a mathematical model for calculating information risks arising during the transport and distribution of material resources under conditions of uncertainty. Here, information risk means the danger of losses or damage resulting from the company's use of information technologies. Method. The solution is based on the ideology of the transport problem in its stochastic statement, drawing on methods of mathematical modeling, graph theory, probability theory and Markov chains. The mathematical model is constructed in several stages. At the initial stage, the capacity of the different sections as a function of time is calculated on the basis of information received from the information and logistics system, the weight matrix is formed, and the digraph is constructed. Then the minimum route covering all specified vertices is found by means of Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. The resulting solutions give the probabilities of the resources being located at a particular vertex as a function of time. At the third stage, the overall probability of traversing the whole route as a function of time is calculated on the basis of the multiplication theorem of probabilities. Information risk, as a function of time, is defined as the product of the greatest possible damage and the overall probability of traversing the whole route. In this case information risk is measured in units of damage corresponding to the monetary unit in which the information and logistics system operates. Main results. The operability of the presented mathematical model is shown on a concrete example of transportation of material resources where the places of shipment and delivery, the routes and their capacity, the greatest possible damage and the admissible risk are specified. The calculations presented on a diagram showed
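
A minimal sketch of the route/risk pipeline described above: Dijkstra's shortest path on a small weighted digraph, the multiplication theorem for the probability of traversing the whole route, and risk as the product of the greatest possible damage and that probability, as the abstract defines it. The graph, per-edge probabilities and damage value are invented.

```python
# Minimal illustration of the route/risk pipeline: Dijkstra shortest path, route
# probability by the multiplication theorem, risk = damage x route probability
# (the definition stated in the abstract). All numbers are invented.
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbour, weight), ...]}. Returns (cost, path)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

graph = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0), ("D", 4.0)], "C": [("D", 2.0)]}
cost, route = dijkstra(graph, "A", "D")

edge_pass_probability = {("A", "B"): 0.98, ("B", "C"): 0.95, ("C", "D"): 0.97}
p_route = 1.0
for a, b in zip(route, route[1:]):
    p_route *= edge_pass_probability[(a, b)]      # multiplication theorem

max_damage = 100_000.0                            # monetary units, invented
information_risk = max_damage * p_route           # as defined in the abstract
print(route, cost, round(information_risk, 2))
```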

  2. Analytical calculation of detailed model parameters of cast resin dry-type transformers

    International Nuclear Information System (INIS)

    Eslamian, M.; Vahidi, B.; Hosseinian, S.H.

    2011-01-01

Highlights: → In this paper the high-frequency behavior of cast resin dry-type transformers was simulated. → Parameters of the detailed model were calculated using an analytical method and compared with FEM results. → A laboratory transformer was constructed in order to compare theoretical and experimental results. -- Abstract: The non-flammable characteristics of cast resin dry-type transformers make them suitable for many kinds of applications. This paper presents an analytical method for obtaining the parameters of a detailed model of these transformers. The calculated parameters are compared with and verified against the corresponding FEM results and, where necessary, correction factors are introduced to modify the analytical solutions. Transient voltages under full and chopped test impulses are calculated using the obtained detailed model. In order to validate the model, a setup was constructed for testing the high-voltage winding of a cast resin dry-type transformer. The simulation results were compared with the experimental data measured from FRA and impulse tests.

  3. SHARC, a model for calculating atmospheric and infrared radiation under non-equilibrium conditions

    Science.gov (United States)

    Sundberg, R. L.; Duff, J. W.; Gruninger, J. H.; Bernstein, L. S.; Sharma, R. D.

    1994-01-01

    A new computer model, SHARC, has been developed by the Air Force for calculating high-altitude atmospheric IR radiance and transmittance spectra with a resolution of better than 1/cm. Comprehensive coverage of the 2 to 40 microns (250/cm to 5,000/cm) wavelength region is provided for arbitrary lines of sight in the 50-300 km altitude regime. SHARC accounts for the deviation from local thermodynamic equilibrium (LTE) in vibrational state populations by explicitly modeling the detailed production, loss, and energy transfer process among the important molecular vibrational states. The calculated vibrational populations are found to be similar to those obtained from other non-LTE codes. The radiation transport algorithm is based on a single-line equivalent width approximation along with a statistical correction for line overlap. This approach is reasonably accurate for most applications and is roughly two orders of magnitude faster than the traditional LBL methods which explicitly integrate over individual line shapes. In addition to quiescent atmospheric processes, this model calculates the auroral production and excitation of CO2, NO, and NO(+) in localized regions of the atmosphere. Illustrative comparisons of SHARC predictions to other models and to data from the CIRRIS, SPIRE, and FWI field experiments are presented.

  4. Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation

    International Nuclear Information System (INIS)

    Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C-M

    2004-01-01

The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams.

  5. The ratio of ICRP103 to ICRP60 calculated effective doses from CT: Monte Carlo calculations with the ADELAIDE voxel paediatric model and comparisons with published values.

    Science.gov (United States)

    Caon, Martin

    2013-09-01

The ADELAIDE voxel model of paediatric anatomy was used with the EGSnrc Monte Carlo code to compare effective dose from computed tomography (CT) calculated with both the ICRP103 and ICRP60 definitions, which differ in their tissue weighting factors and in the included tissues. The new tissue weighting factors resulted in a lower effective dose for pelvis CT (than if calculated using ICRP60 tissue weighting factors), by 6.5%, but higher effective doses for all other examinations. The ICRP103-calculated effective dose was higher for CT abdomen + pelvis (by 4.6%), for CT abdomen (by 9.5%), for CT chest + abdomen + pelvis (by 6%), for CT chest + abdomen (by 9.6%), for CT chest (by 10.1%) and for cardiac CT (by 11.5%). These values, along with published values of effective dose from CT that were calculated for both sets of tissue weighting factors, were used to determine single values for the ratio of ICRP103 to ICRP60 calculated effective doses from CT, for seven CT examinations. The following values for ICRP103:ICRP60 are suggested for converting ICRP60-calculated effective dose to ICRP103-calculated effective dose: pelvis CT, 0.75; abdomen CT, abdomen + pelvis CT and chest + abdomen + pelvis CT, 1.00; chest + abdomen CT and chest CT, 1.15; cardiac CT, 1.25.
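
The suggested ratios above amount to a simple lookup-and-multiply conversion; the sketch below applies them to convert an ICRP60-calculated effective dose to its ICRP103 equivalent. The 4.2 mSv input is an invented example value.

```python
# The suggested ICRP103:ICRP60 conversion ratios listed above, applied as a simple
# lookup to convert an ICRP60-calculated effective dose to its ICRP103 equivalent.
ICRP103_OVER_ICRP60 = {
    "pelvis": 0.75,
    "abdomen": 1.00,
    "abdomen+pelvis": 1.00,
    "chest+abdomen+pelvis": 1.00,
    "chest+abdomen": 1.15,
    "chest": 1.15,
    "cardiac": 1.25,
}

def convert_effective_dose(dose_icrp60_msv, examination):
    return dose_icrp60_msv * ICRP103_OVER_ICRP60[examination]

print(convert_effective_dose(4.2, "chest"))   # -> 4.83 mSv under ICRP103
```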

  6. The ratio of ICRP103 to ICRP60 calculated effective doses from CT: Monte Carlo calculations with the ADELAIDE voxel paediatric model and comparisons with published values

    International Nuclear Information System (INIS)

    Caon, Martin

    2013-01-01

The ADELAIDE voxel model of paediatric anatomy was used with the EGSnrc Monte Carlo code to compare effective dose from computed tomography (CT) calculated with both the ICRP103 and ICRP60 definitions, which differ in their tissue weighting factors and in the included tissues. The new tissue weighting factors resulted in a lower effective dose for pelvis CT (than if calculated using ICRP60 tissue weighting factors), by 6.5 %, but higher effective doses for all other examinations. The ICRP103-calculated effective dose was higher for CT abdomen + pelvis (by 4.6 %), for CT abdomen (by 9.5 %), for CT chest + abdomen + pelvis (by 6 %), for CT chest + abdomen (by 9.6 %), for CT chest (by 10.1 %) and for cardiac CT (by 11.5 %). These values, along with published values of effective dose from CT that were calculated for both sets of tissue weighting factors, were used to determine single values for the ratio of ICRP103 to ICRP60 calculated effective doses from CT, for seven CT examinations. The following values for ICRP103:ICRP60 are suggested for converting ICRP60-calculated effective dose to ICRP103-calculated effective dose: pelvis CT, 0.75; abdomen CT, abdomen + pelvis CT and chest + abdomen + pelvis CT, 1.00; chest + abdomen CT and chest CT, 1.15; cardiac CT, 1.25.

  7. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    Directory of Open Access Journals (Sweden)

    Stovgaard Kasper

    2010-08-01

Full Text Available Abstract. Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high-resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low-resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can, for example, be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy-atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high-quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in decoy recognition performance. In conclusion, the presented method shows great promise for
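
The Debye formula referred to above is I(q) = Σ_i Σ_j f_i(q) f_j(q) sin(q r_ij)/(q r_ij). The sketch below evaluates it directly for a toy set of dummy-atom coordinates with constant form factors; the paper instead uses per-residue form factors estimated from high-quality structures.

```python
# A minimal sketch of the Debye formula underlying the coarse-grained SAXS
# calculation. Constant form factors and random "dummy atom" coordinates are
# used purely for illustration.
import numpy as np

def debye_intensity(coords, form_factors, q_values):
    diff = coords[:, None, :] - coords[None, :, :]
    r_ij = np.sqrt((diff ** 2).sum(axis=-1))          # pair distances
    ff = np.outer(form_factors, form_factors)
    intensities = []
    for q in q_values:
        x = q * r_ij
        sinc = np.where(x > 1e-12, np.sin(x) / np.maximum(x, 1e-12), 1.0)
        intensities.append((ff * sinc).sum())
    return np.array(intensities)

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 30.0, size=(50, 3))         # angstroms, toy structure
f = np.ones(50)                                        # constant dummy form factors
q = np.linspace(0.01, 0.5, 5)                          # inverse angstroms
print(debye_intensity(coords, f, q))
```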

  8. Application of a Monte Carlo linac model in routine verifications of dose calculations

    International Nuclear Information System (INIS)

    Linares Rosales, H. M.; Alfonso Laguardia, R.; Lara Mas, E.; Popescu, T.

    2015-01-01

The analysis of some parameters of interest in radiotherapy medical physics, based on an experimentally validated Monte Carlo model of an Elekta Precise linear accelerator, was performed for 6 and 15 MV photon beams. The simulations were performed using the EGSnrc code. As a reference for the simulations, the optimal beam parameter values (energy and FWHM) obtained previously were used. Deposited-dose calculations in water phantoms were done for typical complex geometries commonly used in acceptance and quality control tests, such as irregular and asymmetric fields. Parameters such as MLC scatter, maximum opening or closing position, and the separation between them were analyzed from calculations in water. Similarly, simulations were performed on phantoms obtained from CT studies of real patients, comparing the dose distribution calculated with EGSnrc and the dose distribution obtained from the computerized treatment planning systems (TPS) used in routine clinical plans. All the results showed good agreement with measurements, all of them being within tolerance limits. These results open the possibility of using the developed model as a robust verification tool for validating calculations in very complex situations, where the accuracy of the available TPS could be questionable. (Author)

  9. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun

    2015-10-01

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  10. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  11. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    International Nuclear Information System (INIS)

    Yao, W; Farr, J

    2015-01-01

Purpose: To develop a random walk model algorithm for calculating proton dose with a balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scattering (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model derives a spatial distribution from the angular one to accelerate the computation and to decrease the memory usage. From the physics and from comparison with MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, energy spectrum after energy absorption, etc., which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computation burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatter at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water with a 10 cm cortical bone and a 10 cm lung embedded in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model greatly improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations
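
The gamma-test criteria quoted above (e.g. 2%/2 mm with a 10% low-dose threshold) can be illustrated with a simple 1D gamma-index calculation, sketched below on toy dose profiles; real evaluations are done in 3D against the MC reference.

```python
# Sketch of a 1D gamma-index test (dose-difference / distance-to-agreement) with a
# low-dose threshold. Positions, doses and profiles are toy values.
import numpy as np

def gamma_pass_rate(x, dose_eval, dose_ref, dd=0.02, dta=2.0, threshold=0.10):
    d_max = dose_ref.max()
    passed, counted = 0, 0
    for i, (xi, de) in enumerate(zip(x, dose_eval)):
        if dose_ref[i] < threshold * d_max:
            continue                                   # below low-dose threshold
        counted += 1
        gamma_sq = ((x - xi) / dta) ** 2 + ((dose_ref - de) / (dd * d_max)) ** 2
        if np.sqrt(gamma_sq.min()) <= 1.0:
            passed += 1
    return 100.0 * passed / counted

x = np.linspace(0.0, 100.0, 101)                       # mm
dose_ref = np.exp(-((x - 50.0) / 15.0) ** 2)           # toy reference profile
dose_eval = np.exp(-((x - 50.5) / 15.0) ** 2)          # slightly shifted profile
print(f"gamma pass rate: {gamma_pass_rate(x, dose_eval, dose_ref):.1f}%")
```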

  12. Reply to comment on 'Model calculation of the scanned field enhancement factor of CNTs'

    International Nuclear Information System (INIS)

    Ahmad, Amir; Tripathi, V K

    2010-01-01

In the paper (Ahmad and Tripathi 2006 Nanotechnology 17 3798), we derived an expression to compute the field enhancement factor of CNTs under any positional distribution of CNTs by using the model of a floating sphere between parallel anode and cathode plates. Using this expression we can compute the field enhancement factor of a CNT in a cluster (non-uniformly distributed CNTs). This expression was also used to compute the field enhancement factor of a CNT in an array (uniformly distributed CNTs). We used an approximation to calculate the field enhancement factor; hence, our expressions are correct only under that assumption. Zhbanov et al (2010 Nanotechnology 21 358001) suggest a correction that allows the field enhancement factor to be calculated without using the approximation. Hence, this correction can improve the applicability of this model. (reply)

  13. Extended wave-packet model to calculate energy-loss moments of protons in matter

    Science.gov (United States)

    Archubi, C. D.; Arista, N. R.

    2017-12-01

In this work we introduce modifications to the wave-packet method proposed by Kaneko to calculate the energy-loss moments of a projectile traversing a target which is represented in terms of Gaussian functions for the momentum distributions of electrons in the atomic shells. These modifications are introduced using the Levine and Louie technique to take into account the energy gaps corresponding to the different atomic levels of the target. We use the extended wave-packet model to evaluate the stopping power, the energy straggling, the inverse mean free path, and the ionization cross sections for protons in several targets, obtaining good agreement for all these quantities over an extensive energy range that covers the low-, intermediate-, and high-energy regions. The extended wave-packet model proposed here provides a method to calculate in a very straightforward way all the significant terms of the inelastic interaction of light ions with any element of the periodic table.

  14. Calculation of search volume on cruise-searching planktivorous fish in foraging model.

    Science.gov (United States)

    Park, Bae Kyung; Lee, Yong Seok; Park, Seok Soon

    2007-07-01

The search volume of a cruising planktivorous fish was calculated based on its detailed behavior. To examine the factors influencing search volume, a series of experiments was conducted by varying ambient conditions such as structural complexity, light intensity and turbidity. Pseudorasbora parva were used in the experiments as the predator and Daphnia pulex was selected as the prey. The scanning area of P. parva was elliptic in shape, and the search volume changed drastically depending on ambient conditions. Compared with the results of a previous foraging model, the search volumes of the fish in the previous study were larger (1.2 to 2.4 times) than those from our study. These results on the changes in feeding rate can be useful in determining the microhabitat requirements of P. parva and other cyprinids with a similar foraging behavior. The calculated search volume is compared with another foraging model, and the effect of zooplankton-planktivore interactions on the aquatic ecosystem is discussed.

  15. Modified Bean Model and FEM Method Combined for Persistent Current Calculation in Superconducting Coils

    CERN Document Server

    Völlinger, Christine; Russenschuck, Stephan

    2001-01-01

Field variations in the LHC superconducting magnets, e.g. during the ramping of the magnets, induce magnetization currents in the superconducting material, the so-called persistent currents, that do not decay but persist due to the lack of resistivity. This paper describes a semi-analytical hysteresis model for hard superconductors, which has been developed for the computation of the total field errors arising from persistent currents. Since the superconducting coil is surrounded by a ferromagnetic yoke structure, the persistent current model is combined with the finite element method (FEM), as the non-linear yoke can only be calculated numerically. The finite element method used is based on a reduced vector potential formulation that avoids the meshing of the coil while calculating the part of the field arising from the source currents by means of the Biot-Savart law. The combination allows persistent current induced field errors to be determined as a function of the excitation and for arbitrarily shaped iron yoke...

  16. Comparison of inverse dynamics calculated by two- and three-dimensional models during walking

    DEFF Research Database (Denmark)

    Alkjaer, T; Simonsen, E B; Dyhre-Poulsen, P

    2001-01-01

recorded the subjects as they walked across two force plates. The subjects were invited to approach a walking speed of 4.5 km/h. The ankle, knee and hip joint moments in the sagittal plane were calculated by 2D and 3D inverse dynamics analysis and compared. Despite the uniform walking speed (4.53 km/h) and similar footwear, relatively large inter-individual variations were found in the joint moment patterns during the stance phase. The differences between individuals were present in both the 2D and 3D analysis. For the entire sample of subjects the overall time course pattern of the ankle, knee and hip...... the magnitude of the joint moments calculated by 2D and 3D inverse dynamics but the inter-individual variation was not affected by the different models. The simpler 2D model seems therefore appropriate for human gait analysis. However, comparisons of gait data from different studies are problematic...
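
For readers unfamiliar with the 2D calculation being compared, the sketch below shows one quasi-Newton-Euler step of a sagittal-plane inverse dynamics analysis: the net ankle moment from the ground reaction force, gravity and the foot segment's inertial terms. All numerical inputs are invented illustrative values, not data from this study.

```python
# Sketch of one step of a 2D (sagittal-plane) inverse dynamics calculation: the
# net ankle joint moment from the ground reaction force and the foot segment's
# inertial terms (Newton-Euler about the ankle). All numbers are invented.
import numpy as np

def ankle_moment_2d(grf, cop, ankle, foot_mass, foot_com, foot_acc,
                    foot_inertia, foot_alpha, g=9.81):
    """Returns the net ankle moment (N*m), z-component of 2D cross products."""
    def cross2(a, b):
        return a[0] * b[1] - a[1] * b[0]
    weight = np.array([0.0, -foot_mass * g])
    # Moment balance about the ankle: M_ankle + moment of GRF + moment of gravity
    # = I*alpha + m * (r_com/ankle x a_com)  (approximate angular momentum rate).
    m_grf = cross2(cop - ankle, grf)
    m_weight = cross2(foot_com - ankle, weight)
    inertial = foot_inertia * foot_alpha + foot_mass * cross2(foot_com - ankle, foot_acc)
    return inertial - m_grf - m_weight

grf = np.array([20.0, 700.0])        # N, ground reaction force
cop = np.array([0.15, 0.0])          # m, centre of pressure
ankle = np.array([0.05, 0.08])       # m, ankle joint centre
print(ankle_moment_2d(grf, cop, ankle, foot_mass=1.1,
                      foot_com=np.array([0.10, 0.04]),
                      foot_acc=np.array([0.2, -0.1]),
                      foot_inertia=0.005, foot_alpha=1.5))
```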

  17. A calculation model for X-ray diffraction by curved-graphene nanoparticles

    International Nuclear Information System (INIS)

    Chernozatonskii, L.A.; Neverov, V.S.; Kukushkin, A.B.

    2012-01-01

An approximation of the positions of carbon atoms in a curved graphene sheet is suggested for the calculation of X-ray diffraction (XRD) patterns of curved-graphene nanoparticles. The model is tested for carbon nanotubes and newly calculated carbon nanotoroids consisting of several hundred atoms. It is shown that the random distribution of carbon atoms with the graphene surface-averaged density, together with the local graphene-like rearrangement of atoms in a curved lattice, is sufficient for describing the XRD patterns of an ensemble of the corresponding exact carbon nanoparticles of random isotropic orientation in the range of the scattering wave vector modulus q from several units to several tens of inverse nanometers. The model is of interest for fast, routine identification of curved-graphene nanoparticles in carbonaceous materials.

  18. Use of shell model calculations in R-matrix studies of neutron-induced reactions

    International Nuclear Information System (INIS)

    Knox, H.D.

    1986-01-01

R-matrix analyses of neutron-induced reactions for many of the lightest p-shell nuclei are difficult due to a lack of distinct resonance structure in the reaction cross sections. Initial values for the required R-matrix parameters, E_λ and γ_λc, for states in the compound system can be obtained from shell model calculations. In the present work, the results of recent shell model calculations for the lithium isotopes have been used in R-matrix analyses of the 6Li+n and 7Li+n reactions. The influence of the calculated states of 7Li and 8Li on the 6Li+n and 7Li+n reaction mechanisms and cross sections is discussed. (author)

  19. Efficient Finite Element Models for Calculation of the No-load losses of the Transformer

    Directory of Open Access Journals (Sweden)

    Kamran Dawood

    2017-10-01

Full Text Available Different transformer models are examined for the calculation of the no-load losses using finite element analysis. Two-dimensional and three-dimensional finite element analyses are used for the simulation of the transformer. The results of the finite element method are also compared with the experimental results. The results show that the 3-dimensional model provides higher accuracy than the 2-dimensional full and half models. However, the 2-dimensional half model is the least time-consuming method compared to the 3-dimensional and 2-dimensional full models. The simulation time taken by the different transformer models is also compared. The difference between the 3-dimensional finite element method and the experimental results is less than 3%. These numerical methods can help transformer designers to minimize the development of prototype transformers.

  20. Implications of imprecision in kinetic rate data for photochemical model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, R.W.; Thompson, A.M. [National Aeronautics and Space Administration, Greenbelt, MD (United States). Goddard Space Flight Center

    1997-12-31

    Evaluation of uncertainties in photochemical model calculations is of great importance to scientists performing assessment modeling. A major source of uncertainty is the measurement imprecision inherent in photochemical reaction rate data that modelers rely on. A rigorous method of evaluating the impact of data imprecision on computational uncertainty is the study of error propagation using Monte Carlo techniques. There are two problems with the current implementation of the Monte Carlo method. First, there is no satisfactory way of accounting for the variation of imprecision with temperature in 1, 2, or 3D models; second, due to its computational expense, it is impractical in 3D model studies. These difficulties are discussed. (author) 4 refs.
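
    A minimal sketch of the Monte Carlo error-propagation idea described here, using made-up rate coefficients, uncertainty factors and a toy steady-state expression rather than any evaluated kinetics data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rate coefficients (cm^3 molecule^-1 s^-1) and their
# multiplicative uncertainty factors f (one-sigma imprecision); values
# here are illustrative only.
k1_mean, f1 = 1.0e-12, 1.3
k2_mean, f2 = 3.0e-11, 1.2

n_samples = 10000
# Sample each rate coefficient from a lognormal distribution whose
# geometric standard deviation equals the uncertainty factor.
k1 = k1_mean * f1 ** rng.standard_normal(n_samples)
k2 = k2_mean * f2 ** rng.standard_normal(n_samples)

# Toy steady-state quantity proportional to k1/k2.
x_ss = k1 / k2

print(f"median k1/k2   : {np.median(x_ss):.3e}")
print(f"2.5-97.5% range: {np.percentile(x_ss, 2.5):.3e} .. {np.percentile(x_ss, 97.5):.3e}")
```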

  1. Isotope-hydrological models and calculational methods for investigation of groundwater flow

    International Nuclear Information System (INIS)

    Marton, L.

    1982-01-01

    Recharge of groundwater through a semi-confining bed is a typical hydrogeological phenomenon in quaternary deposits which are elevated to a lesser or greater degree above the surroundings. A simple hydrological model has been introduced in which the aquifer is recharged only by precipitation through a semi-permeable layer. For applying the model, it is necessary to know the age of the water or the radioisotope concentrations in some sections of the groundwater flow system. On the basis of the age, the hydraulic conductivity of the aquifer and of the semi-confining bed and the steady rate of infiltration can be calculated. Other hydraulic parameters can be determined with the help of a mathematical model worked out by Freeze and Witherspoon. The hydrological and mathematical models are used inversely and are complementary. The reliability and applicability of the hydrological model has been proved in practice and good results were gained in hydrogeological research carried out in Hungary. (author)

  2. A numerical model for calculating vibration from a railway tunnel embedded in a full-space

    Science.gov (United States)

    Hussein, M. F. M.; Hunt, H. E. M.

    2007-08-01

    Vibration generated by underground railways transmits to nearby buildings causing annoyance to inhabitants and malfunctioning to sensitive equipment. Vibration can be isolated through countermeasures by reducing the stiffness of railpads, using floating-slab tracks and/or supporting buildings on springs. Modelling of vibration from underground railways has recently gained more importance on account of the need to evaluate accurately the performance of vibration countermeasures before these are implemented. This paper develops an existing model, reported by Forrest and Hunt, for calculating vibration from underground railways. The model, known as the Pipe-in-Pipe model, has been developed in this paper to account for anti-symmetrical inputs and therefore to model tangential forces at the tunnel wall. Moreover, three different arrangements of supports are considered for floating-slab tracks, one of which can be used to model directly-fixed slabs. The paper also investigates the wave-guided solution of the track, the tunnel, the surrounding soil and the coupled system. It is shown that the dynamics of the track have a significant effect on the results calculated in the wavenumber-frequency domain and therefore play an important role in controlling vibration from underground railways.

  3. Influence of polarization and a source model for dose calculation in MRT

    Energy Technology Data Exchange (ETDEWEB)

    Bartzsch, Stefan, E-mail: s.bartzsch@dkfz.de; Oelfke, Uwe [The Institute of Cancer Research, 15 Cotswold Road, Belmont, Sutton, Surrey SM2 5NG, United Kingdom and Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 280, D-69120 Heidelberg (Germany); Lerch, Michael; Petasecca, Marco [Centre for Medical Radiation Physics, University of Wollongong, Northfields Avenue, Wollongong 2522 (Australia); Bräuer-Krisch, Elke [European Synchrotron Radiation Facility, 6 Rue Jules Horowitz, 38000 Grenoble (France)

    2014-04-15

    Purpose: Microbeam Radiation Therapy (MRT), an alternative preclinical treatment strategy using spatially modulated synchrotron radiation on a micrometer scale, has the great potential to cure malignant tumors (e.g., brain tumors) while having low side effects on normal tissue. Dose measurement and calculation in MRT is challenging because of the spatial accuracy required and the high dose differences that arise. Dose calculation with Monte Carlo simulations is time consuming and their accuracy is still a matter of debate. In particular, the influence of photon polarization has been discussed in the literature. Moreover, it is controversial whether a complete knowledge of phase space trajectories, i.e., the simulation of the machine from the wiggler to the collimator, is necessary in order to accurately calculate the dose. Methods: With Monte Carlo simulations in the Geant4 toolkit, the authors investigate the influence of polarization on the dose distribution and the therapeutically important peak to valley dose ratios (PVDRs). Furthermore, the authors analyze in detail phase space information provided by Martínez-Rovira et al. [“Development and commissioning of a Monte Carlo photon model for the forthcoming clinical trials in microbeam radiation therapy,” Med. Phys. 39(1), 119–131 (2012)] and examine its influence on peak and valley doses. A simple source model is developed using parallel beams and its applicability is shown in a semiadjoint Monte Carlo simulation. Results are compared to measurements and previously published data. Results: Polarization has a significant influence on the scattered dose outside the microbeam field. In the radiation field, however, dose and PVDRs deduced from calculations without polarization and with polarization differ by less than 3%. The authors show that the key consequences from the phase space information for dose calculations are inhomogeneous primary photon flux, partial absorption due to inclined beam incidence outside
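
    The peak-to-valley dose ratio (PVDR) referred to above is a simple quantity to extract from a lateral dose profile; the sketch below uses a synthetic profile with assumed beam width and spacing, not measured MRT data:

```python
import numpy as np

# Synthetic lateral dose profile across three 50-um microbeams with a
# 400-um centre-to-centre spacing (illustrative numbers only).
x = np.linspace(-600.0, 600.0, 2401)          # position (um)
dose = np.full_like(x, 0.5)                   # valley (scatter) dose
for centre in (-400.0, 0.0, 400.0):
    dose[np.abs(x - centre) <= 25.0] = 100.0  # peak dose inside the beams

def pvdr(profile, x, spacing):
    """Peak dose over valley dose; valley sampled midway between peaks."""
    peak = profile.max()
    valley = profile[np.abs(np.abs(x) - spacing / 2.0) < 1.0].mean()
    return peak / valley

print(f"PVDR = {pvdr(dose, x, 400.0):.1f}")
```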

  4. Fuel models and results from the TRAC-PF1/MIMAS TMI-2 accident calculation

    International Nuclear Information System (INIS)

    Schwegler, E.C.; Maudlin, P.J.

    1983-01-01

    A brief description of several fuel models used in the TRAC-PF1/MIMAS analysis of the TMI-2 accident is presented, and some of the significant fuel-rod behavior results from this analysis are given. Peak fuel-rod temperatures, oxidation heat production, and embrittlement and failure behavior calculated for the TMI-2 accident are discussed. Other aspects of fuel behavior, such as cladding ballooning and fuel-cladding eutectic formation, were found not to significantly affect the accident progression

  5. ANLECIS-1: Version of ANLECIS Program for Calculations with the Asymmetric Rotational Model

    International Nuclear Information System (INIS)

    Lopez Mendez, R.; Garcia Moruarte, F.

    1986-01-01

    A new modified version of the ANLECIS code is reported. This version allows the cross section of the direct process to be fitted simultaneously with the asymmetric rotational model, and the cross section of the compound nucleus process with the Hauser-Feshbach formalism including modern statistical corrections. Calculations based on this version show a dependence of the compound nucleus cross section on the asymmetry parameter γ. (author). 19 refs

  6. The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations

    Science.gov (United States)

    Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi; Salaris, Maurizio; Mucciarelli, Alessio; Savino, Alessandro; Aparicio, Antonio; Silva Aguirre, Victor; Verma, Kuldeep

    2018-04-01

    We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction opacities, a few nuclear reaction rates, bolometric corrections, and the treatment of the overshooting efficiency for shrinking convective cores. The new model calculations cover a mass range between 0.1 and 15 M⊙, 22 initial chemical compositions between [Fe/H] = ‑3.20 and +0.45, with helium to metal enrichment ratio dY/dZ = 1.31. The isochrones cover an age range between 20 Myr and 14.5 Gyr, consistently take into account the pre-main-sequence phase, and have been translated to a large number of popular photometric systems. Asteroseismic properties of the theoretical models have also been calculated. We compare our isochrones with results from independent databases and with several sets of observations to test the accuracy of the calculations. All stellar evolution tracks, asteroseismic properties, and isochrones are made available through a dedicated web site.

  7. A general model for preload calculation and stiffness analysis for combined angular contact ball bearings

    Science.gov (United States)

    Zhang, Jinhua; Fang, Bin; Hong, Jun; Wan, Shaoke; Zhu, Yongsheng

    2017-12-01

    Combined angular contact ball bearings are widely used in automation, aerospace and machine tools, but little research on combined angular contact ball bearings has been reported. The preload and stiffness of combined bearings are mutually influenced rather than being simply the superposition of multiple single bearings; therefore, the characteristics of combined bearings are calculated by coupling the load and deformation analysis of the individual bearings. In this paper, based on the Jones quasi-static model and a stiffness analytical model, a new iterative algorithm and model are proposed for the calculation of combined bearing preload and stiffness, in which the dynamic effects, including centrifugal force and gyroscopic moment, are considered. It is demonstrated that the new method has general applicability: the preload factors of combined bearings are calculated for different design preloads, the static and dynamic stiffness for various arrangements of combined bearings are comparatively studied and analyzed, and the influences of the design preload magnitude, axial load and rotating speed are discussed in detail. Besides, the change of the dynamic contact angles of combined bearings with respect to the rotating speed is also discussed. The results show that the bearing arrangement mode, rotating speed and design preload magnitude have a significant influence on the preload and stiffness of combined bearings. The proposed formulation provides a useful tool for the dynamic analysis of complex bearing-rotor systems.

  8. First-principles calculations, experimental study, and thermodynamic modeling of the Al-Co-Cr system.

    Directory of Open Access Journals (Sweden)

    Xuan L Liu

    Full Text Available The phase relations and thermodynamic properties of the condensed Al-Co-Cr ternary alloy system are investigated using first-principles calculations based on density functional theory (DFT) and phase-equilibria experiments that led to X-ray diffraction (XRD) and electron probe micro-analysis (EPMA) measurements. A thermodynamic description is developed by means of the calculation of phase diagrams (CALPHAD) method using experimental and computational data from the present work and the literature. Emphasis is placed on modeling the bcc-A2, B2, fcc-γ, and tetragonal-σ phases in the temperature range of 1173 to 1623 K. Liquid, bcc-A2 and fcc-γ phases are modeled using substitutional solution descriptions. First-principles special quasirandom structures (SQS) calculations predict a large bcc-A2 (disordered)/B2 (ordered) miscibility gap, in agreement with experiments. A partitioning model is then used for the A2/B2 phase to effectively describe the order-disorder transitions. The critically assessed thermodynamic description describes all phase equilibria data well. A2/B2 transitions are also shown to agree well with previous experimental findings.

  9. Referent 3D solid tumour model and absorbed dose calculations at cellular level in radionuclide therapy

    International Nuclear Information System (INIS)

    Spaic, R.; Ilic, R.; Petrovic, B.; Dragovic, M.; Toskovic, F.

    2007-01-01

    The average absorbed dose of a tumour calculated by the MIRD formalism does not always correlate well with the clinical response. The basic MIRD assumption of a uniform spatial dose distribution is at odds with the heterogeneity of the intratumoral distribution of the administered radionuclide, which can lead to a spatially nonuniform absorbed dose. Therefore, in clinical practice the absorbed dose of the tumour has to be calculated at the cellular level. The aim of this study is to define a referent 3D solid tumour model and, using the direct Monte Carlo radiation transport method, to calculate: a) the absorbed fraction, b) the spatial 3D absorbed dose distribution, c) the absorbed dose and relative absorbed dose of cells or clusters of cells, and d) differential and cumulative dose volume histograms. A referent 3D solid tumour model is defined as a sphere randomly filled with cells and necrosis of defined radii and volumetric density. Radiolabelling of the tumour is defined by the intracellular to extracellular radionuclide concentration ratio and the radiolabelled cell density. All these parameters are input data for software which generates a referent 3D solid tumour model. The modified FOTELP Monte Carlo code was used on this model for a simulation study with beta emitters applied to the tumour. The absorbed fractions of Cu-67, I-131, Re-188 and Y-90 were calculated for different tumour sphere masses and radii. Absorbed doses of cells and spatial distributions of the absorbed doses in the referent 3D solid tumour were calculated for the radionuclides I-131 and Y-90. Dose scintigrams, or voxel presentations of the absorbed dose distributions, showed higher homogeneity for Y-90 than for I-131. The differential dose volume histogram, or spectrum of the relative absorbed dose of cells, was much closer to the average absorbed dose of the tumour for Y-90 than for I-131. The cumulative dose volume histogram showed that most tumour cells received a lower dose than
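
    As an illustration of the dose volume histogram step only (a synthetic voxel dose array, not the referent tumour model), differential and cumulative DVHs can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic absorbed-dose values for a 50x50x50 voxel grid (Gy), illustrative only.
dose = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=(50, 50, 50))

bins = np.linspace(0.0, dose.max(), 101)
diff_dvh, edges = np.histogram(dose.ravel(), bins=bins)
diff_dvh = diff_dvh / dose.size               # fraction of voxels per dose bin

# Cumulative DVH: fraction of voxels receiving at least the bin's upper dose.
cum_dvh = 1.0 - np.cumsum(diff_dvh)

mean_dose = dose.mean()
print(f"mean absorbed dose         : {mean_dose:.1f} Gy")
print(f"fraction of voxels >= mean : {cum_dvh[np.searchsorted(edges, mean_dose) - 1]:.2f}")
```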

  10. Evaluation of Major Online Diabetes Risk Calculators and Computerized Predictive Models.

    Science.gov (United States)

    Stiglic, Gregor; Pajnkihar, Majda

    2015-01-01

    Classical paper-and-pencil based risk assessment questionnaires are often accompanied by online versions of the questionnaire to reach a wider population. This study focuses on the loss, especially in risk estimation performance, that can be inflicted by direct transformation from the paper to online versions of risk estimation calculators by ignoring the possibilities of more complex and accurate calculations that can be performed using the online calculators. We empirically compare the risk estimation performance between four major diabetes risk calculators and two, more advanced, predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999-2012 was used to evaluate the performance of detecting diabetes and pre-diabetes. The American Diabetes Association risk test achieved the best predictive performance in the category of classical paper-and-pencil based tests with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% (47% for pre-diabetes) persons selected for screening. Our results demonstrate a significant difference in performance with additional benefits for a lower number of persons selected for screening when statistical methods are used. The best AUC overall was obtained in diabetes risk prediction using logistic regression with an AUC of 0.775 (0.734) and an average 34% (48%) persons selected for screening. However, generalized boosted regression models might be a better option from the economical point of view as the number of selected persons for screening of 30% (47%) lies significantly lower for diabetes risk assessment in comparison to logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators. Therefore, one should take great care and consider optimizing the online versions of questionnaires that were primarily developed as classical paper questionnaires.

  11. Evaluation of Major Online Diabetes Risk Calculators and Computerized Predictive Models.

    Directory of Open Access Journals (Sweden)

    Gregor Stiglic

    Full Text Available Classical paper-and-pencil based risk assessment questionnaires are often accompanied by online versions of the questionnaire to reach a wider population. This study focuses on the loss, especially in risk estimation performance, that can be inflicted by direct transformation from the paper to online versions of risk estimation calculators by ignoring the possibilities of more complex and accurate calculations that can be performed using the online calculators. We empirically compare the risk estimation performance between four major diabetes risk calculators and two, more advanced, predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999-2012 was used to evaluate the performance of detecting diabetes and pre-diabetes. The American Diabetes Association risk test achieved the best predictive performance in the category of classical paper-and-pencil based tests with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% (47% for pre-diabetes) persons selected for screening. Our results demonstrate a significant difference in performance with additional benefits for a lower number of persons selected for screening when statistical methods are used. The best AUC overall was obtained in diabetes risk prediction using logistic regression with an AUC of 0.775 (0.734) and an average 34% (48%) persons selected for screening. However, generalized boosted regression models might be a better option from the economical point of view as the number of selected persons for screening of 30% (47%) lies significantly lower for diabetes risk assessment in comparison to logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators. Therefore, one should take great care and consider optimizing the online versions of questionnaires that were primarily developed as classical paper questionnaires.
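
    A minimal sketch of the kind of comparison described, with synthetic data standing in for the NHANES survey features (the variables and sample sizes are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic screening data standing in for survey features (age, BMI, ...).
X, y = make_classification(n_samples=5000, n_features=10, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting":   GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:20s} AUC = {auc:.3f}")
```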

  12. The O(N) model: Calculation of the effective potential for arbitrary values of N

    International Nuclear Information System (INIS)

    Casalbuoni, R.; Castellani, E.; De Curtis, S.; Florence Univ.

    1983-01-01

    By using the technique of the effective action for composite operators, we present a calculation of the effective potential of the O(N) scalar model for arbitrary values of N. The potential is given as a truncation of a loop expansion, and reproduces the known results of the N → ∞ limit. The potential shows a symmetry breaking for ''small'' values of the ''classical fields'', whereas it shows Landau-type singularities in the region of ''large'' values. However, these singularities are clearly an artifact of our approximation and the model is perfectly consistent in the low energy regime. (orig.)

  13. Parameter Estimation of a Plucked String Synthesis Model Using a Genetic Algorithm with Perceptual Fitness Calculation

    Directory of Open Access Journals (Sweden)

    Riionheimo Janne

    2003-01-01

    Full Text Available We describe a technique for estimating control parameters for a plucked string synthesis model using a genetic algorithm. The model has been intensively used for sound synthesis of various string instruments but the fine tuning of the parameters has been carried out with a semiautomatic method that requires some hand adjustment with human listening. An automated method for extracting the parameters from recorded tones is described in this paper. The calculation of the fitness function utilizes knowledge of the properties of human hearing.

  14. Model for the calculation of pressure loss through heavy fuel oil transfer pipelines

    Directory of Open Access Journals (Sweden)

    Hector Luis Laurencio-Alfonso,

    2012-10-01

    Full Text Available Considering the limitations of methodologies and empirical correlations in evaluating the simultaneous effects of viscosity and mixture strength during the transfer of fluids through pipelines, this article presents the functional relationships that describe the pressure variations for non-Newtonian fuel oil flow. The experimental study was conducted based on a characterization of the rheological behavior of fuel oil and modeling for a pseudoplastic behavior. The resulting model describes temperature changes, viscous friction effects and the effects of blending flow layers, and is therefore the basis of the calculations used for the selection, evaluation and rationalization of heavy fuel oil transport by pipelines.
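
    For a pseudoplastic (power-law) fluid in laminar pipe flow, the pressure loss over a pipeline segment can be estimated from the standard Ostwald-de Waele relation; the sketch below uses illustrative fuel-oil parameters, not the paper's fitted model:

```python
import math

def power_law_pressure_drop(q_m3s, d_m, l_m, k_pa_sn, n):
    """Laminar pressure drop (Pa) of a power-law fluid in a circular pipe.

    Uses tau_w = K * (8V/D * (3n+1)/(4n))**n and dP = 4*L*tau_w/D.
    """
    area = math.pi * d_m ** 2 / 4.0
    v = q_m3s / area                                  # mean velocity (m/s)
    shear_rate = (8.0 * v / d_m) * (3.0 * n + 1.0) / (4.0 * n)
    tau_w = k_pa_sn * shear_rate ** n                 # wall shear stress (Pa)
    return 4.0 * l_m * tau_w / d_m

# Illustrative values: 200 mm pipe, 1 km long, heavy fuel oil consistency
# index K = 2.5 Pa.s^n and flow behaviour index n = 0.85 (assumed).
dp = power_law_pressure_drop(q_m3s=0.02, d_m=0.2, l_m=1000.0,
                             k_pa_sn=2.5, n=0.85)
print(f"pressure loss ~ {dp / 1e5:.2f} bar")
```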

  15. Optimal electricity price calculation model for retailers in a deregulated market

    International Nuclear Information System (INIS)

    Yusta, J.M.; Dominguez-Navarro, J.A.; Ramirez-Rosado, I.J.; Perez-Vidal, J.M.

    2005-01-01

    Electricity retailing, a new business in deregulated electric power systems, requires the development of efficient tools to optimize its operation. This paper defines a technical-economic model of an electric energy service provider in the environment of the deregulated electricity market in Spain. The model results in an optimization problem for calculating the optimal electric power and energy selling prices that maximize the economic profit obtained by the provider. The problem is applied to different cases, in which the impact on profit of several factors, such as the price strategy, the discount on tariffs and the elasticity of the customer demand functions, is studied. (Author)
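
    A toy version of such a price optimization, assuming a constant-elasticity demand response rather than the paper's market model (all figures illustrative):

```python
from scipy.optimize import minimize_scalar

# Illustrative retailer problem: choose a selling price (EUR/MWh) that
# maximizes profit when demand follows a constant-elasticity response.
cost = 45.0                   # energy procurement cost (EUR/MWh), assumed
p_ref, d_ref = 60.0, 1000.0   # reference price and demand (MWh), assumed
elasticity = 1.8              # price elasticity of demand (absolute value)

def negative_profit(price):
    demand = d_ref * (price / p_ref) ** (-elasticity)
    return -(price - cost) * demand

res = minimize_scalar(negative_profit, bounds=(cost, 200.0), method="bounded")
print(f"optimal price  : {res.x:.1f} EUR/MWh")
print(f"expected profit: {-res.fun:,.0f} EUR")
```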

  16. 40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ...-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment... model type. (a) Base level. 5-cycle fuel economy values for a base level are calculated from vehicle... any model type value is calculated for a label value. (iii) The provisions of this paragraph (a)(3...

  17. Activity-based costing: a practical model for cost calculation in radiotherapy.

    Science.gov (United States)

    Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien

    2003-10-01

    The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighed by some factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment cost. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. This translates into products that have a prolonged total or daily treatment time being the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
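
    A toy sketch of the time-weighted allocation principle described above; the resources, activities, hours and cost figures are invented for illustration and do not reflect the Leuven data:

```python
# Toy activity-based costing: resource costs -> activities (by time share)
# -> treatments (by complexity-weighted treatment time). All numbers invented.
resource_costs = {"personnel": 900_000.0, "equipment": 600_000.0}   # per year

activity_time_share = {          # fraction of each resource used per activity
    "planning":  {"personnel": 0.30, "equipment": 0.10},
    "delivery":  {"personnel": 0.55, "equipment": 0.80},
    "follow-up": {"personnel": 0.15, "equipment": 0.10},
}
activity_cost = {a: sum(resource_costs[r] * s for r, s in shares.items())
                 for a, shares in activity_time_share.items()}

# Hours of each activity consumed per treatment, times a complexity weight.
treatments = {
    "palliative":    {"hours": {"planning": 1, "delivery": 5,  "follow-up": 0.5}, "weight": 1.0},
    "curative IMRT": {"hours": {"planning": 6, "delivery": 20, "follow-up": 2.0}, "weight": 1.4},
}
total_weighted = {a: sum(t["hours"][a] * t["weight"] for t in treatments.values())
                  for a in activity_cost}

for name, t in treatments.items():
    cost = sum(activity_cost[a] * t["hours"][a] * t["weight"] / total_weighted[a]
               for a in activity_cost)
    print(f"{name:13s}: {cost:,.0f} per course (illustrative)")
```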

  18. Calculation of electrical potentials on the surface of a realistic head model by finite differences

    International Nuclear Information System (INIS)

    Lemieux, L.; McBride, A.; Hand, J.W.

    1996-01-01

    We present a method for the calculation of electrical potentials at the surface of realistic head models from a point dipole generator based on a 3D finite-difference algorithm. The model was validated by comparing calculated values with those obtained algebraically for a three-shell spherical model. For a 1.25 mm cubic grid size, the mean error was 4.9% for a superficial dipole (3.75 mm from the inner surface of the skull) pointing in the radial direction. The effect of generator discretization and node spacing on the accuracy of the model was studied. Three values of the node spacing were considered: 1, 1.25 and 1.5 mm. The mean relative errors were 4.2, 6.3 and 9.3%, respectively. The quality of the approximation of a point dipole by an array of nodes in a spherical neighbourhood did not depend significantly on the number of nodes used. The application of the method to a conduction model derived from MRI data is demonstrated. (author)
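
    A much-simplified finite-difference sketch of the same idea, solving for the potential of a current dipole in a homogeneous conductor on a small cubic grid by Jacobi iteration (the realistic multi-tissue head model and the paper's 3D algorithm are not reproduced here):

```python
import numpy as np

# Homogeneous conductor, 41^3 grid, 5 mm node spacing, sigma = 0.33 S/m.
n, h, sigma = 41, 0.005, 0.33
source = np.zeros((n, n, n))
c = n // 2
source[c, c, c + 1] = +1e-6            # current source (A)
source[c, c, c - 1] = -1e-6            # current sink  (A)

v = np.zeros_like(source)
for _ in range(2000):                  # fixed number of Jacobi sweeps, for brevity
    v_new = np.zeros_like(v)
    v_new[1:-1, 1:-1, 1:-1] = (
        v[:-2, 1:-1, 1:-1] + v[2:, 1:-1, 1:-1] +
        v[1:-1, :-2, 1:-1] + v[1:-1, 2:, 1:-1] +
        v[1:-1, 1:-1, :-2] + v[1:-1, 1:-1, 2:] +
        source[1:-1, 1:-1, 1:-1] / (sigma * h)   # point current -> h^2 * (I/h^3) / sigma
    ) / 6.0
    v = v_new                          # boundary stays at 0 V (grounded box)

print(f"potential 2 cm above the dipole: {v[c, c, c + 4] * 1e6:.2f} uV")
```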

  19. The truth is out there: measured, calculated and modelled benthic fluxes.

    Science.gov (United States)

    Pakhomova, Svetlana; Protsenko, Elizaveta

    2016-04-01

    In modern Earth science it is of great importance to understand the processes forming benthic fluxes, as one of the sources or sinks of elements to or from a water body, which affects the element balance of the water system. There are several ways to assess benthic fluxes, and here we try to compare the results obtained by chamber experiments, calculated from porewater distributions and simulated with a model. Benthic fluxes of dissolved elements (oxygen, nitrogen species, phosphate, silicate, alkalinity, iron and manganese species) were studied in the Baltic and Black Seas from 2000 to 2005. Fluxes were measured in situ using chamber incubations (Jch) and at the same time sediment cores were collected to assess the porewater distribution at different depths and calculate diffusive fluxes (Jpw). The model study was carried out with the benthic-pelagic biogeochemical model BROM (an O-N-P-Si-C-S-Mn-Fe redox model). It was applied to simulate the biogeochemical structure of the water column and upper sediment and to assess the vertical fluxes (Jmd). By their behaviour at the water-sediment interface all studied elements can be divided into three groups: (1) elements whose benthic fluxes are determined by the concentration gradient only (Si, Mn), (2) elements whose fluxes depend on redox conditions in the bottom water (Fe, PO4, NH4), and (3) elements whose fluxes are strongly connected with the fate of organic matter (O2, Alk, NH4). For the first group it was found that measured fluxes are always higher than calculated diffusive fluxes (1.5… advantage of a more accurate calculation of diffusive fluxes, especially for redox-dependent elements. Model results showed that within 50 cm above the sediment the vertical fluxes change markedly, whereas in chamber experiments they are averaged. As a result, each of the methods has its disadvantages, and the main question facing us is: which value should be taken for calculating the balance? This research is funded by VISTA - a basic research program and
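
    Diffusive fluxes of the Jpw type are commonly estimated from the porewater gradient at the sediment-water interface with Fick's first law; a sketch with made-up profile values and an assumed diffusion coefficient:

```python
import numpy as np

# Hypothetical porewater profile of dissolved silicate near the interface.
depth_cm = np.array([0.0, 0.5, 1.0, 2.0, 3.0])        # depth below interface (cm)
conc_umol_l = np.array([180.0, 260.0, 330.0, 420.0, 470.0])

porosity = 0.85
d_sw = 1.0e-5                      # free-solution diffusion coefficient (cm^2/s), assumed
d_sed = d_sw * porosity ** 2       # one common tortuosity correction

# Concentration gradient at the interface from the two uppermost points.
dc_dz = (conc_umol_l[1] - conc_umol_l[0]) / (depth_cm[1] - depth_cm[0])  # umol/L per cm
dc_dz_cm4 = dc_dz / 1000.0         # umol per cm^3 per cm

# Fick's first law, J = phi * Ds * dC/dz; with concentration increasing
# downward this is the magnitude of the flux out of the sediment.
flux = porosity * d_sed * dc_dz_cm4            # umol cm^-2 s^-1
flux_per_day = flux * 1e4 * 86400.0            # umol m^-2 d^-1
print(f"diffusive flux out of the sediment ~ {flux_per_day:.0f} umol m^-2 d^-1")
```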

  20. A Contrast on Conductor Galloping Amplitude Calculated by Three Mathematical Models with Different DOFs

    Directory of Open Access Journals (Sweden)

    Bin Liu

    2014-01-01

    Full Text Available It is pivotal to find an effective mathematical model revealing the galloping mechanism, and it is important to compare the differences between the existing mathematical models of conductor galloping. In this paper, a continuum cable model for transmission lines was derived using the Hamilton principle. Discrete models with one DOF, two DOFs, and three DOFs were derived from the continuum model by the Galerkin method. The three models were compared by analyzing the galloping vertical amplitude and torsional angle under different influence factors, including wind velocity, flow density, span length, damping ratio, and initial tension. The three-DOF model is more accurate at calculating the galloping characteristics than the other two models, but the one-DOF and two-DOF models can also capture the trend of the galloping amplitude from the point of view of qualitative analysis. The change of the galloping amplitude with respect to the main factors was also obtained, which is essential for anti-galloping design in engineering practice.

  1. A pedestal temperature model with self-consistent calculation of safety factor and magnetic shear

    International Nuclear Information System (INIS)

    Onjun, T; Siriburanon, T; Onjun, O

    2008-01-01

    A pedestal model based on theory-motivated models for the pedestal width and the pedestal pressure gradient is developed for the temperature at the top of the H-mode pedestal. The pedestal width model based on magnetic shear and flow shear stabilization is used in this study, where the pedestal pressure gradient is assumed to be limited by the first stability limit of the infinite-n ballooning mode instability. This pedestal model is implemented in the 1.5D BALDUR integrated predictive modeling code, where the safety factor and magnetic shear are solved self-consistently in both the core and pedestal regions. With this self-consistent approach for calculating the safety factor and magnetic shear, the effect of the bootstrap current can be correctly included in the pedestal model. The pedestal model is used to provide the boundary conditions in the simulations and the Multi-mode core transport model is used to describe the core transport. This new integrated modeling procedure of the BALDUR code is used to predict the temperature and density profiles of 26 H-mode discharges. Simulations are carried out for 13 discharges in the Joint European Torus and 13 discharges in the DIII-D tokamak. The average root-mean-square deviation between experimental data and the predicted profiles of the temperature and the density, normalized by their central values, is found to be about 14%.

  2. An explicit solution for calculating optimum spawning stock size from Ricker's stock recruitment model.

    Science.gov (United States)

    Scheuerell, Mark D

    2016-01-01

    Stock-recruitment models have been used for decades in fisheries management as a means of formalizing the expected number of offspring that recruit to a fishery based on the number of parents. In particular, Ricker's stock recruitment model is widely used due to its flexibility and the ease with which the parameters can be estimated. After model fitting, the spawning stock size that produces the maximum sustainable yield (S_MSY) to a fishery, and the harvest rate corresponding to it (U_MSY), are two of the most common biological reference points of interest to fisheries managers. However, to date there has been no explicit solution for either reference point because of the transcendental nature of the equation needed to solve for them. Therefore, numerical or statistical approximations have been used for more than 30 years. Here I provide explicit formulae for calculating both S_MSY and U_MSY in terms of the productivity and density-dependent parameters of Ricker's model.
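
    The explicit solution referred to here can be written with the Lambert W function. A sketch assuming the Ricker form R = S·exp(a − bS), with illustrative parameter values:

```python
import numpy as np
from scipy.special import lambertw

def ricker_reference_points(a, b):
    """Explicit S_MSY and U_MSY for R = S * exp(a - b*S) via Lambert W."""
    w = np.real(lambertw(np.exp(1.0 - a)))   # principal branch
    u_msy = 1.0 - w
    s_msy = u_msy / b
    return s_msy, u_msy

a, b = 1.5, 1.0e-4          # illustrative productivity and density dependence
s_msy, u_msy = ricker_reference_points(a, b)
print(f"S_MSY ~ {s_msy:,.0f} spawners, U_MSY ~ {u_msy:.2%}")
```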

  3. Propagation of uncertainty in system parameters of a LWR model by sampling MCNPX calculations - Burnup analysis

    International Nuclear Information System (INIS)

    Campolina, D. de A. M.; Lima, C.P.B.; Veloso, M.A.F.

    2013-01-01

    For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a best estimate calculation, which has been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for the 95th percentile and a two-sided statistical tolerance interval of 95%. The uncertainties in the input parameters of the reactor considered included geometry dimensions and densities. The capability of the sampling-based method for burnup calculations was shown when the sample size is optimized and many parameter uncertainties are investigated together in the same input. In particular, it was shown that during the burnup the variance obtained when considering all the parameter uncertainties together is equivalent to the sum of the variances obtained when the parameter uncertainties are sampled separately.
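
    The Wilks criterion fixes the smallest sample size for a given coverage and confidence; a short sketch reproducing the two-sided 95%/95% case mentioned above (and the one-sided case for comparison):

```python
def wilks_two_sided_n(coverage=0.95, confidence=0.95):
    """Smallest n for a two-sided non-parametric tolerance interval.

    Uses the Wilks criterion 1 - g**n - n*(1-g)*g**(n-1) >= beta.
    """
    n = 2
    while 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1) < confidence:
        n += 1
    return n

def wilks_one_sided_n(coverage=0.95, confidence=0.95):
    """One-sided case: 1 - g**n >= beta."""
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n

print("two-sided 95/95 sample size:", wilks_two_sided_n())   # 93
print("one-sided 95/95 sample size:", wilks_one_sided_n())   # 59
```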

  4. GPU-based ultra-fast dose calculation using a finite size pencil beam model

    Science.gov (United States)

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.

    2009-10-01

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.

  5. Development of sump model for containment hydrogen distribution calculations using CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Ravva, Srinivasa Rao, E-mail: srini@aerb.gov.in [Indian Institute of Technology-Bombay, Mumbai (India); Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India); Iyer, Kannan N. [Indian Institute of Technology-Bombay, Mumbai (India); Gaikwad, A.J. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India)

    2015-12-15

    Highlights: • Sump evaporation model was implemented in FLUENT using three different approaches. • Validated the implemented sump evaporation models against the TOSQAN facility. • It was found that predictions are in good agreement with the data. • The diffusion based model would be able to predict both condensation and evaporation. - Abstract: Computational Fluid Dynamics (CFD) simulations are necessary for obtaining accurate predictions of local behaviour when carrying out containment hydrogen distribution studies. However, commercially available CFD codes do not have all the models necessary for carrying out hydrogen distribution analysis. One such model is the sump or suppression pool evaporation model. The water in the sump may evaporate during the accident progression and affect the mixture concentrations in the containment. Hence, it is imperative to study the sump evaporation and its effect. Sump evaporation is modelled using three different approaches in the present work. The first approach deals with the calculation of the evaporation flow rate and sump liquid temperature, supplying these quantities as boundary conditions through user defined functions. In this approach, the mean values of the domain are used. In the second approach, the mass, momentum, energy and species sources arising due to the sump evaporation are added to the domain through user defined functions. Cell values adjacent to the sump interface are used in this approach, and heat transfer between gas and liquid is calculated automatically by the code itself. In these two approaches, however, the evaporation rate was computed using an experimental correlation. In the third approach, the evaporation rate is directly estimated using a diffusion approximation. The performance of these three models is compared with the sump behaviour experiment conducted in the TOSQAN facility. Classification: K. Thermal hydraulics.

  6. Radiobiological calculations using the linear-quadratic model for medium-dose-rate brachytherapy. Pt. 3

    International Nuclear Information System (INIS)

    2002-01-01

    Calculations with the linear-quadratic model for medium dose rate using the dose-effect equation. Several calculations are presented: for low-dose-rate brachytherapy combined with teletherapy, for medium-dose-rate brachytherapy combined with teletherapy, and for the dose per fraction and the number of fractions at medium dose rate.
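
    A minimal sketch of the linear-quadratic bookkeeping involved when combining brachytherapy and teletherapy courses; the α/β value and dose schedules below are illustrative assumptions, not those of the report:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose of a fractionated course (LQ model)."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

alpha_beta_tumour = 10.0   # Gy, a typical assumption for tumour tissue

# Illustrative combination: external beam 25 x 2 Gy plus brachytherapy 3 x 7 Gy.
bed_total = bed(25, 2.0, alpha_beta_tumour) + bed(3, 7.0, alpha_beta_tumour)
eqd2 = bed_total / (1.0 + 2.0 / alpha_beta_tumour)   # equivalent dose in 2-Gy fractions
print(f"total BED = {bed_total:.1f} Gy")
print(f"EQD2      = {eqd2:.1f} Gy")
```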

  7. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    Science.gov (United States)

    Espel, Federico Puente

    The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods

  8. MODELING AND CALCULATION OF FLOW AMPLIFIER PARAMETERS IN STEERING OF HEAVY TRUCKS

    Directory of Open Access Journals (Sweden)

    V. P. Avtushko

    2008-01-01

    Full Text Available The paper analyzes prospects for developing methods for the dynamic calculation of hydraulic control units with various types of couplings. A calculation diagram of a steering hydraulic drive with a flow amplifier and a turning cylinder is given, and its dynamic model is developed. The hydraulic drive is considered as a system with lumped parameters. It is assumed that the properties of the working fluid are constant during the transient process, that leakage and cavitation do not occur, that the fluid is compressible, and that the resistance of the service drain line is taken into account. The model accounts for the resistance of the manifolds and internal channels of the flow amplifier, the hydrodynamic forces acting on the amplifier control valves, and the friction forces of the movable elements. A multi-variant dynamic calculation has been performed and some results of the investigations are presented in the paper. The paper also contains an analysis showing the influence of various design and component parameters of the flow amplifier on the drive dynamics.

  9. A model expansion criterion for treating surface topography in ray path calculations using the eikonal equation

    International Nuclear Information System (INIS)

    Ma, Ting; Zhang, Zhongjie

    2014-01-01

    Irregular surface topography has revolutionized how seismic traveltime is calculated and the data are processed. There are two main schemes for dealing with an irregular surface in the seismic first-arrival traveltime calculation: (1) expanding the model and (2) flattening the surface irregularities. In the first scheme, a notional infill medium is added above the surface to expand the physical space into a regular space, as required by the eikonal equation solver. Here, we evaluate the chosen propagation velocity in the infill medium through ray path tracking with the eikonal equation-solved traveltime field, and observe that the ray paths will be physically unrealistic for some values of this propagation velocity. The choice of a suitable propagation velocity in the infill medium is crucial for seismic processing of irregular topography. Our model expansion criterion for dealing with surface topography in the calculation of traveltime and ray paths using the eikonal equation highlights the importance of both the propagation velocity of the infill physical medium and the topography gradient. (paper)

  10. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical interval of computer replacement when the computing demand, cost and performance of the computer, etc. are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented only by a central processing unit (CPU) and all the computing demand has to be processed on the present computer until the next replacement. In model 2, on the other hand, excess demand is allowed and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O) and must process all the demand. Model 4 is the same as model 3, but excess demand is allowed to be processed in another center. (1) The computing demand at JAERI, (2) the conformity of Grosch's law for recent computers, and (3) the replacement cost of computer systems, etc. are also described. (author)

  11. The application of removal coefficients for viruses in different wastewater treatment processes calculated using stochastic modelling.

    Science.gov (United States)

    Dias, Edgard; Ebdon, James; Taylor, Huw

    2015-01-01

    This study proposes that calculating and interpreting removal coefficients (K20) for bacteriophages in activated sludge (AS) and trickling filter (TF) systems using stochastic modelling may provide important information that may be used to estimate the removal of phages in such systems using simplified models. In order to achieve this, 14 samples of settled wastewater and post-secondary sedimentation wastewater were collected every 2 weeks, over a 6-month period (May to November), from two AS and two TF systems situated in southern England. Initial results have demonstrated that the removal of somatic coliphages in both AS and TF systems is considerably higher than that of F-RNA coliphages, and that AS more effectively removes both phage groups than TF. The results have also demonstrated that K20 values for phages in AS are higher than in TF, which could be justified by the higher removal rates observed in AS and the models assumed for both systems. The research provides a suggested framework for calculating and predicting removal rates of pathogens and indicator organisms in wastewater treatment systems using simplified models in order to support integrated water and sanitation safety planning approaches to human health risk management.

  12. A simplified model for calculating atmospheric radionuclide transport and early health effects from nuclear reactor accidents

    International Nuclear Information System (INIS)

    Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.

    1995-01-01

    During certain hypothetical severe accidents in a nuclear power plant, radionuclides could be released to the environment as a plume. Prediction of the atmospheric dispersion and transport of these radionuclides is important for assessment of the risk to the public from such accidents. A simplified PC-based model was developed that predicts time-integrated air concentration of each radionuclide at any location from release as a function of time integrated source strength using the Gaussian plume model. The solution procedure involves direct analytic integration of air concentration equations over time and position, using simplified meteorology. The formulation allows for dry and wet deposition, radioactive decay and daughter buildup, reactor building wake effects, the inversion lid effect, plume rise due to buoyancy or momentum, release duration, and grass height. Based on air and ground concentrations of the radionuclides, the early dose to an individual is calculated via cloudshine, groundshine, and inhalation. The model also calculates early health effects based on the doses. This paper presents aspects of the model that would be of interest to the prediction of environmental flows and their public consequences
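
    The Gaussian plume expression underlying such simplified models is compact enough to sketch directly; the dispersion lengths and release parameters below are illustrative assumptions, not the report's parameterisation:

```python
import numpy as np

def gaussian_plume_conc(q, u, y, z, h_eff, sigma_y, sigma_z):
    """Air concentration downwind of a continuous point source (Gaussian
    plume with ground reflection)."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h_eff)**2 / (2.0 * sigma_z**2)) +
                np.exp(-(z + h_eff)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers: unit release rate, 5 m/s wind, receptor on the plume
# axis at ground level ~1 km downwind, effective release height 30 m.
sigma_y, sigma_z = 80.0, 35.0        # dispersion lengths at that distance (assumed)
c = gaussian_plume_conc(q=1.0, u=5.0, y=0.0, z=0.0,
                        h_eff=30.0, sigma_y=sigma_y, sigma_z=sigma_z)
print(f"air concentration ~ {c:.2e} Bq s m^-3 per Bq/s released")
```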

  13. Contribution to the prompt fission neutron spectrum modeling. Uncertainty propagation on a vessel fluence calculation

    International Nuclear Information System (INIS)

    Berge, Leonie

    2015-01-01

    The prompt fission neutron spectrum (PFNS) is very important for various nuclear physics applications. Yet, except for the 252 Cf spontaneous fission spectrum which is an international standard and is used for metrology purposes, the PFNS is still poorly known for most of the fissioning nuclides. In particular, few measurements exist for the fast fission spectrum (induced by a neutron whose energy exceeds about 100 keV), and the international evaluations show strong discrepancies. There are also very few data about covariances associated to the various PFNS evaluations. In this work we present three aspects of the PFNS evaluation. The first aspect is about the spectrum modeling with the FIFRELIN code, developed at CEA Cadarache, which simulates the fission fragment de-excitation by successive emissions of prompt neutrons and gammas, via the Monte-Carlo method. This code aims at calculating all fission observables in a single consistent calculation, starting from fission fragment distributions (mass, kinetic energy and spin). FIFRELIN is therefore more predictive than the analytical models used to describe the spectrum. A study of model parameters which impact the spectrum, like the fragment level density parameter, is presented in order to better reproduce the spectrum. The second aspect of this work is about the evaluation of the PFNS and its covariance matrix. We present a methodology to produce this evaluation in a rigorous way, with the CONRAD code, developed at CEA Cadarache. This implies modeling the spectrum through simple models, like the Madland-Nix model which is the most commonly used in the evaluations, by adjusting the model parameters to reproduce experimental data. The covariance matrix arises from the rigorous propagation of the sources of uncertainty involved in the calculation. In particular, the systematic uncertainties arising from the experimental set-up are propagated via a marginalization technique. The marginalization allows propagating

  14. Steam reforming of methane over Pt/Rh based wire mesh catalyst in single channel reformer for small scale syngas production

    DEFF Research Database (Denmark)

    Sigurdsson, Haftor Örn; Kær, Søren Knudsen

    2012-01-01

    The purpose of this study is to investigate a small scale steam methane reformer for syngas production for a micro combined heat and power (mCHP) unit under different operational conditions. The study presents an experimental analysis of the performance of a specially built single channel of a catalytic parallel plate type heat exchanger (CPHE) reformer stack, where coated Pt/Rh based wire mesh is used as a catalyst. Heat is supplied to the endothermic reaction with infrared electric heaters. All the experiments were performed under atmospheric pressure and at stable operating conditions to evaluate the effect of flow maldistribution in a CPHE reformer stack on the CH4 conversion and H2 yield.

  15. Design and Construction of an Autonomous Low-Cost Pulse Height Analyzer and a Single Channel Analyzer for Mössbauer Spectroscopy

    Science.gov (United States)

    Velásquez, A. A.; Gancedo, J. R.; Trujillo, J. M.; Morales, A. L.; Tobón, J. E.; Reyes, L.

    2005-04-01

    A multichannel analyzer (MCA) and a single-channel analyzer (SCA) for Mössbauer spectrometry applications have been designed and built. Both systems include low-cost digital and analog components. A microcontroller manages, either in PHA or MCS mode, the data acquisition, data storage and setting of the pulse discriminator limits. The user can monitor the system from an external PC through the serial port with the RS232 communication protocol. A graphic interface made with the LabVIEW software allows the user to adjust digitally the lower and upper limits of the pulse discriminator, and to visualize as well as save the PHA spectra in a file. The system has been tested using a 57Co radioactive source and several iron compounds, yielding satisfactory results. The low cost of its design, construction and maintenance makes this equipment an attractive choice when assembling a Mössbauer spectrometer.

  16. Magnetic field shimming of a permanent magnet using a combination of pieces of permanent magnets and a single-channel shim coil for skeletal age assessment of children.

    Science.gov (United States)

    Terada, Y; Kono, S; Ishizawa, K; Inamura, S; Uchiumi, T; Tamada, D; Kose, K

    2013-05-01

    We adopted a combination of pieces of permanent magnets and a single-channel (SC) shim coil to shim the magnetic field in a magnetic resonance imaging system dedicated for skeletal age assessment of children. The target magnet was a 0.3-T open and compact permanent magnet tailored to the hand imaging of young children. The homogeneity of the magnetic field was first improved by shimming using pieces of permanent magnets. The residual local inhomogeneity was then compensated for by shimming using the SC shim coil. The effectiveness of the shimming was measured by imaging the left hands of human subjects and evaluating the image quality. The magnetic resonance images for the child subject clearly visualized anatomical structures of all bones necessary for skeletal age assessment, demonstrating the usefulness of combined shimming.

  17. An equivalent circuit model and power calculations for the APS SPX crab cavities.

    Energy Technology Data Exchange (ETDEWEB)

    Berenc, T. (Accelerator Systems Division (APS))

    2012-03-21

    An equivalent parallel resistor-inductor-capacitor (RLC) circuit with beam loading for a polarized TM110 dipole-mode cavity is developed and minimum radio-frequency (rf) generator requirements are calculated for the Advanced Photon Source (APS) short-pulse x-ray (SPX) superconducting rf (SRF) crab cavities. A beam-loaded circuit model for polarized TM110 mode crab cavities was derived. The single-cavity minimum steady-state required generator power has been determined for the APS SPX crab cavities for a storage-ring current of 200 mA DC as a function of external Q for various vertical offsets, including beam tilt and uncontrollable detuning. Calculations to aid machine protection considerations are also given.

  18. Models optimization for the pressure drop calculation in two-phase flow cooled bundle

    International Nuclear Information System (INIS)

    Ladeira, L.C.D.; Rezende, H.C.

    1994-01-01

    An analysis of two-phase flow tests, performed in a mock-up of a nuclear fuel element to verify the applicability of existing calculation models for determining the pressure drop, is presented. The tests were performed at Reynolds numbers in the range from 4 × 10⁴ to 1.6 × 10⁵, with heat fluxes up to 105 W/cm², and at three different pressure levels: 2.0, 6.0 and 10.0 bar. The test results were used to optimize the bubble detachment point in two correlations (Bowring and Lellouche-Zolotar) in order to obtain the subcooled void fraction. Comparison between measured and calculated results has shown that the pressure drop, in 96% of the tests, was reproduced within ±16% when using the Bowring correlation, and within ±9% with the Lellouche-Zolotar one. (author)

  19. Calculation of hydrogen outgassing rate of LHD by recombination limited model

    International Nuclear Information System (INIS)

    Akaishi, K.; Nakasuga, M.

    2002-04-01

    To simulate hydrogen outgassing in the plasma vacuum vessel of LHD, the recombination-limited model is presented, in which the time evolution of the hydrogen concentration in the wall of the plasma vacuum vessel is described by a one-dimensional diffusion equation. The hydrogen outgassing rates when the plasma vacuum vessel is pumped down at room temperature and baked at 100 °C are calculated as a function of pumping time. The calculation shows that the hydrogen outgassing rate of the plasma vacuum vessel can be reduced by at least one order of magnitude due to pumping and baking. This prediction is consistent with the recent result of outgassing reduction observed in the pumping-down and baking of the plasma vacuum vessel in LHD. (author)

  20. Raman Spectroscopy and Ab-Initio Model Calculations on Ionic Liquids

    DEFF Research Database (Denmark)

    Berg, Rolf W.

    2007-01-01

    A review of the recent developments in the study and understanding of room temperature ionic liquids is given. An intimate picture of how and why these liquids are not crystals at ambient conditions is attempted, based on evidence from crystallographic results combined with vibrational spectroscopy and ab-initio molecular orbital calculations. A discussion is given, based mainly on some recent FT-Raman spectroscopic results on the model ionic liquid system of 1-butyl-3-methylimidazolium ([C4mim][X]) salts. The rotational isomerism of the [C4mim]+ cation is described: the presence of anti… It is hoped that the structural resolving power of Raman spectroscopy will be appreciated by the reader. It is of remarkable use on crystals of known different conformations and on the corresponding liquids, especially in combination with modern quantum mechanics calculations. It is hoped that these interdisciplinary methods…

  1. Theoretical modeling of zircon's crystal morphology according to data of atomistic calculations

    Science.gov (United States)

    Gromalova, Natalia; Nikishaeva, Nadezhda; Eremin, Nikolay

    2017-04-01

    Zircon is an essential mineral that is used in U-Pb dating. Moreover, zircon is highly resistant to radiation exposure and is therefore of great interest in solving both fundamental and applied problems associated with the isolation of high-level radioactive waste. There has been significant progress in forecasting the most energetically favourable crystal structures. Unfortunately, the theoretical forecast of crystal morphology at a comparable technological level remains under-explored, although the estimation of the equilibrium crystal habit is extremely important in studying the physical and chemical properties of new materials. The thesis that the equilibrium shape of a crystal is related to its crystal structure was first put forward in the works of Bravais. According to it, the idealized habit is determined in the simplest case by a correspondence with the reticular densities R_hkl of the individual faces. This approach, along with all subsequent corrections, does not take into account the nature of the atoms and the specific features of the chemical bond in crystals. Atomistic calculations of crystal surfaces are commonly performed using the energetic characteristics of faces, namely the surface energy (Esurf), which is a measure of the thermodynamic stability of the crystal face. Stable crystal faces are characterized by small positive values of Esurf. As we know from our previous research (Gromalova et al., 2015), one of the constitutive factors affecting the value of the surface energy in such calculations is the choice of the potential model. In this regard, we studied several sets of previously optimized atomistic interatomic potentials. The first test model («Zircon 1») used sets of interatomic potentials for the Zr-O, Si-O and O-O interactions in the Buckingham form. To improve the reproduction of zircon's properties, a Morse potential was additionally used for the Zr-Si pair, as well as a three-body angular harmonic
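
    For orientation, the pair-potential forms mentioned (Buckingham for Zr-O, Si-O and O-O; Morse for Zr-Si) can be evaluated as follows; the parameter values are placeholders, not the optimized sets of the study:

```python
import math

def buckingham(r, a, rho, c):
    """Buckingham pair potential: A*exp(-r/rho) - C/r**6 (eV, r in Angstrom)."""
    return a * math.exp(-r / rho) - c / r**6

def morse(r, d_e, alpha, r0):
    """Morse pair potential: De*((1 - exp(-alpha*(r - r0)))**2 - 1)."""
    return d_e * ((1.0 - math.exp(-alpha * (r - r0)))**2 - 1.0)

# Placeholder parameters, not the fitted "Zircon 1" set.
print(f"O-O   at 2.8 A: {buckingham(2.8, a=22764.0, rho=0.149, c=27.88):+.3f} eV")
print(f"Zr-Si at 3.0 A: {morse(3.0, d_e=0.5, alpha=2.0, r0=3.2):+.3f} eV")
```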

  2. Spatial Resolution Effect on Forest Road Gradient Calculation and Erosion Modelling

    Science.gov (United States)

    Cao, L.; Elliot, W.

    2017-12-01

    Road erosion is one of the main sediment sources in a forest watershed and should be properly evaluated. With the help of GIS technology, road topography can be determined and soil loss can be predicted at a watershed scale. As a vector geographical feature, the road gradient should be calculated along the road direction rather than the hillslope direction. This calculation might be difficult with a coarse (30-m) DEM, which only provides the underlying topography. This study was designed to explore the effect of road segmentation and DEM resolution on road gradient calculation and erosion prediction at a watershed scale. The Water Erosion Prediction Project (WEPP) model was run on road segments of 9 lengths ranging from 40 m to 200 m. Road gradient was calculated from three DEM data sets: 1 m LiDAR, and 10 m and 30 m USGS DEMs. The gradients calculated from the 1 m LiDAR DEM were very close to the field-observed road gradients, so we assumed the 1 m LiDAR DEM predicted the true road gradient. The results revealed that longer road segments skipped detailed topographic undulations and resulted in lower road gradients. Coarser DEMs computed steeper road gradients because larger grid cells covered more adjacent area outside the road, resulting in larger elevation differences. Field-surveyed results also revealed that a coarser DEM might result in larger gradient deviations in a curved road segment when it passes through a convex or concave slope. As road segment length increased, the gradient difference between the three DEMs was reduced. There were no significant differences between road gradients of different segment lengths and DEM resolutions when segments were longer than 100 m. For long segments, the 10 m DEM calculated road gradient was similar to the 1 m LiDAR gradient. When evaluating the effects of road segment length, the predicted erosion rate decreased with increasing length when road gradient was less than 3%. In cases where the road gradients exceed 3% and rill erosion dominates
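
    A minimal sketch of the segment-gradient idea is given below, using a synthetic elevation profile rather than DEM data: the gradient of a road segment is the elevation difference between its endpoints divided by the segment length, so longer segments smooth out local undulations.

```python
# Sketch only: average gradient of consecutive road segments of a chosen length,
# computed from elevations sampled along the road centreline. The profile and
# sample spacing below are invented for illustration, not survey data.
import numpy as np

spacing = 10.0                                   # m between samples along the road
profile = np.array([120.0, 121.5, 124.0, 123.0, 125.5, 128.0,
                    127.0, 130.0, 132.5, 131.0, 134.0])   # elevations, m

def segment_gradients(profile, spacing, seg_len):
    """Gradients (%) of consecutive segments of length seg_len."""
    step = int(round(seg_len / spacing))
    grads = []
    for i in range(0, len(profile) - step, step):
        dz = profile[i + step] - profile[i]
        grads.append(100.0 * abs(dz) / (step * spacing))
    return grads

for seg_len in (20.0, 50.0, 100.0):
    print(seg_len, segment_gradients(profile, spacing, seg_len))
```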

  3. Calculating the renormalisation group equations of a SUSY model with Susyno

    Science.gov (United States)

    Fonseca, Renato M.

    2012-10-01

    Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features

  4. Modelling of pharmaceutical residues in Australian sewage by quantities of use and fugacity calculations.

    Science.gov (United States)

    Khan, Stuart J; Ongerth, Jerry E

    2004-01-01

    A conceptual model is presented for determining which currently prescribed pharmaceutical compounds are most likely to be found in sewage, and for estimating their concentrations, both in raw sewage and after successive stages of secondary sewage treatment. A ranking of the "top-50" pharmaceutical compounds (by total mass dispensed) in Australia over the 1998 calendar year was prepared. Information on the excretion ratios and some metabolites of the pharmaceuticals enabled prediction of the overall rates of excretion into Australian sewage. Mass-balance and fugacity modelling, applied to sewage generation and to a sewage treatment plant, allowed calculation of predicted concentrations of the compounds in raw, primary and secondary treated sewage effluents. Twenty-nine of the modelled pharmaceutical residues were predicted to be present in raw sewage influent at concentrations of 1 µg l-1 or greater. Twenty of the compounds were predicted to remain in secondary effluent at concentrations of 1 µg l-1 or greater.
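
    The mass-balance step can be illustrated with a rough sketch; all numbers below are placeholders, not values from the study. The predicted influent concentration is the excreted mass divided by the sewage volume generated by the served population.

```python
# Sketch only: predicted influent concentration from dispensed mass and excretion.
# The per-capita sewage volume and the example inputs are assumed values.
def influent_concentration_ug_per_L(mass_dispensed_kg_per_year,
                                    excretion_fraction,
                                    population,
                                    sewage_L_per_person_per_day=250.0):
    excreted_ug = mass_dispensed_kg_per_year * excretion_fraction * 1e9  # kg -> ug
    sewage_L = population * sewage_L_per_person_per_day * 365.0
    return excreted_ug / sewage_L

# e.g. 10 t/year of a compound, 50 % excreted unchanged, 19 million people served
print(influent_concentration_ug_per_L(10_000, 0.5, 19_000_000))
```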

  5. Cleanup techniques for Finnish urban environments and external doses from 137Cs - modelling and calculations

    International Nuclear Information System (INIS)

    Moring, M.; Markkula, M.L.

    1997-03-01

    The external doses under various radioactive deposition conditions are assessed and the efficiencies of some simple decontamination techniques (grass cutting, vacuum sweeping, hosing of paved surfaces and roofs, and felling trees) are compared in the study. The present model has been constructed for Finnish conditions and housing areas, using 137Cs transfer data from Nordic and Central European studies and models. The compartment model concerns the behaviour and decontamination of 137Cs in the urban environment under summer conditions. Doses to man have been calculated for wet (light rain) and dry deposition in four typical Finnish building areas: single-family wooden houses, brick terraced houses, blocks of flats and urban office buildings. (26 refs.)

  6. Macroscopic calculational model of fission gas release from water reactor fuels

    International Nuclear Information System (INIS)

    Uchida, Masaki

    1993-01-01

    Existing models for estimating the fission gas release rate usually have fuel temperature as the independent variable. Use of fuel temperature, however, often introduces excess ambiguity into the estimation because it is not a rigorously definable quantity as a function of heat generation rate and burnup. To derive a mathematical model that gives the gas release rate explicitly as a function of design and operational parameters, the Booth-type diffusional model was modified by changing the character of the diffusion constant from a physically meaningful quantity into a mere mathematical parameter, and by replacing its temperature dependence with a dependence on power. The derived formula was found, by proper choice of the arbitrary constants, to satisfactorily predict the release rates under a variety of irradiation histories up to a burnup of 60,000 MWd/t. For simple power histories, the equation can be solved analytically by defining several transcendental functions, which enables simple calculation of the release rate using graphs. (author)
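
    For orientation, the sketch below evaluates the classical short-time Booth approximation for release from a sphere with an initially uniform gas inventory; the parameter values are placeholders, and the modification described above consists in treating the diffusion parameter as an empirical quantity depending on power rather than temperature.

```python
# Sketch of the classical Booth-type release fraction (short-time approximation
# for a sphere of equivalent radius a with an initially uniform gas inventory):
#   f(t) ~ 6*sqrt(D*t/(pi*a**2)) - 3*D*t/a**2   for small D*t/a**2.
# D and a below are assumed illustrative values, not fitted constants.
import math

def booth_release_fraction(D, t, a):
    tau = D * t / a**2
    f = 6.0 * math.sqrt(tau / math.pi) - 3.0 * tau
    return min(max(f, 0.0), 1.0)

print(booth_release_fraction(D=1.0e-20, t=3.15e7, a=5.0e-6))  # roughly one year
```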

  7. Sample size and power calculations based on generalized linear mixed models with correlated binary outcomes.

    Science.gov (United States)

    Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R

    2008-08-01

    The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.
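
    As a back-of-the-envelope companion to the GLIMMIX-based derivation (not a reproduction of it), the sketch below inflates a standard two-proportion sample size by the usual design effect 1 + (m - 1)*rho for m correlated binary measurements per subject and by an assumed attrition rate; all inputs are hypothetical.

```python
# Rough check only: sample size per group for comparing two proportions with
# repeated binary measurements, using the GEE-style design-effect inflation.
from scipy.stats import norm

def n_per_group(p1, p2, m, rho, attrition=0.2, alpha=0.05, power=0.8):
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    n_indep = ((z_a + z_b)**2 * 2 * pbar * (1 - pbar)) / (p1 - p2)**2
    deff = 1 + (m - 1) * rho            # inflation for within-subject correlation
    n = n_indep * deff / m              # m observations contributed per subject
    return int(round(n / (1 - attrition)))   # inflate for anticipated dropout

print(n_per_group(p1=0.30, p2=0.50, m=4, rho=0.3, attrition=0.2))
```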

  8. Development of a transient calculation model for a closed sodium natural circulation loop

    International Nuclear Information System (INIS)

    Chang, Won Pyo; Ha, Kwi Seok; Jeong, Hae Yong; Heo, Sun; Lee, Yong Bum

    2003-09-01

    A natural circulation loop is usually adopted for a Liquid Metal Reactor (LMR) because of its high reliability. Up-rating of the current KALIMER capacity requires an additional PDRC, beside the existing PVCS, to remove decay heat under accident conditions. As the system analysis code currently used for LMRs in Korea does not feature a stand-alone capability to simulate a closed natural circulation loop, it cannot be applied directly to the PDRC. To supplement this limitation, a steady-state calculation model was developed during the first phase, and development of the transient model has subsequently been carried out to complete the present study. The developed model will then be coupled with the system analysis code SSC-K to assess long-term cooling for the new conceptual design. The assumption of incompressible sodium, which allows the circuit to be modelled as a single loop flow, greatly simplifies the model compared with an LWR. Some thermal-hydraulic models developed during this study can be effectively applied to other system analysis codes which require such component models, and the present development will also contribute to the establishment of a code system for LMR analysis

  9. Set of molecular models based on quantum mechanical ab initio calculations and thermodynamic data.

    Science.gov (United States)

    Eckl, Bernhard; Vrabec, Jadran; Hasse, Hans

    2008-10-09

    A parametrization strategy for molecular models on the basis of force fields is proposed, which allows a rapid development of models for small molecules by using results from quantum mechanical (QM) ab initio calculations and thermodynamic data. The geometry of the molecular models is specified according to the atom positions determined by QM energy minimization. The electrostatic interactions are modeled by reducing the electron density distribution to point dipoles and point quadrupoles located in the center of mass of the molecules. Dispersive and repulsive interactions are described by Lennard-Jones sites, for which the parameters are iteratively optimized to experimental vapor-liquid equilibrium (VLE) data, i.e., vapor pressure, saturated liquid density, and enthalpy of vaporization of the considered substance. The proposed modeling strategy was applied to a sample set of ten molecules from different substance classes. New molecular models are presented for iso-butane, cyclohexane, formaldehyde, dimethyl ether, sulfur dioxide, dimethyl sulfide, thiophene, hydrogen cyanide, acetonitrile, and nitromethane. Most of the models are able to describe the experimental VLE data with deviations of a few percent.
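
    The dispersive/repulsive part of such models can be illustrated with a plain Lennard-Jones site; the sigma and epsilon values below are generic placeholders which, in the strategy described above, would be iterated until the simulated vapor pressure, saturated liquid density and enthalpy of vaporization match experiment.

```python
# Illustration only: a single Lennard-Jones site with invented parameters.
def lennard_jones(r_nm, sigma_nm=0.35, epsilon_K=120.0):
    """u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6), epsilon in Kelvin units."""
    x = (sigma_nm / r_nm)**6
    return 4.0 * epsilon_K * (x * x - x)

print(lennard_jones(0.40))   # attractive region
print(lennard_jones(0.30))   # repulsive region
```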

  10. Calculation of the Initial Magnetic Field for Mercury's Magnetosphere Hybrid Model

    Science.gov (United States)

    Alexeev, Igor; Parunakian, David; Dyadechkin, Sergey; Belenkaya, Elena; Khodachenko, Maxim; Kallio, Esa; Alho, Markku

    2018-03-01

    Several types of numerical models are used to analyze the interactions of the solar wind flow with Mercury's magnetosphere, including kinetic models that determine magnetic and electric fields based on the spatial distribution of charges and currents, magnetohydrodynamic models that describe plasma as a conductive liquid, and hybrid models that describe ions kinetically in collisionless mode and represent electrons as a massless neutralizing liquid. The structure of resulting solutions is determined not only by the chosen set of equations that govern the behavior of plasma, but also by the initial and boundary conditions; i.e., their effects are not limited to the amount of computational work required to achieve a quasi-stationary solution. In this work, we have proposed using the magnetic field computed by the paraboloid model of Mercury's magnetosphere as the initial condition for subsequent hybrid modeling. The results of the model have been compared to measurements performed by the Messenger spacecraft during a single crossing of the magnetosheath and the magnetosphere. The selected orbit lies in the terminator plane, which allows us to observe two crossings of the bow shock and the magnetopause. In our calculations, we have defined the initial parameters of the global magnetospheric current systems in a way that allows us to minimize paraboloid magnetic field deviation along the trajectory of the Messenger from the experimental data. We have shown that the optimal initial field parameters include setting the penetration of a partial interplanetary magnetic field into the magnetosphere with a penetration coefficient of 0.2.

  11. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    Science.gov (United States)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described by relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which power our cloud services, as well as our strategies for maintaining near real-time service, including load balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API

  12. A Numerical Method for Calculating Stellar Occultation Light Curves from an Arbitrary Atmospheric Model

    Science.gov (United States)

    Chamberlain, D. M.; Elliot, J. L.

    1997-01-01

    We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.

  13. Target model of nucleosome particle for track structure calculations and DNA damage modeling

    Czech Academy of Sciences Publication Activity Database

    Michalik, Věslav; Běgusová, Marie

    1994-01-01

    Vol. 66, No. 3 (1994), pp. 267-277. ISSN 0955-3002. R&D Projects: GA ČR(CZ) GA204/93/2451; GA AV ČR(CZ) IA135102; GA AV ČR(CZ) IA50405. Keywords: DNA nucleosome * ionizing radiation * theoretical modeling. Subject RIV: AQ - Safety, Health Protection, Human - Machine. Impact factor: 2.761, year: 1994

  14. Calculating osmotic pressure of xylitol solutions from molality according to UNIFAC model and measuring it with air humidity osmometry.

    Science.gov (United States)

    Yu, Lan; Zhan, Tingting; Zhan, Xiancheng; Wei, Guocui; Tan, Xiaoying; Wang, Xiaolan; Li, Chengrong

    2014-11-01

    The osmotic pressure of xylitol solutions over a wide concentration range was calculated according to the UNIFAC model and experimentally determined by our newly reported air humidity osmometry. The measurements from air humidity osmometry were compared with the UNIFAC model calculations from dilute to saturated solutions. The results indicate that the air humidity osmometry measurements are comparable to the UNIFAC model calculations over a wide concentration range, as judged by two one-sided tests with multiple testing corrections. Air humidity osmometry is applicable to measuring the osmotic pressure, and the osmotic pressure can be calculated from the concentration.
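
    Only the last step of such a calculation is sketched below: once a model (UNIFAC or any other activity-coefficient model) supplies the water activity a_w of the solution, the osmotic pressure follows from the generic relation pi = -(R*T/V_w)*ln(a_w), with V_w the molar volume of water. This is not the UNIFAC activity calculation itself.

```python
# Sketch only: osmotic pressure from water activity; a_w would come from the
# activity-coefficient model, and the example value here is arbitrary.
import math

R = 8.314          # J/(mol K)
V_w = 1.805e-5     # m^3/mol, molar volume of water near 25 degC

def osmotic_pressure_Pa(a_w, T=298.15):
    return -(R * T / V_w) * math.log(a_w)

print(osmotic_pressure_Pa(0.995) / 1e5, "bar")   # dilute-solution example
```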

  15. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.

  16. On the mixing model for calculating the temperature fields in nuclear reactor fuel assemblies

    International Nuclear Information System (INIS)

    Mikhin, V.I.; Zhukov, A.V.

    1985-01-01

    One variant of the mixing model applied to the calculation of temperature fields in nuclear reactor fuel assemblies, including fuel assemblies with nonequilibrium energy release over the fuel element cross section, is consistently described. The equations for both constant and variable values of the coolant density and heat capacity are obtained. The mixing model is based on a set of mass, heat and longitudinal momentum balance equations. This set is closed by relations connecting the unknown values in the gaps between fuel elements with the averaged values for the neighbouring channels. Relations for closing the momentum and heat balance equations are suggested which account, in particular, for the non-equivalence of the heat-mass and momentum-mass transfer coefficients. The balance equations with variable coolant density and heat capacity are reduced to a form coinciding with that of the corresponding equations with constant values of these parameters. The application of one of the main relations of the mixing model, relating the transverse coolant overflow in the gaps between fuel elements to the averaged coolant velocities (flow rates) in the neighbouring channels, is mainly limited to stabilized coolant flow in fuel assemblies with a regular, symmetrical arrangement of elements. Mass transfer coefficients for these elements are determined experimentally. The relation given in the paper is also applicable to the calculation of fuel assembly temperature fields with a small relative shift of the elements

  17. RA3: Application of a calculation model for fuel management with SEFE (Slightly Enriched Fuel Elements)

    International Nuclear Information System (INIS)

    Estryk, G.; Higa, M.

    1993-01-01

    The RA-3 (5 MW, MTR) reactor is mainly used to produce radioisotopes (Mo-99, I-131, etc.). It started operating with Low Enrichment Uranium (LEU) in 1990 and consumes around 12 fuel elements per year. Although this consumption is small compared to that of a nuclear power station, it is important to manage the fuel well. The present report describes: - a reactor model to perform the fuel shuffling; - results of fuel management simulations for two and a half years of operation. Some features of the calculations can be summarized as follows: 1) A 3D calculation model is used with the code PUMA. It has no experimental adjustments, except for some approximations in the reflector representation, and predicts power, flux distributions and core reactivity in an acceptable way. 2) Comparisons have been made with the measurements taken during the commissioning with LEU fuels, and also with the empirical method which had been used in the earlier period of operation with LEU fuel. 3) The model has approximately 13500 points and can be run on an 80386 personal computer. The present method has been verified as a good tool to perform the simulations for the fuel management of the RA-3 reactor. It is expected to produce economic advantages by: - achieving a better utilization of the fuels; - leaving more time of operation for radioisotope production. The activation measurements through the whole core required by the previous method can be significantly reduced. (author)

  18. Calculation methods for simulation and modelling of nuclear power plant accidents

    International Nuclear Information System (INIS)

    Zurita Centelles, A.

    1985-01-01

    The study deals with the development of calculation procedures for the determination of transient operating conditions in pressurized water reactors, which have the following characteristics: application of largely analytic methods for the description of primary circuit components; a strictly modular program structure allowing easy exchange of component models; applicability of different component models according to the case at hand; large ranges of validity of the thermodynamic state variables in the transient models; the possibility of exchanging slip, pressure drop and heat transfer correlations, as well as other functions, when necessary; and application of the lumped-parameter approach in the dynamic component analyses, suited to the system representation. With these calculation procedures it is possible to analyse the effect of a selection of transients - up to turbine trip and reactor emergency shutdown - on the individual primary circuit components. These transients can generally be classified among changes in heat rejection or heat input in the secondary circuit, in the coolant, or in the reactivity balance and power distribution. (orig.) [de

  19. Calculating the Probability of Returning a Loan with Binary Probability Models

    Directory of Open Access Journals (Sweden)

    Julian Vasilev

    2014-12-01

    The purpose of this article is to give a new approach to calculating the probability of returning a loan. Many factors affect the value of this probability. In this article, some influencing factors are identified using statistical and econometric models. The main approach is concerned with applying probit and logit models in loan management institutions, giving a new aspect to credit risk analysis. Calculating the probability of returning a loan is a difficult task. We assume that specific data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower of the loan (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable "the probability of returning a loan". It is shown that the month of signing a contract, the year of signing a contract, the gender and the age of the loan owner do not affect the probability of returning a loan. It is shown that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner and the month of birth. The probability of returning a loan increases with the increase of the given sum, decreases with the proximity of the customer, increases for people born at the beginning of the year and decreases for people born at the end of the year.
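
    A hedged sketch of such a binary logit fit is shown below; the data are synthetic and the variable names (loan sum, remoteness, birth month) merely mimic the kind of fields described in the article, so the estimated coefficients have no real-world meaning.

```python
# Sketch only: fitting a logit model on synthetic loan data with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
loan_sum    = rng.uniform(500, 5000, n)
remoteness  = rng.uniform(1, 60, n)      # km from the branch (hypothetical field)
birth_month = rng.integers(1, 13, n)

# Invented "true" relationship, only so that there is something to recover.
logit = -1.0 + 0.0006 * loan_sum + 0.02 * remoteness - 0.08 * birth_month
p = 1.0 / (1.0 + np.exp(-logit))
returned = rng.binomial(1, p)            # 1 = loan repaid

X = sm.add_constant(np.column_stack([loan_sum, remoteness, birth_month]))
fit = sm.Logit(returned, X).fit(disp=0)
print(fit.params)          # estimated coefficients
print(fit.predict(X)[:5])  # fitted probabilities of returning a loan
```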

  20. Establishing credibility in the environmental models used for safety and licensing calculations in the nuclear industry

    International Nuclear Information System (INIS)

    Davis, P.A.

    1997-01-01

    Models that simulate the transport and behaviour of radionuclides in the environment are used extensively in the nuclear industry for safety and licensing purposes. They are needed to calculate derived release limits for new and operating facilities, to estimate consequences following hypothetical accidents and to help manage a real emergency. But predictions generated for these purposes are essentially meaningless unless they are accompanied by a quantitative estimate of the confidence that can be placed in them. For example, in an emergency where there has been an accidental release of radioactivity to the atmosphere, decisions based on a validated model with small uncertainties would likely be very different from those based on an untested model, or on one with large uncertainties. This paper begins with a discussion of some general methods for establishing the credibility of model predictions. The focus will be on environmental transport models but the principles apply to models of all kinds. Establishing the credibility of a model is not a trivial task. It involves a number of steps, including face validation, verification, experimental validation, and sensitivity and uncertainty analyses. The remainder of the paper presents quantitative results relating to the credibility of environmental transport models. Model formulation, the choice of parameter values and the influence of the user will all be discussed as sources of uncertainty in predictions. The magnitude of the uncertainties that must be expected in various applications of the models will be presented. The examples used throughout the paper are drawn largely from recent work carried out in BIOMOVS and VAMP. (DM)

  1. HENRY'S LAW CALCULATOR

    Science.gov (United States)

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
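
    The kind of screening calculation such a calculator wraps can be sketched as follows, with a purely illustrative dimensionless Henry's law constant; this is not the OnSite implementation.

```python
# Sketch only: equilibrium air-water partitioning via a dimensionless Henry's
# law constant H_cc = C_air / C_water. All values are placeholders.
def air_concentration(c_water_mg_per_L, H_cc):
    """Equilibrium air-phase concentration, same units as the water phase."""
    return H_cc * c_water_mg_per_L

# e.g. a compound at 2 mg/L in water with an assumed H_cc of 0.01
print(air_concentration(2.0, 0.01), "mg/L in air at equilibrium")
```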

  2. Development of polarizable models for molecular mechanical calculations I: parameterization of atomic polarizability.

    Science.gov (United States)

    Wang, Junmei; Cieplak, Piotr; Li, Jie; Hou, Tingjun; Luo, Ray; Duan, Yong

    2011-03-31

    In this work, four types of polarizable models have been developed for calculating interactions between atomic charges and induced point dipoles: the Applequist, Thole linear, Thole exponential, and Thole Tinker-like models. The polarizability models have been optimized to reproduce the experimental static molecular polarizabilities obtained from molecular refraction measurements on a set of 420 molecules reported by Bosque and Sales. We grouped the models into five sets depending on the interaction types, that is, whether the interactions between two atoms that form a bond, bond angle, or dihedral angle are turned off or scaled down. When 1-2 (bonded) and 1-3 (separated by two bonds) interactions are turned off, 1-4 (separated by three bonds) interactions are scaled down, or both, all models including the Applequist model achieved similar performance: the average percentage error (APE) ranges from 1.15 to 1.23%, and the average unsigned error (AUE) ranges from 0.143 to 0.158 Å³. When the short-range 1-2, 1-3, and full 1-4 terms are taken into account (set D models), the APE ranges from 1.30 to 1.58% for the three Thole models, whereas the Applequist model (DA) has a significantly larger APE (3.82%). The AUE ranges from 0.166 to 0.196 Å³ for the three Thole models, compared with 0.446 Å³ for the Applequist model. Further assessment using the 70-molecule van Duijnen and Swart data set clearly showed that the developed models are both accurate and highly transferable, and in fact have smaller errors than the models developed using this particular data set (set E models). The fact that the A, B, and C model sets are notably more accurate than both the D and E model sets strongly suggests that the inclusion of 1-2 and 1-3 interactions reduces transferability and accuracy.

  3. Calculation and visualisation of future glacier extent in the Swiss Alps by means of hypsographic modelling

    Science.gov (United States)

    Paul, F.; Maisch, M.; Rothenbühler, C.; Hoelzle, M.; Haeberli, W.

    2007-02-01

    The observed rapid glacier wastage in the European Alps during the past 20 years already has strong impacts on the natural environment (rock fall, lake formation) as well as on human activities (tourism, hydro-power production, etc.) and poses several new challenges also for glacier monitoring. With a further increase of global mean temperature in the future, it is likely that Alpine glaciers and the high-mountain environment as an entire system will further develop into a state of imbalance. Hence, the assessment of future glacier geometries is a valuable prerequisite for various impact studies. In order to calculate and visualize in a consistent manner future glacier extent for a large number of individual glaciers (>100) according to a given climate change scenario, we have developed an automated and simple but robust approach that is based on an empirical relationship between glacier size and the steady-state accumulation area ratio (AAR0) in the Alps. The model requires digital glacier outlines and a digital elevation model (DEM) only and calculates new glacier geometries from a given shift of the steady-state equilibrium line altitude (ELA0) by means of hypsographic modelling. We have calculated changes in number, area and volume for 3062 individual glacier units in Switzerland and applied six step changes in ELA0 (from +100 to +600 m) combined with four different values of the AAR0 (0.5, 0.6, 0.67, 0.75). For an AAR0 of 0.6 and an ELA0 rise of 200 m (400 m) we calculate a total area loss of -54% (-80%) and a corresponding volume loss of -50% (-78%) compared to the 1973 glacier extent. In combination with a geocoded satellite image, the future glacier outlines are also used for automated rendering of perspective visualisations. This is a very attractive tool for communicating research results to the general public. Our study is illustrated for a test site in the Upper Engadine (Switzerland), where landscape changes above timberline play an
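
    A minimal sketch of the hypsographic step is given below, assuming a synthetic hypsometry rather than the Swiss DEM data: raise the equilibrium line by a prescribed amount, keep the cells above the new ELA as the accumulation area, and retain only as much total area as is consistent with the chosen AAR0.

```python
# Sketch only: hypsographic estimate of a glacier's future extent. Inputs are
# the elevations of the currently glacierized DEM cells (synthetic here), the
# present ELA0, the prescribed ELA0 rise, and the assumed steady-state AAR0.
import numpy as np

def new_glacier_area_km2(cell_elev_m, cell_area_km2, ela0_m, delta_ela_m, aar0=0.6):
    elev = np.asarray(cell_elev_m, dtype=float)
    new_ela = ela0_m + delta_ela_m
    acc_area = cell_area_km2 * np.sum(elev >= new_ela)   # accumulation area
    if acc_area == 0.0:
        return 0.0                                       # glacier disappears
    target_total = acc_area / aar0                       # total area so that AAR = AAR0
    current_total = cell_area_km2 * elev.size
    return min(target_total, current_total)              # cannot exceed present extent

# synthetic glacier: 1000 cells of 0.0025 km2 spread between 2400 and 3400 m
elev = np.linspace(2400.0, 3400.0, 1000)
print(new_glacier_area_km2(elev, 0.0025, ela0_m=2900.0, delta_ela_m=200.0))
```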

  4. A model to calculate effectiveness of a submarine-launched nuclear ASW weapon

    Energy Technology Data Exchange (ETDEWEB)

    Magnoli, D.E.

    1989-06-01

    LLNL's Navy Tactical Applications Group (NTAG) has produced a computer model to calculate the probability of kill of a submarine-launched nuclear ASW standoff weapon. Because of the uncertainties associated with target position and motion and with weapon delivery, this is a problem appropriately treated statistically. The code is a Monte Carlo simulation which follows the engagement from localization through optional evasive maneuvers of the target to attack and damage assessment. For a given scenario (weapon characteristics, target characteristics, firing platform depth and hardness, etc.) the code produces a table and ultimately a plot of Pk as a function of range. 2 figs., 1 tab.

  5. Systematical calculation of α decay half-lives with a generalized liquid drop model

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Xiaojun [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Zhang, Hongfei, E-mail: zhanghongfei@lzu.edu.cn [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Zhang, Haifei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Royer, G. [Laboratoire Subatech, UMR, IN2P3/CNRS, Université – Ecole des Mines, 44 Nantes (France); Li, Junqing [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China)

    2014-01-15

    A systematic calculation of α decay half-lives is presented for even–even nuclei between Te and Z=118 isotopes. The potential energy governing α decay has been determined within a liquid drop model including proximity effects between the α particle and the daughter nucleus and taking into account the experimental Q value. The α decay half-lives have been deduced from the WKB barrier penetration probability. The α decay half-lives obtained agree reasonably well with the experimental data.

  6. Programs and subroutines for calculating cadmium body burdens based on a one-compartment model

    International Nuclear Information System (INIS)

    Robinson, C.V.; Novak, K.M.

    1980-08-01

    A pair of FORTRAN programs for calculating the body burden of cadmium as a function of age is presented, together with a discussion of the assumptions which serve to specify the underlying, one-compartment model. Account is taken of the contributions to the body burden from food, from ambient air, from smoking, and from occupational inhalation. The output is a set of values for ages from birth to 90 years which is either longitudinal (for a given year of birth) or cross-sectional (for a given calendar year), depending on the choice of input parameters
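
    In place of the FORTRAN listings, a one-compartment burden calculation of this general form can be sketched as follows; the absorbed daily intake and the biological half-life are assumed placeholder values, not the report's parameters.

```python
# Sketch only: body burden B(t) for constant absorbed intake I and first-order
# elimination with rate lambda:  B(t) = (I/lambda) * (1 - exp(-lambda*t)).
import math

def body_burden_mg(age_years,
                   absorbed_ug_per_day=1.0,      # food + air + smoking, assumed
                   half_life_years=20.0):        # assumed biological half-life
    lam = math.log(2.0) / (half_life_years * 365.0)   # per day
    days = age_years * 365.0
    return absorbed_ug_per_day / lam * (1.0 - math.exp(-lam * days)) / 1000.0

for age in (20, 50, 90):
    print(age, round(body_burden_mg(age), 2), "mg")
```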

  7. Flow aerodynamics modeling of an MHD swirl combustor - calculations and experimental verification

    International Nuclear Information System (INIS)

    Gupta, A.K.; Beer, J.M.; Louis, J.F.; Busnaina, A.A.; Lilley, D.G.

    1981-01-01

    This paper describes a computer code for calculating the flow dynamics of constant-density flow in the second-stage, trumpet-shaped nozzle section of a two-stage MHD swirl combustor for application to a disk generator. The primitive-variable (pressure-velocity), finite-difference computer code has been developed to allow the computation of inert, nonreacting turbulent swirling flows in an axisymmetric MHD model swirl combustor. The method and program involve a staggered grid system for axial and radial velocities, and a line relaxation technique for efficient solution of the equations. The code produces as output the flow field map of the non-dimensional stream function, axial and swirl velocity. 19 refs

  8. Model-supported forward calculation of secondary helium observed by IBEX

    Science.gov (United States)

    Mueller, H. R.; Wood, B. E.

    2017-12-01

    Low-energy secondary neutral helium, created by charge exchange from interstellar helium ions, flows into the inner heliosphere and is part of the neutral helium signal observed by IBEX, the other contributor being primary neutral helium directly from interstellar space. With the help of an accurate, analytic heliospheric neutral test-particle code coupled to a global heliospheric model dominated by hydrogen and protons, the distribution functions and fluxes of secondary helium neutrals are calculated theoretically, from first principles. A general assessment of the characteristics and main sources of secondaries is given, as well as a discussion of their relevance to probe the outer heliosheath.

  9. An approximate framework for quantum transport calculation with model order reduction

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Quan, E-mail: quanchen@eee.hku.hk [Department of Electrical and Electronic Engineering, The University of Hong Kong (Hong Kong); Li, Jun [Department of Chemistry, The University of Hong Kong (Hong Kong); Yam, Chiyung [Beijing Computational Science Research Center (China); Zhang, Yu [Department of Chemistry, The University of Hong Kong (Hong Kong); Wong, Ngai [Department of Electrical and Electronic Engineering, The University of Hong Kong (Hong Kong); Chen, Guanhua [Department of Chemistry, The University of Hong Kong (Hong Kong)

    2015-04-01

    A new approximate computational framework is proposed for computing the non-equilibrium charge density in the context of the non-equilibrium Green's function (NEGF) method for quantum mechanical transport problems. The framework consists of a new formulation, called the X-formulation, for single-energy density calculation based on the solution of sparse linear systems, and a projection-based nonlinear model order reduction (MOR) approach to address the large number of energy points required for large applied biases. The advantages of the new methods are confirmed by numerical experiments.

  10. GoSam 2.0. Automated one loop calculations within and beyond the standard model

    International Nuclear Information System (INIS)

    Greiner, Nicolas; Deutsches Elektronen-Synchrotron

    2014-10-01

    We present GoSam 2.0, a fully automated framework for the generation and evaluation of one-loop amplitudes in multi-leg processes. The new version offers numerous improvements both on the generation side and on the reduction side. This leads to a faster and more stable code for calculations within and beyond the Standard Model. Furthermore, it contains the extended version of the standardized interface to Monte Carlo programs, which allows for an easy combination with other existing tools. We briefly describe the conceptual innovations and present some phenomenological results.

  11. Shell-model calculations of beta-decay rates for s- and r-process nucleosyntheses

    International Nuclear Information System (INIS)

    Takahashi, K.; Mathews, G.J.; Bloom, S.D.

    1985-01-01

    Examples of large-basis shell-model calculations of Gamow-Teller β-decay properties of specific interest in the astrophysical s- and r- processes are presented. Numerical results are given for: (1) the GT-matrix elements for the excited state decays of the unstable s-process nucleus 99 Tc; and (2) the GT-strength function for the neutron-rich nucleus 130 Cd, which lies on the r-process path. The results are discussed in conjunction with the astrophysics problems. 23 refs., 3 figs

  12. Modeling the Electrochemical Hydrogen Oxidation and Evolution Reactions on the Basis of Density Functional Theory Calculations

    DEFF Research Database (Denmark)

    Skulason, Egill; Tripkovic, Vladimir; Björketun, Mårten

    2010-01-01

    Density functional theory calculations have been performed for the three elementary steps―Tafel, Heyrovsky, and Volmer―involved in the hydrogen oxidation reaction (HOR) and its reverse, the hydrogen evolution reaction (HER). For the Pt(111) surface a detailed model consisting of a negatively charged Pt(111) slab and solvated protons in up to three water bilayers is considered, and reaction energies and activation barriers are determined by using a newly developed computational scheme where the potential can be kept constant during a charge transfer reaction. We determine the rate limiting

  13. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    DEFF Research Database (Denmark)

    Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper

    2010-01-01

    Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function...

  14. Sensitivity analysis using the FRAPCON-1/EM: development of a calculation model for licensing

    International Nuclear Information System (INIS)

    Chapot, J.L.C.

    1985-01-01

    The FRAPCON-1/EM is a version of the FRAPCON-1 code which analyses fuel rod performance under normal operating conditions. This version yields conservative results and is used by the NRC in its licensing activities. A sensitivity analysis was performed to determine the combination of models from FRAPCON-1/EM that yields the most conservative results for a typical Angra-1 reactor fuel rod. The present analysis showed that this code can be used as a calculation tool for the licensing of the Angra-1 reload. (F.E.) [pt

  15. Aerosol Optical Properties Derived from the DRAGON-NE Asia Campaign, and Implications for a Single-Channel Algorithm to Retrieve Aerosol Optical Depth in Spring from Meteorological Imager (MI) On-Board the Communication, Ocean, and Meteorological Satellite (COMS)

    Science.gov (United States)

    Kim, M.; Kim, J.; Jeong, U.; Kim, W.; Hong, H.; Holben, B.; Eck, T. F.; Lim, J.; Song, C.; Lee, S.

    2016-01-01

    An aerosol model optimized for northeast Asia is updated with the inversion data from the Distributed Regional Aerosol Gridded Observation Networks (DRAGON)-northeast (NE) Asia campaign which was conducted during spring from March to May 2012. This updated aerosol model was then applied to a single visible channel algorithm to retrieve aerosol optical depth (AOD) from a Meteorological Imager (MI) on-board the geostationary meteorological satellite, Communication, Ocean, and Meteorological Satellite (COMS). This model plays an important role in retrieving accurate AOD from a single visible channel measurement. For the single-channel retrieval, sensitivity tests showed that perturbations by 4 % (0.926 +/- 0.04) in the assumed single scattering albedo (SSA) can result in the retrieval error in AOD by over 20 %. Since the measured reflectance at the top of the atmosphere depends on both AOD and SSA, the overestimation of assumed SSA in the aerosol model leads to an underestimation of AOD. Based on the AErosol RObotic NETwork (AERONET) inversion data sets obtained over East Asia before 2011, seasonally analyzed aerosol optical properties (AOPs) were categorized by SSAs at 675 nm of 0.92 +/- 0.035 for spring (March, April, and May). After the DRAGON-NE Asia campaign in 2012, the SSA during spring showed a slight increase to 0.93 +/- 0.035. In terms of the volume size distribution, the mode radius of coarse particles was increased from 2.08 +/- 0.40 to 2.14 +/- 0.40. While the original aerosol model consists of volume size distribution and refractive indices obtained before 2011, the new model is constructed by using a total data set after the DRAGON-NE Asia campaign. The large volume of data in high spatial resolution from this intensive campaign can be used to improve the representative aerosol model for East Asia. Accordingly, the new AOD data sets retrieved from a single-channel algorithm, which uses a precalculated look-up table (LUT) with the new aerosol model, show

  16. Aerosol optical properties derived from the DRAGON-NE Asia campaign, and implications for a single-channel algorithm to retrieve aerosol optical depth in spring from Meteorological Imager (MI) on-board the Communication, Ocean, and Meteorological Satellite (COMS)

    Directory of Open Access Journals (Sweden)

    M. Kim

    2016-02-01

    An aerosol model optimized for northeast Asia is updated with the inversion data from the Distributed Regional Aerosol Gridded Observation Networks (DRAGON)-northeast (NE) Asia campaign which was conducted during spring from March to May 2012. This updated aerosol model was then applied to a single visible channel algorithm to retrieve aerosol optical depth (AOD) from a Meteorological Imager (MI) on-board the geostationary meteorological satellite, Communication, Ocean, and Meteorological Satellite (COMS). This model plays an important role in retrieving accurate AOD from a single visible channel measurement. For the single-channel retrieval, sensitivity tests showed that perturbations by 4 % (0.926 ± 0.04) in the assumed single scattering albedo (SSA) can result in the retrieval error in AOD by over 20 %. Since the measured reflectance at the top of the atmosphere depends on both AOD and SSA, the overestimation of assumed SSA in the aerosol model leads to an underestimation of AOD. Based on the AErosol RObotic NETwork (AERONET) inversion data sets obtained over East Asia before 2011, seasonally analyzed aerosol optical properties (AOPs) were categorized by SSAs at 675 nm of 0.92 ± 0.035 for spring (March, April, and May). After the DRAGON-NE Asia campaign in 2012, the SSA during spring showed a slight increase to 0.93 ± 0.035. In terms of the volume size distribution, the mode radius of coarse particles was increased from 2.08 ± 0.40 to 2.14 ± 0.40. While the original aerosol model consists of volume size distribution and refractive indices obtained before 2011, the new model is constructed by using a total data set after the DRAGON-NE Asia campaign. The large volume of data in high spatial resolution from this intensive campaign can be used to improve the representative aerosol model for East Asia. Accordingly, the new AOD data sets retrieved from a single-channel algorithm, which uses a precalculated look-up table (LUT) with the new aerosol model

  17. Modeling of tube current modulation methods in computed tomography dose calculations for adult and pregnant patients

    International Nuclear Information System (INIS)

    Caracappa, Peter F.; Xu, X. George; Gu, Jianwei

    2011-01-01

    The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing the radiation dose to imaging patients. Among these is the application of tube current modulation (TCM), which can be applied either longitudinally along the body, rotationally around the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have previously been prepared for constant-current rotational exposures at various positions along the body and for the principal exposure projections for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations for CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at the z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at the rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes for pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need for including TCM in CT dose estimates will necessarily increase. (author)
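
    The longitudinal weighting scheme can be sketched as follows, with invented slice doses and an invented current profile: pre-computed constant-current dose contributions at each z position are re-weighted by the relative tube current of the modulation scheme.

```python
# Sketch only: longitudinal TCM dose as a current-weighted sum of slice doses.
# The per-slice dose coefficients and the mAs profile are invented numbers.
import numpy as np

slice_dose_per_100mAs = np.array([0.8, 1.0, 1.2, 1.5, 1.3, 0.9])  # mGy per 100 mAs
reference_mAs = 100.0
tcm_mAs = np.array([120.0, 110.0, 80.0, 60.0, 90.0, 130.0])        # modulated current

tcm_dose   = np.sum(slice_dose_per_100mAs * (tcm_mAs / reference_mAs))
fixed_dose = np.sum(slice_dose_per_100mAs * (tcm_mAs.mean() / reference_mAs))
print(f"TCM-weighted dose: {tcm_dose:.2f} mGy, "
      f"same total mAs without TCM: {fixed_dose:.2f} mGy")
```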

  18. Novel and Efficient Methods for Calculating Pressure in Polymer Lattice Models

    Science.gov (United States)

    Zhang, Pengfei; Wang, Qiang

    2014-03-01

    Pressure calculation in polymer lattice models is an important but nontrivial subject. The three existing methods - thermodynamic integration, repulsive wall, and sedimentation equilibrium - all have their limitations and cannot be used to accurately calculate the pressure at all polymer volume fractions φ. Here we propose two novel methods. In the first method, we combine Monte Carlo simulation in an expanded grand-canonical ensemble with Wang-Landau Optimized-Ensemble (WL-OE) simulation to calculate the pressure as a function of polymer volume fraction, which is very efficient at low to intermediate φ and exhibits negligible finite-size effects. In the second method, we introduce a repulsive plane with bridging bonds, which is similar to the repulsive wall method but eliminates its confinement effects, and estimate the two-dimensional density of states (in terms of the number of bridging bonds and the contact number) using the 1/t version of the Wang-Landau algorithm. This works well at all φ, especially at high φ where all methods involving chain insertion trial moves fail.

  19. Mixed layer depth calculation in deep convection regions in ocean numerical models

    Science.gov (United States)

    Courtois, Peggy; Hu, Xianmin; Pennelly, Clark; Spence, Paul; Myers, Paul G.

    2017-12-01

    Mixed Layer Depths (MLDs) diagnosed by conventional numerical models are generally based on a density difference with the surface (e.g., 0.01 kg.m-3). However, the temperature-salinity compensation and the lack of vertical resolution contribute to over-estimated MLD, especially in regions of deep convection. In the present work, we examined the diagnostic MLD, associated with the deep convection of the Labrador Sea Water (LSW), calculated with a simple density difference criterion. The over-estimated MLD led us to develop a new tool, based on an observational approach, to recalculate MLD from model output. We used an eddy-permitting, 1/12° regional configuration of the Nucleus for European Modelling of the Ocean (NEMO) to test and discuss our newly defined MLD. We compared our new MLD with that from observations, and we showed a major improvement with our new algorithm. To show the new MLD is not dependent on a single model and its horizontal resolution, we extended our analysis to include 1/4° eddy-permitting simulations, and simulations using the Modular Ocean Model (MOM) model.
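
    The conventional criterion referred to above can be sketched as follows, using a synthetic, weakly stratified profile: the diagnosed MLD is the shallowest depth at which density exceeds a near-surface reference value by a fixed threshold, which is why weak deep-convection stratification can push the diagnosed depth very deep.

```python
# Sketch only: threshold-based MLD diagnostic with a 0.01 kg/m3 criterion.
# The depth/density profile is synthetic, not model or observational data.
import numpy as np

depth   = np.array([5, 10, 20, 50, 100, 200, 400, 800, 1500.0])        # m
sigma_t = np.array([27.600, 27.600, 27.605, 27.607, 27.608, 27.610,
                    27.625, 27.700, 27.800])                            # kg/m3

def mld_threshold(depth, sigma, ref_depth=10.0, threshold=0.01):
    sigma_ref = np.interp(ref_depth, depth, sigma)
    deeper = depth >= ref_depth
    exceed = np.where(sigma[deeper] > sigma_ref + threshold)[0]
    return depth[deeper][exceed[0]] if exceed.size else depth[-1]

print(mld_threshold(depth, sigma_t), "m")
```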

  20. Comparison between environmental measurements and model calculations of radioactivity in fish at the Swedish nuclear power plants and Studsvik

    International Nuclear Information System (INIS)

    Karlberg, O.

    1995-02-01

    Doses to critical groups from the activity released from Swedish reactors were modelled in 1983. In this report these calculations are compared to doses calculated (using the same assumptions as in the 1983 model) from the activity measured in the water recipient. The study shows that the model overestimates activity in biota and sediments, which was expected, since the model was constructed to be conservative. 13 refs, 5 figs, 6 tabs

  1. Program realization of mathematical model of kinematic calculation of flat lever mechanisms

    Directory of Open Access Journals (Sweden)

    M. A. Vasechkin

    2016-01-01

    Kinematic calculation of mechanisms is very time-consuming work. Because it consists of a large number of similar operations, it can be automated using computers. For this purpose, it is necessary to implement a software realization of the mathematical model for the kinematic calculation of mechanisms of the second class. The article presents, in Turbo Pascal, the text of a module with library procedures for all kinematic studies of planar lever mechanisms of the second class. The determination of the kinematic characteristics of a mechanism and the construction of its plans of positions, velocities and accelerations are carried out on the example of a six-link mechanism. The origin of the fixed coordinate system coincides with the axis of rotation of the crank AB. It is assumed that the lengths of all links, the positions of all additional points of the links and the coordinates of all kinematic pairs of the mechanism frame are known, i.e. this stage of the work on determining the kinematics of the mechanism must be preceded by the stage of synthesis of the mechanism (determining the missing dimensions of the links). Specifying the coordinates of point C, and considering that the analogues of the velocities and accelerations of this point are zero (a stationary point), we call the procedure that computes the kinematics of an Assur group (GA) of the third kind. We then specify the kinematic parameters of point D, taking the beginning of the slider guide E at point C; with the angle, the analogue of the angular velocity and the analogue of the angular acceleration of the guide set to zero, and knowing the length of the connecting rod DE and the length of link 5, we call the procedure for the GA of the second kind. The use of the library routines of the kinematic calculation module makes it relatively simple to organize a simulation of the mechanism motion, to calculate the projections of the analogues of the velocities and accelerations of all links of the mechanism, and to build plans of the velocities and accelerations at each position of the mechanism.

  2. Prevalence of refractive errors in the Slovak population calculated using the Gullstrand schematic eye model.

    Science.gov (United States)

    Popov, I; Valašková, J; Štefaničková, J; Krásnik, V

    2017-01-01

    A substantial part of the population suffers from some kind of refractive error. It is envisaged that the prevalence of refractive errors may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was expressed as the required spectacle correction at a vertex distance of 12 mm. The necessary data were obtained using the optical biometer Lenstar LS900. Data which could not be obtained due to the limitations of the device were substituted by theoretical data from the Gullstrand schematic eye model. Only analyses from the right eyes are presented. The data were interpreted using descriptive statistics, Pearson correlation and t-tests. The statistical tests were conducted at a significance level of 5%. Our sample included 1663 patients (665 male, 998 female) within the age range of 19 to 96 years. The average age was 70.8 ± 9.53 years. The average refraction of the eye was 2.73 ± 2.13 D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute error from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). 89.06% of the sample was hyperopic, 6.61% myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2 D. Our results could be used in the future for comparing the prevalence of refractive errors using the same methods we used. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.

  3. Calculation of the P-T phase diagram of nitrogen using a mean field model

    Science.gov (United States)

    Enginer, Y.; Algul, G.; Yurtseven, H.

    2017-12-01

    The P-T phase diagram is calculated at low and moderate pressures by obtaining the phase line equations for the transitions considered in nitrogen using the Landau phenomenological model. For some transitions, a quadratic coupling between the order parameters is taken into account in the expansion of free energies in terms of the order parameters. A quadratic function in T and P is fitted to the experimental P-T data from the literature and the fitted parameters are determined. It is shown that the model studied here describes the observed data adequately, which can also be used to predict the thermodynamic properties of the phases of the molecular nitrogen within the temperatures and pressures of the P-T phase diagram of this system.
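
    The fitting step mentioned above can be sketched as follows, under the assumption that it amounts to fitting the transition pressure as a quadratic function of temperature along a phase line; the (T, P) points are hypothetical placeholders rather than the published nitrogen data.

```python
import numpy as np

# Minimal sketch of fitting a quadratic P(T) to experimental phase-line data.
# The data points below are hypothetical, not the published nitrogen data.
T = np.array([40.0, 50.0, 60.0, 70.0, 80.0])      # K (assumed)
P = np.array([0.05, 0.20, 0.55, 1.10, 1.90])      # GPa (assumed)

c2, c1, c0 = np.polyfit(T, P, deg=2)              # P(T) = c0 + c1*T + c2*T^2
print(f"P(T) ~ {c0:.3f} + {c1:.4f}*T + {c2:.6f}*T^2  (GPa, T in K)")
```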

  4. Model calculations of the age of firn air across the Antarctic continent

    Directory of Open Access Journals (Sweden)

    K. A. Kaspers

    2004-01-01

    Full Text Available The age of firn air in Antarctica at pore close-off depth is only known for a few specific sites where firn air has been sampled for analyses. We present a model that calculates the age of firn air at pore close-off depth for the entire Antarctic continent. The model basically uses four meteorological parameters as input (surface temperature, pressure, accumulation rate and wind speed). Using parameterisations for surface snow density, pore close-off density and tortuosity, in combination with a density-depth model and data of a regional atmospheric climate model, distribution of pore close-off depth for the entire Antarctic continent is determined. The deepest pore close-off depth was found for the East Antarctic Plateau near 72° E, 82° S, at 150±15 m (2σ). A firn air diffusion model was applied to calculate the age of CO2 at pore close-off depth. The results predict that the oldest firn gas (CO2) age is located between Dome Fuji, Dome Argos and Vostok at 43° E, 78° S, being 148±23 (1σ) or ±38 (2σ) years old. At this location an atmospheric trace gas record should be obtained. In this study we show that methyl chloride could be recorded with a predicted length of 125 years as an example for trace gas records at this location. The longest record currently available from firn air is derived at South Pole, being 80 years. Sensitivity tests reveal that the locations with old firn air (East Antarctic Plateau) have an estimated uncertainty (2σ) for the modelled CO2 age at pore close-off depth of 30% and of about 40% for locations with younger firn air (CO2 age typically 40 years). Comparing the modelled age of CO2 at pore close-off depth with directly determined ages at seven sites yielded a correlation coefficient of 0.90 and a slope close to 1, suggesting a high level of confidence for the modelled results in spite of considerable remaining uncertainties.

  5. Modelling of preheated regenerative chain in Cernavoda NPP using MMS calculation code

    International Nuclear Information System (INIS)

    Bigu, M.; Nita, I.; Prisecaru, I.; Dupleac, D.

    2005-01-01

    Full text: In this work the operation of the preheated regenerative chain of NPP Cernavoda was studied. To obtain this analysis, coupled analyses of the condensate system, the water supply system and the drain cooler system were performed. The analysis boundaries are: upstream, the steam condensers and the turbine bleed steam; downstream, the steam generators. The analysis was made in two steps: 1) obtaining the hydraulic characteristic of the pipe network from the steam condensers to the steam generators at nominal regime; this step was carried out with the hydraulic package PIPENET. 2) The actual thermal-hydraulic analyses were done based on the hydraulic characteristic of the pipe network and the supplementary data required for the heat transfer calculation in the equipment of the preheated regenerative chain. The thermal analyses were done using the MMS package and referred to normal operating regimes, namely the nominal operating regime required for calibration of the calculation model, the shutdown regime and the start-up regime from zero power hot to nominal power, and to abnormal operating regimes, namely turbine trip, reactor trip and loss of two condensate pumps. The results were compared with already existing analyses and showed the largest differences at the interface areas (i.e. 5%). This led us to the idea of extending the analysis to all secondary circuits in order to reduce the number of boundary conditions which can generate uncertainty in the analysis. In this analysis we obtained an advanced model of the preheated regenerative chain of the secondary circuit in Cernavoda NPP which could be extended to cover the whole secondary circuit by including the analysis of the steam generators, turbine, and steam condenser. (authors)

  6. Modelling of preheated regenerative chain in Cernavoda NPP using MMS calculation code

    International Nuclear Information System (INIS)

    Bigu, M.; Nita, I.; Prisecaru, I.; Dupleac, D.

    2005-01-01

    In this work the operation of the preheated regenerative chain of NPP Cernavoda was studied. To obtain this analysis, coupled analyses of the condensate system, the water supply system and the drain cooler system were performed. The analysis boundaries are: upstream, the steam condensers and the turbine bleed steam; downstream, the steam generators. The analysis was made in two steps: 1) obtaining the hydraulic characteristic of the pipe network from the steam condensers to the steam generators at nominal regime; this step was carried out with the hydraulic package PIPENET. 2) The actual thermal-hydraulic analyses were done based on the hydraulic characteristic of the pipe network and the supplementary data required for the heat transfer calculation in the equipment of the preheated regenerative chain. The thermal analyses were done using the MMS package and referred to normal operating regimes, namely the nominal operating regime required for calibration of the calculation model, the shutdown regime and the start-up regime from zero power hot to nominal power, and to abnormal operating regimes, namely turbine trip, reactor trip and loss of two condensate pumps. The results were compared with already existing analyses and showed the largest differences at the interface areas (i.e. 5%). This led us to the idea of extending the analysis to all secondary circuits in order to reduce the number of boundary conditions which can generate uncertainty in the analysis. In this analysis we obtained an advanced model of the preheated regenerative chain of the secondary circuit in Cernavoda NPP which could be extended to cover the whole secondary circuit by including the analysis of the steam generators, turbine, and steam condenser. (authors)

  7. Bus Operation Monitoring Oriented Public Transit Travel Index System and Calculation Models

    Directory of Open Access Journals (Sweden)

    Jiancheng Weng

    2013-01-01

    Full Text Available This study proposed a two-dimensional index system, concerned essentially with urban travel, based on travel modes and user satisfaction. First, public transit was taken as an example to describe the process of establishing the index system. In consideration of convenience, rapidity, reliability, comfort, and safety, a bus service evaluation index system was established. The indicators, which include the N-minute coverage of bus stops, average travel speed, fluctuation of travel time between stops, and bus load factor, intuitively describe the characteristics of public transport and were selected to calculate bus travel indexes. Then, combined with the basic indicators, the calculation models of the Convenience Index (CI), Rapid Index (RI), Reliability Index (RBI), and Comfort Index (CTI) were established based on multisource public transit data, including real-time bus GPS data and passenger IC card data. Finally, a case study of Beijing bus operation evaluation and analysis was conducted using real bus operation data, including GPS data and passenger transaction recorder (IC card) data. The results showed that the operation condition of the public transit was well reflected and scientifically classified by the bus travel index models.
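
    Purely as an illustration of how normalized service indicators can be folded into a composite travel index, here is a minimal sketch; the indicator values, normalization bounds and equal weights are hypothetical and much simpler than the CI/RI/RBI/CTI models developed in the paper.

```python
# Illustrative sketch only: one simple way to combine normalized bus-service
# indicators into a single travel index.  All values, bounds and weights
# below are hypothetical assumptions, not the paper's calibrated models.
def normalize(value, worst, best):
    """Map an indicator onto [0, 100], higher = better service."""
    score = (value - worst) / (best - worst) * 100.0
    return max(0.0, min(100.0, score))

indicators = {
    "coverage":    normalize(0.82, worst=0.0, best=1.0),    # N-minute stop coverage
    "speed":       normalize(18.0, worst=5.0, best=30.0),   # km/h average travel speed
    "reliability": normalize(0.25, worst=1.0, best=0.0),    # travel-time fluctuation
    "comfort":     normalize(0.75, worst=1.2, best=0.4),    # bus load factor
}
weights = {"coverage": 0.25, "speed": 0.25, "reliability": 0.25, "comfort": 0.25}

travel_index = sum(weights[k] * indicators[k] for k in indicators)
print(f"composite bus travel index: {travel_index:.1f} / 100")
```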

  8. Critical comparison of electrode models in density functional theory based quantum transport calculations.

    Science.gov (United States)

    Jacob, D; Palacios, J J

    2011-01-28

    We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of implementation details in both the cases is given. From the systematic study of nanocontacts made of representative metallic elements, we can conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments where the precise atomic structure of the electrodes is not relevant or defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to the ones obtained with quasi-one-dimensional electrodes for large enough cross-sections of these, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors which can be found in the quantum transport toolbox ALACANT and are publicly available.

  9. REVIEW OF ADVANCES IN COBB ANGLE CALCULATION AND IMAGE-BASED MODELLING TECHNIQUES FOR SPINAL DEFORMITIES

    Directory of Open Access Journals (Sweden)

    V. Giannoglou

    2016-06-01

    Full Text Available Scoliosis is a 3D deformity of the human spinal column caused by bending of the latter, which leads to pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The golden standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important research that has been done in the field of scoliosis concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely, X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for the last decade at least, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.
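
    For readers unfamiliar with the measurement itself, a minimal sketch of the Cobb angle as the angle between two endplate lines (each defined by two landmark points on the radiograph) is given below; the landmark coordinates are hypothetical, and the reviewed papers use far more elaborate automatic methods.

```python
import math

# Minimal sketch (assumed geometry, not the reviewed algorithms): the Cobb
# angle as the angle between the superior endplate of the upper end vertebra
# and the inferior endplate of the lower end vertebra, each endplate given by
# two landmark points located on the radiograph.
def cobb_angle(endplate_upper, endplate_lower):
    """Each endplate is ((x1, y1), (x2, y2)) in image coordinates; returns degrees."""
    def tilt(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    angle = math.degrees(abs(tilt(*endplate_upper) - tilt(*endplate_lower)))
    return min(angle, 180.0 - angle)   # angle between lines, not directed vectors

# Hypothetical landmark coordinates (pixels):
print(cobb_angle(((100, 210), (180, 190)), ((105, 420), (185, 455))))
```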

  10. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom.

    Science.gov (United States)

    Lesperance, Marielle; Inglis-Whalen, M; Thomson, R M

    2014-02-01

    To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with(125)I, (103)Pd, or (131)Cs seeds, and to investigate doses to ocular structures. An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20-30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%-10% and 13%-14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%-17% and 29%-34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%. In the full eye model

  11. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom

    International Nuclear Information System (INIS)

    Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.

    2014-01-01

    Purpose : To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125 I, 103 Pd, or 131 Cs seeds, and to investigate doses to ocular structures. Methods : An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results : Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up

  12. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca [Carleton Laboratory for Radiotherapy Physics, Department of Physics, Carleton University, Ottawa K1S 5B6 (Canada)

    2014-02-15

    Purpose : To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with{sup 125}I, {sup 103}Pd, or {sup 131}Cs seeds, and to investigate doses to ocular structures. Methods : An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results : Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model

  13. Analysis of turbulence models for thermohydraulic calculations of helium cooled fusion reactor components

    Energy Technology Data Exchange (ETDEWEB)

    Arbeiter, F. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany); Gordeev, S. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany)]. E-mail: gordeev@irs.fzk.de; Heinzel, V. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany); Slobodtchouk, V. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany)

    2006-02-15

    The aim of the present work is to choose an optimal use of CFD codes for the thermohydraulic calculation of helium-cooled fusion reactor components, such as the divertor module, the test blanket module and the International Fusion Materials Irradiation Facility (IFMIF) test modules. In spite of common features (intense heat flux, nuclear heating of the structure, helium cooling), all these components have different boundary conditions, such as helium temperature, pressure and heating rate, and different geometries. This is the reason for the appearance of flow effects that significantly influence the heat transfer. A number of turbulence models offered by the commercial STAR-CD code were tested against experiments carried out at Forschungszentrum Karlsruhe (FZK) and against experimental data from the scientific literature. The results of the different turbulence models are compared and analysed. For geometrically simple channel flows with significant gas property variation, low-Re number k-{epsilon} models with damping functions give more accurate results and are more appropriate for the conditions of the IFMIF HFTM. The heat transfer in regions with flow impingement is well predicted by turbulence models that include limiters in the turbulence production. The most reliable turbulence models were chosen for the thermohydraulic analysis.

  14. Non-LTE model calculations for SN 1987A and the extragalactic distance scale

    Science.gov (United States)

    Schmutz, W.; Abbott, D. C.; Russell, R. S.; Hamann, W.-R.; Wessolowski, U.

    1990-01-01

    This paper presents model atmospheres for the first week of SN 1987A, based on the luminosity and density/velocity structure from hydrodynamic models of Woosley (1988). The models account for line blanketing, expansion, sphericity, and departures from LTE in hydrogen and helium and differ from previously published efforts because they represent ab initio calculations, i.e., they contain essentially no free parameters. The formation of the UV spectrum is dominated by the effects of line blanketing. In the absorption troughs, the Balmer line profiles were fit well by these models, but the observed emissions are significantly stronger than predicted, perhaps due to clumping. The generally good agreement between the present synthetic spectra and observations provides independent support for the overall accuracy of the hydrodynamic models of Woosley. The question of the accuracy of the Baade-Wesselink method is addressed in a detailed discussion of its approximations. While the application of the standard method produces a distance within an uncertainty of 20 percent in the case of SN 1987A, systematic errors up to a factor of 2 are possible, particularly if the precursor was a red supergiant.

  15. Radiative forcing from aircraft emissions of NOx: model calculations with CH4 surface flux boundary condition

    Directory of Open Access Journals (Sweden)

    Giovanni Pitari

    2017-12-01

    Full Text Available Two independent chemistry-transport models with troposphere-stratosphere coupling are used to quantify the different components of the radiative forcing (RF) from aircraft emissions of NOx, i.e., the University of L'Aquila climate-chemistry model (ULAQ-CCM) and the University of Oslo chemistry-transport model (Oslo-CTM3). The tropospheric NOx enhancement due to aircraft emissions produces a short-term O3 increase with a positive RF (+17.3 mW/m2, as an average value of the two models). This is partly compensated by the CH4 decrease due to the OH enhancement (−9.4 mW/m2). The latter is a long-term response calculated using a surface CH4 flux boundary condition (FBC), with at least 50 years needed for the atmospheric CH4 to reach steady state. The radiative balance is also affected by the decreasing amount of CO2 produced at the end of the CH4 oxidation chain: an average CO2 accumulation change of −2.2 ppbv/yr is calculated on a 50 year time horizon (−1.6 mW/m2). The aviation perturbed amount of CH4 induces a long-term response of tropospheric O3 mostly due to less HO2 and CH3O2 being available for O3 production, compared with the reference case where a constant CH4 surface mixing ratio boundary condition is used (MBC) (−3.9 mW/m2). The CH4 decrease induces a long-term response of stratospheric H2O (−1.4 mW/m2). The latter finally perturbs HOx and NOx in the stratosphere, with a more efficient NOx cycle for mid-stratospheric O3 depletion and a decreased O3 production from HO2+NO in the lower stratosphere. This produces a long-term stratospheric O3 loss, with a negative RF (−1.2 mW/m2), compared with the CH4 MBC case. Other contributions to the net NOx RF are those due to NO2 absorption of UV-A and aerosol perturbations (the latter calculated only in the ULAQ-CCM). These comprise: increasing sulfate due to more efficient oxidation of SO2, increasing inorganic and organic nitrates and the net aerosols indirect effect on warm clouds

  16. Calculational model used in the analysis of nuclear performance of the Light Water Breeder Reactor (LWBR) (LWBR Development Program)

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.B. (ed.)

    1978-08-01

    The calculational model used in the analysis of LWBR nuclear performance is described. The model was used to analyze the as-built core and predict core nuclear performance prior to core operation. The qualification of the nuclear model using experiments and calculational standards is described. Features of the model include: an automated system of processing manufacturing data; an extensively analyzed nuclear data library; an accurate resonance integral calculation; space-energy corrections to infinite medium cross sections; an explicit three-dimensional diffusion-depletion calculation; a transport calculation for high energy neutrons; explicit accounting for fuel and moderator temperature feedback, clad diameter shrinkage, and fuel pellet growth; and an extensive testing program against experiments and a highly developed analytical standard.

  17. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
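
    A minimal sketch of the idea for the Cox case might look as follows, combining the equivalent two-sample translation described in the abstract (log hazard ratio equal to the slope times twice the standard deviation of the covariate) with a standard Schoenfeld-type event-count formula; the formula choice and the numbers are assumptions made for illustration, not the authors' exact derivation.

```python
from scipy.stats import norm

# Sketch under assumed standard formulas (not the paper's exact derivation):
# translate a Cox-regression power problem with a single continuous covariate
# into an equivalent two-group comparison whose log hazard ratio is
# theta = slope * 2 * sd(x), then apply the usual Schoenfeld-type event count
# for a 1:1 two-sample comparison.
def required_events(slope, sd_x, alpha=0.05, power=0.80):
    theta = slope * 2.0 * sd_x                      # equivalent two-sample log HR
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (0.25 * theta ** 2)   # 0.25 = 0.5 * 0.5 allocation

print(f"events needed ~ {required_events(slope=0.3, sd_x=1.0):.0f}")
```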

  18. Detection of Paroxysms in Long-Term, Single-Channel EEG-Monitoring of Patients with Typical Absence Seizures

    Science.gov (United States)

    Kjaer, Troels W.; Sorensen, Helge B. D.; Groenborg, Sabine; Pedersen, Charlotte R.

    2017-01-01

    Absence seizures are associated with generalized 2.5–5 Hz spike-wave discharges in the electroencephalogram (EEG). Rarely are patients, parents, or physicians aware of the duration or incidence of seizures. Six patients were monitored with a portable EEG device for four 24-h periods to evaluate how easily outpatients can be monitored and how well an automatic seizure detection algorithm can identify the absences. Based on patient-specific modeling, we achieved a sensitivity of 98.4% with only 0.23 false detections per hour. This yields a clinically satisfying performance with a positive predictive value of 87.1%. Portable EEG recorders identifying paroxysmal events in epilepsy outpatients are a promising tool for patients and physicians dealing with absence epilepsy. Despite the small size of the EEG device, some children still complained about its obtrusive nature. We aim at developing less obtrusive though still very efficient devices, e.g., hidden in the ear canal or below the skin. PMID:29018634
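
    The reported metrics relate to detection counts in the usual way; the sketch below only illustrates those definitions with hypothetical counts, not the study's actual detections.

```python
# Illustration of the performance metrics used above; all counts and the
# recording duration are hypothetical assumptions.
true_positives  = 50      # detected absence seizures (assumed)
false_negatives = 1       # missed seizures (assumed)
false_positives = 8       # spurious detections (assumed)
recording_hours = 48.0    # total recording time analysed (assumed)

sensitivity = true_positives / (true_positives + false_negatives)
fp_per_hour = false_positives / recording_hours
ppv         = true_positives / (true_positives + false_positives)

print(f"sensitivity = {sensitivity:.1%}, FP/h = {fp_per_hour:.2f}, PPV = {ppv:.1%}")
```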

  19. Revision of the documentation for a model for calculating effects of liquid waste disposal in deep saline aquifers

    Science.gov (United States)

    INTERA Environmental Consultants, Inc.

    1979-01-01

    The model developed under this contract is a modified version of the deep well disposal model developed by INTERCOMP Resource Development and Engineering, Inc., for the U.S. Geological Survey (A model for calculating effects of liquid waste disposal in deep saline aquifers). The model is a finite-difference numerical solution of the partial differential equations describing

  20. Sensitivity analysis on a dose-calculation model for the terrestrial food-chain pathway

    International Nuclear Information System (INIS)

    Abdel-Aal, M.M.

    1994-01-01

    Parameter uncertainty and sensitivity analyses were applied to the U.S. Nuclear Regulatory Commission's (NRC) Regulatory Guide 1.109 (1977) models for calculating the ingestion dose via the terrestrial food-chain pathway, in order to assess the transport of chronically released, low-level effluents from light-water reactors. In the analysis, we used the generation of Latin hypercube samples (LHS) and employed a constrained sampling scheme. The generation of these samples is based on information supplied to the LHS program for the variables or parameters. The sampled values are used to form vectors of variables that are commonly used as inputs to computer models for the purpose of sensitivity and uncertainty analysis. Regulatory models consider the concentrations of radionuclides that are deposited on plant tissues or lead to root uptake of nuclides initially deposited on soil. We also consider concentrations in milk and beef as a consequence of grazing on contaminated pasture or ingestion of contaminated feed by dairy and beef cattle. The radionuclides Sr-90 and Cs-137 were selected for evaluation. The most sensitive input parameters of the model were the ground-dispersion parameter, the release rates of radionuclides, and the soil-to-plant transfer coefficients of the radionuclides. (Author)
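
    A minimal sketch of the constrained (Latin hypercube) sampling step is given below, using SciPy's quasi-Monte Carlo module; the parameter names and ranges are hypothetical stand-ins for the Regulatory Guide 1.109 inputs.

```python
import numpy as np
from scipy.stats import qmc

# Minimal sketch of Latin hypercube sampling of model inputs for a
# sensitivity study; the parameter ranges below are hypothetical.
sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=100)               # 100 samples on [0, 1)^3

# Scale onto assumed parameter ranges: ground-dispersion factor, release
# rate, soil-to-plant transfer coefficient.
lower = np.array([1e-7, 1e-2, 1e-3])
upper = np.array([1e-5, 1e+1, 1e-1])
samples = qmc.scale(unit_samples, lower, upper)

print(samples[:3])   # each row is one input vector for the dose model
```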

  1. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    Science.gov (United States)

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
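
    The calibration step, fitting the regression parameters of an analytical model to transit-time ratios measured over notches of known depth, can be sketched as follows; both the model form and the data are hypothetical stand-ins for the article's actual analytical model and laboratory dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the calibration step with an assumed monotonic relationship
# between notch depth and transit-time ratio; model form and data are
# hypothetical.
def time_ratio_model(depth_mm, a, b):
    return 1.0 + a * depth_mm ** b

depth = np.array([5.0, 10.0, 15.0, 20.0, 25.0])        # notch depth, mm (assumed)
ratio = np.array([1.10, 1.22, 1.31, 1.43, 1.52])        # measured t/t0 (assumed)

popt, pcov = curve_fit(time_ratio_model, depth, ratio, p0=(0.05, 1.0))
perr = np.sqrt(np.diag(pcov))                            # 1-sigma dispersion
print("fitted parameters:", popt, "+/-", perr)
```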

  2. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements

    Directory of Open Access Journals (Sweden)

    Miguel A. Franesqui

    2017-08-01

    Full Text Available This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled “Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves” (Franesqui et al., 2017) [1].

  3. Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite

    Science.gov (United States)

    Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-12-01

    Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption on minerals (e.g. diatomite) is an important means of controlling aqueous pollution in the environment. Thus, it is essential to understand the surface adsorption behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on the calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and different ionic strengths (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) can be well described by Freundlich isotherm models. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using the PEST 13.0 together with PHREEQC 3.1.2 codes, and there is good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite, calculated with the PHREEQC 3.1.2 program, indicates that impurity cations (e.g. Al3+, Fe3+, etc.) in the diatomite play a leading role in the Pb(II) adsorption, and that the formation of surface complexes together with additional electrostatic interaction is the main adsorption mechanism of Pb(II) on the diatomite under weakly acidic conditions.
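
    As a small illustration of the Freundlich description mentioned in the abstract, the sketch below fits q = K_F · C^(1/n) to batch equilibrium data; the data points are hypothetical, and the surface complexation (DLM) modelling itself was done with PEST/PHREEQC, not with this script.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fitting the Freundlich isotherm q = K_F * C**(1/n) to
# hypothetical batch adsorption data.
def freundlich(C, K_F, n):
    return K_F * C ** (1.0 / n)

C_eq = np.array([2.0, 5.0, 10.0, 20.0, 40.0])      # mg/L at equilibrium (assumed)
q_eq = np.array([4.1, 6.4, 8.8, 12.3, 17.0])       # mg/g adsorbed (assumed)

(K_F, n), _ = curve_fit(freundlich, C_eq, q_eq, p0=(2.0, 2.0))
print(f"K_F = {K_F:.2f}, 1/n = {1.0 / n:.2f}")
```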

  4. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    International Nuclear Information System (INIS)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references
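
    The underlying mechanism, first-order error propagation through a model-parameter covariance matrix, can be written as sigma_f^2 = J C J^T; a minimal numerical sketch with a hypothetical Jacobian and covariance matrix follows.

```python
import numpy as np

# Sketch of standard first-order error propagation with a model-parameter
# covariance matrix.  The sensitivity (Jacobian) row and the covariance
# matrix below are hypothetical illustration values.
J = np.array([[0.8, -0.3, 0.1]])          # d(sigma)/d(parameter), one observable

C = np.array([[0.010,  0.004, 0.000],     # relative parameter covariances
              [0.004,  0.020, 0.002],
              [0.000,  0.002, 0.005]])

var_f = J @ C @ J.T                       # sigma_f^2 = J C J^T
print(f"propagated relative uncertainty: {np.sqrt(var_f[0, 0]):.3f}")
```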

  5. Development of an atmospheric diffusion numerical model for a nuclear facility. Numerical calculation method incorporating building effects

    International Nuclear Information System (INIS)

    Sada, Koichi; Michioka, Takenobu; Ichikawa, Yoichi

    2002-01-01

    Because effluent gas is sometimes released from low positions, viz., near the ground surface and around buildings, the effects of buildings within the site area are not negligible for gas diffusion predictions. For this reason, the effects of buildings on gas diffusion are considered under a terrain-following calculation coordinate system in this report. Numerical calculation meshes on the ground surface are treated as the building, with wall-function techniques adapted for the turbulent quantities in the flow calculations using a turbulence closure model. The reflection conditions of released particles on building surfaces are taken into consideration in the diffusion calculation using the Lagrangian particle model. The obtained flow and diffusion calculation results are compared with those of wind tunnel experiments around the building. It was apparent that features observed in the wind tunnel, viz., the formation of cavity regions behind the building and the gas diffusion to the ground surface behind the building, are also obtained by the numerical calculation. (author)

  6. Calculation and Error Analysis of a Digital Elevation Model of Hofsjokull, Iceland from SAR Interferometry

    Science.gov (United States)

    Barton, Jonathan S.; Hall, Dorothy K.; Sigurosson, Oddur; Williams, Richard S., Jr.; Smith, Laurence C.; Garvin, James B.

    1999-01-01

    Two ascending European Space Agency (ESA) Earth Resources Satellites (ERS)-1/-2 tandem-mode, synthetic aperture radar (SAR) pairs are used to calculate the surface elevation of Hofsjokull, an ice cap in central Iceland. The motion component of the interferometric phase is calculated using the 30 arc-second resolution USGS GTOPO30 global digital elevation product and one of the ERS tandem pairs. The topography is then derived by subtracting the motion component from the other tandem pair. In order to assess the accuracy of the resultant digital elevation model (DEM), a geodetic airborne laser-altimetry swath is compared with the elevations derived from the interferometry. The DEM is also compared with elevations derived from a digitized topographic map of the ice cap from the University of Iceland Science Institute. Results show that low temporal correlation is a significant problem for the application of interferometry to small, low-elevation ice caps, even over a one-day repeat interval, and especially at the higher elevations. Results also show that an uncompensated error in the phase, ramping from northwest to southeast, present after tying the DEM to ground-control points, has resulted in a systematic error across the DEM.

  7. Model calculation of positron states in tungsten containing hydrogen and helium

    International Nuclear Information System (INIS)

    Troev, T; Nankov, N; Yoshiie, T; Popov, E

    2010-01-01

    Tungsten is a candidate material for the plasma-facing first wall of a fusion power plant. Understanding the behaviour of defects, tritium and helium in plasma-facing materials (PFM) is an important issue for fusion reactors from the viewpoint of their mechanical properties under neutron irradiation. Experiments with high-Z materials show that erosion of these materials under normal operating conditions is considerably lower than the plasma-induced erosion of low-Z materials like carbon or beryllium. Quantitative understanding of the experimental results for defects in tungsten needs a comprehensive theory of electron-positron interaction. The properties of defects in tungsten containing hydrogen or helium atoms have been investigated by model positron-lifetime quantum-mechanical calculations. The electron wave functions have been obtained in the local density approximation (LDA) to density functional theory (DFT). On the basis of the calculated results, the behaviour of vacancies, empty nano-voids and nano-voids containing hydrogen and helium is discussed. It was established that hydrogen and helium in larger three-dimensional vacancy clusters in W change the annihilation characteristics dramatically. The hydrogen and helium atoms are trapped by lattice vacancies. These results provide physical insight into positron interactions with defects in tungsten and can be used for the prediction of hydrogen (H), helium (He-4) and tritium (H-3) generation for the design of fusion reactors.

  8. Current-Transport Mechanisms in the AlInN/AlN/GaN single-channel and AlInN/AlN/GaN/AlN/GaN double-channel heterostructures

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Engin, E-mail: engina@bilkent.edu.tr [Nanotechnology Research Center, Department of Physics, Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, 06800 Ankara (Turkey); Turan, Sevil; Gökden, Sibel; Teke, Ali [Department of Physics, Faculty of Science and Letters, Balıkesir University, Çağış Kampüsü, 10145 Balıkesir (Turkey); Özbay, Ekmel [Nanotechnology Research Center, Department of Physics, Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, 06800 Ankara (Turkey)

    2013-12-02

    Current-transport mechanisms were investigated in Schottky contacts on AlInN/AlN/GaN single channel (SC) and AlInN/AlN/GaN/AlN/GaN double channel (DC) heterostructures. A simple model was adapted to the current-transport mechanisms in the DC heterostructure. In this model, two Schottky diodes are in series: one is a metal–semiconductor barrier layer (AlInN) Schottky diode and the other is an equivalent Schottky diode, which is due to the heterojunction between the AlN and GaN layers. Capacitance–voltage studies show the formation of a two-dimensional electron gas at the AlN/GaN interface in the SC and at the first AlN/GaN interface from the substrate direction in the DC. In order to determine the current mechanisms for the SC and DC heterostructures, we fit the analytical expressions given for the tunneling current to the experimental current–voltage data over a wide range of applied biases as well as at different temperatures. We observed a weak temperature dependence of the saturation current and a fairly small dependence on temperature of the tunneling parameters in this temperature range. At both low and medium forward-bias voltage values for Schottky contacts on AlInN/AlN/GaN/AlN/GaN DC and AlInN/AlN/GaN SC heterostructures, the data are consistent with electron tunneling to deep levels in the vicinity of mixed/screw dislocations in the temperature range of 80–420 K. - Highlights: • Current mechanisms were investigated on single and double channel heterostructures. • A model was adapted to the current mechanisms in double channel heterostructures. • We observed a weak temperature dependence of the saturation current. • And a small dependence of the tunneling parameters in this temperature range.

  9. Current-Transport Mechanisms in the AlInN/AlN/GaN single-channel and AlInN/AlN/GaN/AlN/GaN double-channel heterostructures

    International Nuclear Information System (INIS)

    Arslan, Engin; Turan, Sevil; Gökden, Sibel; Teke, Ali; Özbay, Ekmel

    2013-01-01

    Current-transport mechanisms were investigated in Schottky contacts on AlInN/AlN/GaN single channel (SC) and AlInN/AlN/GaN/AlN/GaN double channel (DC) heterostructures. A simple model was adapted to the current-transport mechanisms in the DC heterostructure. In this model, two Schottky diodes are in series: one is a metal–semiconductor barrier layer (AlInN) Schottky diode and the other is an equivalent Schottky diode, which is due to the heterojunction between the AlN and GaN layers. Capacitance–voltage studies show the formation of a two-dimensional electron gas at the AlN/GaN interface in the SC and at the first AlN/GaN interface from the substrate direction in the DC. In order to determine the current mechanisms for the SC and DC heterostructures, we fit the analytical expressions given for the tunneling current to the experimental current–voltage data over a wide range of applied biases as well as at different temperatures. We observed a weak temperature dependence of the saturation current and a fairly small dependence on temperature of the tunneling parameters in this temperature range. At both low and medium forward-bias voltage values for Schottky contacts on AlInN/AlN/GaN/AlN/GaN DC and AlInN/AlN/GaN SC heterostructures, the data are consistent with electron tunneling to deep levels in the vicinity of mixed/screw dislocations in the temperature range of 80–420 K. - Highlights: • Current mechanisms were investigated on single and double channel heterostructures. • A model was adapted to the current mechanisms in double channel heterostructures. • We observed a weak temperature dependence of the saturation current. • And a small dependence of the tunneling parameters in this temperature range

  10. The model for calculation of emission and immission of air pollutants from vehicles with internal combustion engine

    International Nuclear Information System (INIS)

    Tashevski, Done; Dimitrovski, Mile

    1994-01-01

    A model has been developed for the calculation of the emission and immission of air pollutants from vehicles with internal combustion engines at crossroads in urban environments, with the substitution of a great number of exhaust pipes by one chimney in the centre of the crossroad. The whole calculation for the pollution sources mentioned above is, in fact, a calculation of the emission and immission of pollutants from point sources of pollution. (author)

  11. Generic models of deep formation water calculated with PHREEQC using the "gebo"-database

    Science.gov (United States)

    Bozau, E.; van Berk, W.

    2012-04-01

    To identify processes during the use of formation waters for geothermal energy production an extended hydrogeochemical thermodynamic database (named "gebo"-database) for the well known and commonly used software PHREEQC has been developed by collecting and inserting data from literature. The following solution master species: Fe(+2), Fe(+3), S(-2), C(-4), Si, Zn, Pb, and Al are added to the database "pitzer.dat" which is provided with the code PHREEQC. According to the solution master species the necessary solution species and phases (solid phases and gases) are implemented. Furthermore, temperature and pressure adaptations of the mass action law constants, Pitzer parameters for the calculation of activity coefficients in waters of high ionic strength and solubility equilibria among gaseous and aqueous species of CO2, methane, and hydrogen sulphide are implemented into the "gebo"-database. Combined with the "gebo"-database the code PHREEQC can be used to test the behaviour of highly concentrated solutions (e.g. formation waters, brines). Chemical changes caused by temperature and pressure gradients as well as the exposure of the water to the atmosphere and technical equipments can be modelled. To check the plausibility of additional and adapted data/parameters experimental solubility data from literature (e.g. sulfate and carbonate minerals) are compared to modelled mineral solubilities at elevated levels of Total Dissolved Solids (TDS), temperature, and pressure. First results show good matches between modelled and experimental mineral solubility for barite, celestite, anhydrite, and calcite in high TDS waters indicating the plausibility of additional and adapted data and parameters. Furthermore, chemical parameters of geothermal wells in the North German Basin are used to test the "gebo"-database. The analysed water composition (starting with the main cations and anions) is calculated by thermodynamic equilibrium reactions of pure water with the minerals found in

  12. Cooling load calculation by the radiant time series method - effect of solar radiation models

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Alexandre M.S. [Universidade Estadual de Maringa (UEM), PR (Brazil)], E-mail: amscosta@uem.br

    2010-07-01

    In this work the effect of three different solar radiation models on the cooling load calculated by the radiant time series method was analyzed numerically. The solar radiation models implemented were clear sky, isotropic sky and anisotropic sky. The radiant time series (RTS) method was proposed by ASHRAE (2001) to replace the classical methods of cooling load calculation, such as TETD/TA. The method is based on computing the effect of space thermal energy storage on the instantaneous cooling load. The computation is carried out by splitting the heat gain components into convective and radiant parts. The radiant part is then transformed using a time series whose coefficients are a function of the construction type and the type of heat gain (solar or non-solar). The transformed result is added to the convective part, giving the instantaneous cooling load. The method was applied to investigate the influence of the solar radiation models for an example room. The location used was 23 degrees S and 51 degrees W and the day was 21 January, a typical summer day in the southern hemisphere. The room was composed of two vertical walls with windows exposed to outdoors, with azimuth angles equal to the west and east directions. The output of the different solar radiation models for the two walls, in terms of direct and diffuse components as well as heat gains, was investigated. It was verified that the clear-sky model exhibited the least conservative (highest) values for the direct component of solar radiation, with the opposite trend for the diffuse component. For the heat gain, the clear-sky model gives the highest values, three times higher at the peak hours than the other models. Both the isotropic and anisotropic models predicted similar magnitudes for the heat gain. The same behavior was also verified for the cooling load. The effect of the room thermal inertia was to decrease the cooling load during the peak hours. On the other hand, the higher the thermal inertia, the greater the cooling load during non-peak hours. The effect
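
    A minimal sketch of the RTS step described above is given below: the radiant fraction of each hourly heat gain is redistributed over the following 24 hours with radiant time factors and added to the convective fraction. The gains, radiant fraction and time factors are hypothetical, not ASHRAE-tabulated values.

```python
import numpy as np

# Minimal sketch of the radiant time series (RTS) step; all inputs below are
# hypothetical, not ASHRAE-tabulated values.
heat_gain = np.array([0.0]*7 + [400, 800, 1200, 1500, 1600, 1500,
                                1300, 1000, 700, 400, 200] + [0.0]*6)   # W, 24 h
radiant_fraction = 0.6
rtf = np.array([0.20, 0.15, 0.12, 0.09, 0.08, 0.06, 0.05, 0.05, 0.04,
                0.03, 0.03, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01,
                0.00, 0.00, 0.00, 0.00, 0.00, 0.00])    # radiant time factors, sum = 1

convective = (1.0 - radiant_fraction) * heat_gain
radiant    = radiant_fraction * heat_gain

cooling_load = np.empty(24)
for hour in range(24):
    # current and previous 23 hours of radiant gain, weighted by the RTS
    cooling_load[hour] = convective[hour] + sum(
        rtf[j] * radiant[(hour - j) % 24] for j in range(24))

print(np.round(cooling_load))
```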

  13. Programs OPTMAN and SHEMMAN Version 6 (1999) - Coupled-Channels optical model and collective nuclear structure calculation -

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jong Hwa; Lee, Jeong Yeon; Lee, Young Ouk; Sukhovitski, Efrem Sh. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-01-01

    Programs SHEMMAN and OPTMAN (Version 6) have been developed for the determination of nuclear Hamiltonian parameters and for optical model calculations, respectively. The optical model calculations by OPTMAN, with coupling schemes built on the wave functions of a non-axial soft rotator, are self-consistent, since the parameters of the nuclear Hamiltonian are determined by adjusting the energies of collective levels to experimental values with SHEMMAN prior to the optical model calculation. The programs have been installed at the Nuclear Data Evaluation Laboratory of KAERI. This report is intended as a brief manual for these codes. 43 refs., 9 figs., 1 tabs. (Author)

  14. Neutronics model of the bulk shielding reactor (BSR): validation by comparison of calculations with the experimental measurements

    International Nuclear Information System (INIS)

    Johnson, J.O.; Miller, L.F.; Kam, F.B.K.

    1981-05-01

    A neutronics model for the Oak Ridge National Laboratory Bulk Shielding Reactor (ORNL-BSR) was developed and verified by experimental measurements. A cross-section library was generated from the 218-group Master Library using the AMPX Block Code system. A series of one-, two-, and three-dimensional neutronics calculations were performed utilizing both transport and diffusion theory. A spectral comparison was made with the 58Ni(n,p) reaction. The results of the comparison between the calculational model and other experimental measurements showed agreement within 10%, and therefore the model was determined to be adequate for calculating the neutron fluence for future irradiation experiments in the ORNL-BSR

  15. An Exploration of Wind Stress Calculation Techniques in Hurricane Storm Surge Modeling

    Directory of Open Access Journals (Sweden)

    Kyra M. Bryant

    2016-09-01

    Full Text Available As hurricanes continue to threaten coastal communities, accurate storm surge forecasting remains a global priority. Achieving a reliable storm surge prediction necessitates accurate hurricane intensity and wind field information. The wind field must be converted to wind stress, which represents the air-sea momentum flux component required in storm surge and other oceanic models. This conversion requires a multiplicative drag coefficient for the air density and wind speed to represent the air-sea momentum exchange at a given location. Air density is a known parameter and wind speed is a forecasted variable, whereas the drag coefficient is calculated using an empirical correlation. The correlation’s accuracy has brewed a controversy of its own for more than half a century. This review paper examines the lineage of drag coefficient correlations and their acceptance among scientists.
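
    As an illustration of the conversion the review discusses, the sketch below computes wind stress with one widely cited neutral drag-coefficient correlation (the Large and Pond 1981 form); choosing this particular correlation, and extrapolating it to hurricane-force winds, are assumptions made only for the example.

```python
# Sketch of the wind-to-stress conversion tau = rho_air * C_d * U10^2, using
# the Large & Pond (1981) neutral drag coefficient as one example correlation;
# the review compares many such correlations and this choice is an assumption.
RHO_AIR = 1.2  # kg/m^3 (assumed air density)

def drag_coefficient(u10):
    """Neutral 10-m drag coefficient, Large & Pond (1981) form."""
    if u10 < 11.0:
        return 1.2e-3
    return (0.49 + 0.065 * u10) * 1e-3   # extrapolated beyond 25 m/s for illustration

def wind_stress(u10):
    """Wind stress magnitude in N/m^2."""
    return RHO_AIR * drag_coefficient(u10) * u10 ** 2

for u in (10.0, 25.0, 40.0):   # m/s, from tropical-storm to hurricane force
    print(f"U10 = {u:4.1f} m/s  ->  tau = {wind_stress(u):6.2f} N/m^2")
```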

  16. Optical model calculation for the unresolved/resolved resonance region of Fe-56

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Froehner, F.H.

    1997-03-01

    We have studied optical model fits to total neutron cross sections of structural materials using the accurate data base for {sup 56}Fe existing in the resolved and unresolved resonance region. Averages over resolved resonances were calculated with Lorentzian weighting in Reich-Moore (reduced R matrix) approximation. Starting from the best available optical potentials we found that adjustment of the real and imaginary well depths does not work satisfactorily with the conventional weak linear energy dependence of the well depths. If, however, the linear dependences are modified towards low energies, the average total cross sections can be fitted quite well, from the resolved resonance region up to 20 MeV and higher. (author)

  17. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT 4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.

  18. New methodologies for calculation of flight parameters on reduced scale wings models in wind tunnel =

    Science.gov (United States)

    Ben Mosbah, Abdallah

    In order to improve the quality of wind tunnel tests and the tools used to perform aerodynamic tests on aircraft wings in the wind tunnel, new methodologies were developed and tested on rigid and flexible wing models. The flexible wing concept consists of replacing a portion (lower and/or upper) of the skin with another, flexible portion whose shape can be changed using an actuation system installed inside the wing. The main purpose of this concept is to improve the aerodynamic performance of the aircraft, and especially to reduce the fuel consumption of the airplane. Numerical and experimental analyses were conducted to develop and test the methodologies proposed in this thesis. To control the flow inside the test sections of the Price-Paidoussis wind tunnel of LARCASE, numerical and experimental analyses were performed. Computational fluid dynamics calculations were made in order to obtain a database used to develop a new hybrid methodology for wind tunnel calibration. This approach allows controlling the flow in the test section of the Price-Paidoussis wind tunnel. For the fast determination of aerodynamic parameters, new hybrid methodologies were proposed. These methodologies were used to control flight parameters by the calculation of the drag, lift and pitching moment coefficients and by the calculation of the pressure distribution around an airfoil. These aerodynamic coefficients were calculated from known airflow conditions such as the angle of attack, the Mach number and the Reynolds number. In order to modify the shape of the wing skin, electric actuators were installed inside the wing to obtain the desired shape. These deformations provide optimal profiles according to different flight conditions in order to reduce the fuel consumption. A controller based on neural networks was implemented to obtain the desired actuator displacements. A metaheuristic algorithm was used in hybridization with neural networks, and support vector machine approaches and their

  19. Electromagnetic field modeling and ion optics calculations for a continuous-flow AMS system

    Science.gov (United States)

    Han, B. X.; von Reden, K. F.; Roberts, M. L.; Schneider, R. J.; Hayes, J. M.; Jenkins, W. J.

    2007-06-01

    A continuous-flow 14C AMS (CFAMS) system is under construction at the NOSAMS facility. This system is based on a NEC Model 1.5SDH-1 0.5 MV Pelletron accelerator and will utilize a combination of a microwave ion source (MIS) and a charge exchange canal (CXC) to produce negative carbon ions from a continuously flowing stream of CO2 gas. For high-efficiency transmission of the large emittance, large energy-spread beam from the ion source unit, a large-acceptance and energy-achromatic injector consisting of a 45° electrostatic spherical analyzer (ESA) and a 90° double-focusing magnet has been designed. The 45° ESA is rotatable to accommodate a 134-sample MC-SNICS as a second ion source. The high-energy achromat (90° double focusing magnet and 90° ESA) has also been customized for large acceptance. Electromagnetic field modeling and ion optics calculations of the beamline were done with Infolytica MagNet, ElecNet, and Trajectory Evaluator. PBGUNS and SIMION were used for the modeling of ion source unit.

  20. Mathematical model of whole-process calculation for bottom-blowing copper smelting

    Science.gov (United States)

    Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Li, He-song

    2017-11-01

    The distribution law of materials among smelting products is key to cost accounting and contaminant control. However, this distribution law is difficult to determine quickly and accurately by mere sampling and analysis. Mathematical models for material and heat balance in bottom-blowing smelting, converting, anode furnace refining, and electrolytic refining were established based on the principles of material (element) conservation, energy conservation, and control index constraints in copper bottom-blowing smelting. A simulation of the entire bottom-blowing copper smelting process was established using the self-developed MetCal software platform. A whole-process simulation for an enterprise in China was then conducted. The results indicated that the quantity and composition of unknown materials, as well as the heat balance information, can be quickly calculated using the model. Comparison with production data revealed that the model can basically reflect the distribution law of the materials in bottom-blowing copper smelting. This finding provides theoretical guidance for mastering the performance of the entire process.