Energy Technology Data Exchange (ETDEWEB)
Calloo, A.; Vidal, J.F.; Le Tellier, R.; Rimpault, G., E-mail: ansar.calloo@cea.fr, E-mail: jean-francois.vidal@cea.fr, E-mail: romain.le-tellier@cea.fr, E-mail: gerald.rimpault@cea.fr [CEA, DEN, DER/SPRC/LEPh, Saint-Paul-lez-Durance (France)]
2011-07-01
This paper deals with solving the multigroup integro-differential form of the transport equation for fine energy group structures. In that case, multigroup transfer cross sections display strongly peaked shapes for light scatterers, and the usual Legendre polynomial expansion is not well suited to represent them. Furthermore, even with an exact representation of the scattering cross sections, the scattering source in the discrete ordinates method (also known as the Sn method) may be wrongly computed, because it is calculated by sampling the angular flux at given directions and thus lacks angular support for the angular flux. Hence, following the work of Gerts and Matthews, an angular finite volume solver has been developed for 2D Cartesian geometries. It integrates the multigroup transport equation over discrete volume elements obtained by meshing the unit sphere with a product grid over the polar and azimuthal coordinates, and considers the integrated flux per solid angle element. The convergence of this method has been compared to that of the Sn method for a highly anisotropic benchmark. In addition, piecewise-average scattering cross sections have been produced for non-bound hydrogen atoms using a free gas model for thermal neutrons. LWR lattice calculations are carried out comparing Legendre representations of the hydrogen scattering multigroup cross section at various orders with piecewise-average cross sections for this same atom (while keeping a Legendre representation for all other isotopes). (author)
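The product-grid meshing of the unit sphere described above is easy to sketch numerically. The snippet below is an illustrative reconstruction (not the authors' solver): it computes the exact solid angle of each polar-band × azimuthal-sector cell, dΩ = Δφ · (cos θ₁ − cos θ₂), and checks that the cells tile the full 4π steradians, the property a finite volume scheme in angle relies on.

```python
import math

def solid_angle_elements(n_polar, n_azim):
    """Mesh the unit sphere with a product grid in (theta, phi) and
    return the exact solid angle of every cell:
    dOmega = (phi2 - phi1) * (cos(theta1) - cos(theta2))."""
    d_phi = 2.0 * math.pi / n_azim
    elements = []
    for i in range(n_polar):
        t1 = math.pi * i / n_polar
        t2 = math.pi * (i + 1) / n_polar
        band = math.cos(t1) - math.cos(t2)   # weight of this polar band
        for _ in range(n_azim):
            elements.append(d_phi * band)
    return elements

cells = solid_angle_elements(8, 16)
total = sum(cells)   # the cells should tile the sphere: 4*pi exactly
```

Because each cell's solid angle is exact (not a quadrature weight at a sampled direction), the integrated flux per solid angle element avoids the angular-support problem the abstract attributes to the Sn method.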
Henault, M.; Wattieaux, G.; Lecas, T.; Renouard, J. P.; Boufendi, L.
2016-02-01
Nanoparticles growing or injected in a low-pressure cold plasma generated by a radiofrequency capacitively coupled discharge induce strong modifications in the electrical parameters of both the plasma and the discharge. In this paper, a non-intrusive method based on the measurement of the plasma impedance is used to determine the volume-averaged electron density and the effective power coupled to the plasma bulk. Good agreement is found when the results are compared to those given by other well-established methods.
Patil, Vishal; Liburdy, James
2012-11-01
Turbulent porous media flows are encountered in catalytic bed reactors and heat exchangers. Dispersion and mixing properties of these flows play an essential role in efficiency and performance. In an effort to understand these flows, pore scale time resolved PIV measurements in a refractive index matched porous bed were made. Pore Reynolds numbers, based on hydraulic diameter and pore average velocity, were varied from 400-4000. Jet-like flows and recirculation regions associated with large scale structures were found to exist. Coherent vortical structures which convect at approximately 0.8 times the pore average velocity were identified. These different flow regions exhibited different turbulent characteristics and hence contributed unequally to global transport properties of the bed. The heterogeneity present within a pore and also from pore to pore can be accounted for in estimating transport properties using the method of volume averaging. Eddy viscosity maps and mean velocity field maps, both obtained from PIV measurements, along with the method of volume averaging were used to predict the dispersion tensor versus Reynolds number. Asymptotic values of dispersion compare well to existing correlations. The role of molecular diffusion was explored by varying the Schmidt number and molecular diffusion was found to play an important role in tracer transport, especially in recirculation regions. Funding by NSF grant 0933857, Particulate and Multiphase Processing.
International Nuclear Information System (INIS)
Barraclough, B; Li, J; Liu, C; Yan, G
2014-01-01
Purpose: Fourier-based deconvolution approaches used to eliminate the ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite-size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle³), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and the measured profiles was evaluated with a one-dimensional Gamma analysis (1%/1 mm) as an objective function for optimization. TPS beam model parameters for the focal and extra-focal sources were optimized and loaded back into the TPS for a new calculation. This process was repeated until the objective function converged, using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source reduced from 0.20/0.16 cm to 0.01/0.01 cm, the minimal allowed values in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3 mm criterion and from 82.6% to 97.2% with a 2%/2 mm criterion. Conclusion: We propose a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA.
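The convolution step at the core of this approach can be sketched as follows. The profile data, step size, and σ below are made-up illustrations (a logistic penumbra sampled every 0.5 mm), not the paper's measurements; the point is that blurring a calculated profile with a Gaussian of σ equal to the chamber radius reproduces the penumbra broadening a finite-size chamber would measure.

```python
import numpy as np

def gaussian_kernel(sigma_mm, step_mm):
    """Discrete Gaussian, truncated at +/- 3 sigma, normalized to unit area."""
    half = int(np.ceil(3 * sigma_mm / step_mm))
    x = np.arange(-half, half + 1) * step_mm
    k = np.exp(-0.5 * (x / sigma_mm) ** 2)
    return k / k.sum()

def convolve_profile(profile, sigma_mm, step_mm):
    """Mimic ion-chamber volume averaging by convolving a calculated
    beam profile with a Gaussian DRF (sigma = chamber radius)."""
    return np.convolve(profile, gaussian_kernel(sigma_mm, step_mm), mode="same")

# idealized penumbra, 0.5 mm sampling (illustrative stand-in for TPS output)
x = np.arange(-50.0, 50.5, 0.5)
profile = 1.0 / (1.0 + np.exp(x / 1.5))
blurred = convolve_profile(profile, sigma_mm=3.0, step_mm=0.5)
```

Since the convolved calculation and the chamber measurement are then subject to the same averaging, comparing them (e.g., with a 1%/1 mm Gamma criterion, as in the abstract) drives the optimization toward the unblurred "real" profile.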
Energy Technology Data Exchange (ETDEWEB)
Barraclough, Brendan; Lebron, Sharon [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 and J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States); Li, Jonathan G.; Fan, Qiyong; Liu, Chihray; Yan, Guanghua, E-mail: yangua@shands.ufl.edu [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 (United States)
2016-05-15
Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit "real" ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry-dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. The accuracy of the beam models was evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers, with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry-dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for the other combinations; with geometry-independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to
Cosmological measure with volume averaging and the vacuum energy problem
Astashenok, Artyom V.; del Popolo, Antonino
2012-04-01
In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.
Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.
Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D
2018-04-19
The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013 to June 2016. The ANN was constructed using temporal factors (time, day of week) and weather factors (daily high, active precipitation) to predict four measures of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross-validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 +/- 0.002; validation r = 0.8899 +/- 0.005; testing r = 0.8940 +/- 0.006). We were able to successfully predict trauma volume, emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level III. STUDY TYPE: Prognostic/Epidemiological.
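A network of the shape described (a handful of inputs, 10 sigmoid hidden units, one output) can be sketched in a few lines of NumPy. The data below are synthetic stand-ins for the temporal/weather inputs and daily trauma counts, and plain gradient descent replaces the Levenberg-Marquardt algorithm the authors used; everything here is illustrative, not the TRACS model.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins: 3 input features, 1 target (e.g., daily count)
X = rng.random((200, 3))
y = (2.0 * X[:, 0] + X[:, 1] - X[:, 2]).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# two-layer feed-forward net: 3 inputs -> 10 sigmoid hidden -> 1 linear output
W1 = rng.normal(scale=0.5, size=(3, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=(10, 1)); b2 = np.zeros(1)

losses, lr = [], 0.3
for _ in range(500):                      # plain gradient descent stands in
    H = sigmoid(X @ W1 + b1)              # for Levenberg-Marquardt here
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)     # backprop through the sigmoid layer
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

The study's k-fold cross-validation over 100 random partitions would wrap this training loop, holding out a fold each time and reporting the correlation between predicted and actual daily activity.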
Derivation of a volume-averaged neutron diffusion equation; Atomos para el desarrollo de Mexico
Energy Technology Data Exchange (ETDEWEB)
Vazquez R, R.; Espinosa P, G. [UAM-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Morales S, Jaime B. [UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, Jiutepec, Morelos 62550 (Mexico)]. e-mail: rvr@xanum.uam.mx
2008-07-01
This paper presents a general theoretical analysis of the problem of neutron motion in a nuclear reactor, where large variations in neutron cross sections normally preclude the use of the classical neutron diffusion equation. A volume-averaged neutron diffusion equation is derived which includes correction terms for diffusion and nuclear reaction effects. A method is presented to determine closure relationships for the volume-averaged neutron diffusion equation (e.g., effective neutron diffusivity). In order to describe the distribution of neutrons in a highly heterogeneous configuration, it was necessary to extend the classical neutron diffusion equation. Thus, the volume-averaged diffusion equation includes two correction factors: the first is related to the neutron absorption process, and the second is a contribution to neutron diffusion; both are related to neutron effects at the interfaces of a heterogeneous configuration. (Author)
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...
An alternative scheme of the Bogolyubov's average method
International Nuclear Information System (INIS)
Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.
1990-01-01
In this paper the average energy and magnetic moment conservation laws of the drift theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and the averaging is performed afterwards. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improving the sensitivity of "out of lab" portable capillary electrophoresis measurements. Nowadays, many signal enhancement methods are (i) underused (non-optimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed according to the migration time of the analyte. The implemented migration velocity-adaptive moving average method improved the signal-to-noise ratio by up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
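The core idea, a smoothing window that widens with migration time, can be sketched simply. Early, fast-migrating analytes produce narrow (high-frequency) peaks and get light smoothing; late, slow analytes produce broad peaks that tolerate wider windows. The window-growth law and all signal parameters below are illustrative assumptions, not the published algorithm's tuning.

```python
import numpy as np

def adaptive_moving_average(signal, base_window, growth):
    """Moving average whose window widens linearly with sample index,
    a stand-in for adapting the window to analyte migration velocity.
    `base_window` (samples) and `growth` (samples per sample) are
    illustrative tuning parameters."""
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        half = int(base_window + growth * i) // 2
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out

# synthetic electropherogram: a narrow early peak and a broad late peak
t = np.linspace(0.0, 60.0, 1500)
rng = np.random.default_rng(1)
clean = np.exp(-0.5 * ((t - 10.0) / 0.3) ** 2) + np.exp(-0.5 * ((t - 45.0) / 1.5) ** 2)
noisy = clean + rng.normal(scale=0.05, size=t.size)
smooth = adaptive_moving_average(noisy, base_window=3, growth=0.02)
```

A fixed-width window would either under-smooth the late peak or flatten the early one; letting the window grow with migration time sidesteps that trade-off, which is the "optimal averaging window size" claim in the abstract.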
Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes
2014-08-01
The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesized a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and a lower average volume per swallow. There was a significant, moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.
Measurement of average density and relative volumes in a dispersed two-phase fluid
Sreepada, Sastry R.; Rippel, Robert R.
1992-01-01
An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density, and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.
The effect of temperature on the average volume of Barkhausen jump on Q235 carbon steel
Guo, Lei; Shu, Di; Yin, Liang; Chen, Juan; Qi, Xin
2016-06-01
On the basis of the average volume of Barkhausen jump (AVBJ) v̄ generated by irreversible displacement of magnetic domain walls under an incentive magnetic field applied to ferromagnetic materials, the functional relationship between saturation magnetization Ms and temperature T is employed in this paper to deduce an explicit mathematical expression relating the AVBJ v̄, stress σ, incentive magnetic field H and temperature T. The variation of the AVBJ v̄ with temperature T is then derived from this expression. Moreover, tensile and compressive stress experiments are carried out on Q235 carbon steel specimens at different temperatures to verify the theory. This paper offers a theoretical basis for solving the temperature compensation problem of the Barkhausen testing method.
Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging
International Nuclear Information System (INIS)
Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.
1993-01-01
Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for the identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images.
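The Gram-Schmidt construction behind such a linear transformation can be sketched as follows. Given per-channel signature vectors for the desired and interfering features (the three-channel numbers below are hypothetical, not from the paper), orthogonalizing the desired signature against the interfering subspace yields weights that zero the interferers while keeping unit response to the desired feature, so a mixed voxel maps to its desired-feature fraction.

```python
import numpy as np

def suppression_weights(desired, interfering):
    """Linear weights w over MR channels such that w . desired = 1 and
    w . v = 0 for every interfering signature v. Built by projecting the
    desired signature onto the orthogonal complement of the interfering
    subspace (QR is a numerically stable Gram-Schmidt)."""
    Q, _ = np.linalg.qr(np.column_stack(interfering))
    w = desired - Q @ (Q.T @ desired)   # remove interfering components
    return w / (w @ desired)            # normalize: unit desired response

# hypothetical three-channel signatures (e.g., T1-, T2-, PD-weighted values)
desired = np.array([1.0, 0.2, 0.5])
interf = [np.array([0.3, 1.0, 0.1]), np.array([0.2, 0.1, 1.0])]
w = suppression_weights(desired, interf)
```

A voxel containing a fraction f of the desired feature plus any mix of interferers then yields w·(f·desired + mix) = f, which is exactly the partial volume information the transformed image visualizes.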
Modelling lidar volume-averaging and its significance to wind turbine wake measurements
Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.
2017-05-01
Lidar velocity measurements need to be interpreted differently from conventional in-situ readings. A commonly ignored factor is "volume averaging", which refers to the fact that lidars do not sample at a single, distinct point but along the entire beam length. Especially in regions with large velocity gradients, like the rotor wake, this can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using detached eddy simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume averaging. Volume averaging is captured accurately even with very few points discretising the lidar beam. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
A Single Image Dehazing Method Using Average Saturation Prior
Directory of Open Access Journals (Sweden)
Zhenfei Gu
2017-01-01
Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
International Nuclear Information System (INIS)
Whitcher, Ralph
2007-01-01
1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including cases where the source and detector planes are not parallel. SACALC_CYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate the average solid angle for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies of typically better than 0.1%. The calculated values from the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial or parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALC_CYL, to avoid rounding errors, differences less than 1E-12 are assumed to be zero.
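The Monte Carlo idea can be sketched for the one geometry that also has a closed form: a point source on the axis of a circular disk detector, where Ω = 2π(1 − d/√(d² + r²)). The snippet below is an illustrative reconstruction (not the NEA code): it samples isotropic emission directions, counts those that cross the detector plane inside the disk radius, and scales the hit fraction by 4π.

```python
import math
import random

def mc_solid_angle_disk(d, r, n=200_000, seed=42):
    """Monte Carlo estimate of the solid angle a disk of radius r
    subtends at an on-axis point source a distance d away."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # isotropic direction: cos(theta) uniform on [-1, 1]
        mu = rng.uniform(-1.0, 1.0)
        if mu <= 0.0:
            continue                           # heading away from the disk
        # radial offset where the ray crosses the detector plane z = d
        s = d * math.sqrt(1.0 - mu * mu) / mu
        if s <= r:
            hits += 1
    return 4.0 * math.pi * hits / n

omega = mc_solid_angle_disk(1.0, 1.0)
exact = 2.0 * math.pi * (1.0 - 1.0 / math.sqrt(2.0))   # analytic check
```

For tilted planes, extended sources, or the cylinder case, no such closed form exists, which is exactly where a hit-counting scheme like this (with path lengths accumulated per hit) earns its keep.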
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
Energy Technology Data Exchange (ETDEWEB)
Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)
2004-04-01
Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the "simplified reference tissue method" (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data: [¹¹C]d-threo-methylphenidate (dMP) and [¹¹C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k₂ (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [¹¹C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. a simulated standard deviation of K₁ = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV₂ − DV′ (DV₂ = distribution volume of the first tissue compartment, DV′
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
National Research Council Canada - National Science Library
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
Studies concerning average volume flow and waterpacking anomalies in thermal-hydraulics codes
International Nuclear Information System (INIS)
Lyczkowski, R.W.; Ching, J.T.; Mecham, D.C.
1977-01-01
One-dimensional hydrodynamic codes have been observed to exhibit anomalous behavior in the form of non-physical pressure oscillations and spikes. It is our experience that this anomalous behavior can sometimes result in mass depletion, steam table failure and, in severe cases, problem abortion. In addition, these non-physical pressure spikes can result in long running times when small time steps are needed in an attempt to cope with the anomalous solution behavior. The source of these pressure spikes has been conjectured to be a nonuniform enthalpy distribution, wave reflection off the closed end of a pipe, or abrupt changes in pressure history when the fluid changes from subcooled to two-phase conditions. It is demonstrated in this paper that many of the faults can be attributed to inadequate modeling of the average volume flow and of the sharp fluid density front crossing a junction. General corrective models are difficult to devise since the causes of the problems touch on the very theoretical bases of the differential field equations and the associated solution scheme. For example, the fluid homogeneity assumption and the numerical extrapolation scheme place severe restrictions on the capability of a code to adequately model certain physical phenomena involving fluid discontinuities. The need for accurate junction and local properties to describe phenomena internal to a control volume often points to additional lengthy computations that are difficult to justify in terms of computational efficiency. Corrective models that are economical to implement and use are developed. When incorporated into the one-dimensional, homogeneous transient thermal-hydraulic analysis computer code RELAP4, they help mitigate many of the code's difficulties related to average volume flow and water-packing anomalies. An average volume flow model and a critical density model are presented. Computational improvements due to these models are also demonstrated.
International Nuclear Information System (INIS)
Hirata, Akimasa; Takano, Yukinori; Fujiwara, Osamu; Kamimura, Yoshitsugu
2010-01-01
The present study quantified the volume-averaged in situ electric field in nerve tissues of anatomically based numerical Japanese male and female models for exposure to extremely low-frequency electric and magnetic fields. A quasi-static finite-difference time-domain method was applied to analyze this problem. The motivation for our investigation is that the dependence of the electric field induced in nerve tissue on the averaging volume/distance is not clear, while a cubical volume of 5 × 5 × 5 mm³ or a straight-line segment of 5 mm is suggested in some documents. The influence of non-nerve tissue surrounding nerve tissue is also discussed by considering three algorithms for calculating the averaged in situ electric field in nerve tissue. The computational results obtained herein reveal that the volume-averaged electric field in the nerve tissue decreases with the averaging volume. In addition, the 99th percentile value of the volume-averaged in situ electric field in nerve tissue is more stable than the maximal value across different averaging volumes. When non-nerve tissue surrounding nerve tissue was included in the averaging volume, the resultant in situ electric fields were less dependent on the averaging volume than in the case excluding non-nerve tissue. In situ electric fields averaged over a distance of 5 mm were comparable to or larger than those for a 5 × 5 × 5 mm³ cube, depending on the algorithm, the nerve tissue considered and the exposure scenario. (note)
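The reported behaviour (the maximum of the averaged field falls with the averaging volume while a high percentile stays comparatively stable) can be illustrated with a toy block-averaging sketch; the synthetic random field below stands in for the anatomical models, and all sizes are hypothetical:

```python
import numpy as np

def cube_average(field, n):
    """Average a 3-D field over non-overlapping n x n x n cubes."""
    s = (field.shape[0] // n) * n          # trim to a multiple of n
    f = field[:s, :s, :s]
    return f.reshape(s // n, n, s // n, n, s // n, n).mean(axis=(1, 3, 5))

rng = np.random.default_rng(0)
# synthetic "in situ field": heavy-tailed values mimic local hot spots
field = rng.lognormal(mean=0.0, sigma=1.0, size=(60, 60, 60))

for n in (1, 3, 5):                        # averaging-cube edge in voxels
    avg = cube_average(field, n)
    print(n, round(avg.max(), 3), round(np.percentile(avg, 99), 3))
```

Running this shows the maximum dropping quickly as `n` grows, while the 99th percentile changes far less, mirroring the stability argument made in the abstract.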
Average methods and their applications in Differential Geometry I
Vincze, Csaba
2013-01-01
In Minkowski geometry the metric features are based on a compact convex body containing the origin in its interior. This body works as a unit ball, with its boundary formed by the unit vectors. Using a one-homogeneous extension we obtain a so-called Minkowski functional to measure the length of vectors. Half of its square is called the energy function. Under some regularity conditions we can introduce an average Euclidean inner product by integrating the Hessian matrix of the energy function o...
Deriving average soliton equations with a perturbative method
International Nuclear Information System (INIS)
Ballantyne, G.J.; Gough, P.T.; Taylor, D.P.
1995-01-01
The method of multiple scales is applied to periodically amplified, lossy media described by either the nonlinear Schroedinger (NLS) equation or the Korteweg--de Vries (KdV) equation. An existing result for the NLS equation, derived in the context of nonlinear optical communications, is confirmed. The method is then applied to the KdV equation and the result is confirmed numerically
Davit, Yohan
2013-12-01
A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.
International Nuclear Information System (INIS)
Espinosa-Paredes, Gilberto
2010-01-01
The aim of this paper is to propose a framework to obtain a new formulation for multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the volume-averaged conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique, and therefore the macroscopic theories of multiphase flow, should be modified to include appropriate considerations and terms in the corresponding equations. In these cases the local form of the volume-averaged conservation equations is not appropriate to describe the multiphase system. As an example of the conservation equations without length-scale restrictions, the natural circulation boiling water reactor was considered in order to study the non-local effects on the thermal-hydraulic core performance during steady-state and transient behavior, and the results were compared with those of the classic local averaging volume conservation equations.
Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.
Cambridge Conference on School Mathematics, Newton, MA.
Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of the average, and this concept is utilized throughout the report. In the beginning, the average (arithmetic mean) of a set of numbers is considered, along with two properties of the average which often simplify…
Directory of Open Access Journals (Sweden)
Peihua Wang
Full Text Available After the implementation of the universal salt iodization (USI) program in 1996, seven cross-sectional school-based surveys have been conducted to monitor iodine deficiency disorders (IDD) among children in eastern China. This study aimed to examine the correlation of total goiter rate (TGR) with average thyroid volume (Tvol) and urinary iodine concentration (UIC) in Jiangsu province after IDD elimination. Probability-proportional-to-size sampling was applied to select 1,200 children aged 8-10 years in 30 clusters for each survey in 1995, 1997, 1999, 2001, 2002, 2005, 2009 and 2011. We measured Tvol using ultrasonography in 8,314 children and measured UIC (4,767 subjects) and salt iodine (10,184 samples) using methods recommended by the World Health Organization. Tvol was used to calculate TGR based on the reference criteria specified for sex and body surface area (BSA). TGR decreased from 55.2% in 1997 to 1.0% in 2009, and geometric means of Tvol decreased from 3.63 mL to 1.33 mL, while UIC increased from 83 μg/L in 1995 to 407 μg/L in 1999, decreased to 243 μg/L in 2005, and then increased to 345 μg/L in 2011. In the low-goiter population, a UIC above 300 μg/L was associated with a smaller average Tvol in children. After IDD elimination in Jiangsu province in 2001, lower TGR was associated with smaller average Tvol. Average Tvol was more sensitive than TGR in detecting fluctuations in UIC. A UIC of 300 μg/L may be defined as a critical value for population-level iodine status monitoring.
The disk averaged star formation relation for Local Volume dwarf galaxies
López-Sánchez, Á. R.; Lagos, C. D. P.; Young, T.; Jerjen, H.
2018-05-01
Spatially resolved H I studies of dwarf galaxies have provided a wealth of precision data. However, these high-quality, resolved observations are only possible for a handful of dwarf galaxies in the Local Volume, and future H I surveys are unlikely to improve the current situation. We therefore explore a method for estimating the surface density of the atomic gas from global H I parameters, which, conversely, are widely available. We perform empirical tests using galaxies with resolved H I maps, and find that our approximation produces values for the surface density of atomic hydrogen typically within 0.5 dex of the true value. We apply this method to a sample of 147 galaxies drawn from modern near-infrared stellar photometric surveys. With this sample we confirm a strict correlation between the atomic gas surface density and the star formation rate surface density that is vertically offset from the Kennicutt-Schmidt relation by a factor of 10-30, and significantly steeper than the classical N = 1.4 of Kennicutt (1998). We further infer the molecular fraction in this sample of low surface brightness, predominantly dwarf galaxies by assuming that the star formation relationship with molecular gas observed for spiral galaxies also holds in these galaxies, finding a molecular-to-atomic gas mass fraction within the range of 5-15%. Comparison of the data to available models shows that a model in which the thermal pressure balances the vertical gravitational field better captures the shape of the ΣSFR-Σgas relationship. However, such models fail to reproduce the data completely, suggesting that thermal pressure plays an important role in the disks of dwarf galaxies.
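The power-law index N of a Kennicutt-Schmidt-type relation ΣSFR ∝ Σgas^N is commonly obtained from a log-log least-squares fit. A sketch on synthetic data (the input slope, normalization and scatter below are hypothetical, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic gas surface densities spanning 1.5 dex, with a steep
# power-law star formation relation plus 0.2 dex lognormal scatter
sigma_gas = 10 ** rng.uniform(0.0, 1.5, 100)
true_N = 2.5                                   # assumed input slope
sigma_sfr = 1e-5 * sigma_gas ** true_N * 10 ** rng.normal(0, 0.2, 100)

# fit log(Sigma_SFR) = log(A) + N * log(Sigma_gas)
N_fit, logA = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
print(f"fitted slope N = {N_fit:.2f}")         # recovers roughly 2.5
```

With 100 points over 1.5 dex and 0.2 dex scatter, the slope is recovered to within a few hundredths, which is why such fits are standard for this kind of relation.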
Colorado Conference on iterative methods. Volume 1
Energy Technology Data Exchange (ETDEWEB)
NONE
1994-12-31
The conference provided a forum on many aspects of iterative methods. Volume I topics were: domain decomposition, nonlinear problems, integral equations and inverse problems, eigenvalue problems, and iterative software kernels. Volume II presents nonsymmetric solvers, parallel computation, theory of iterative methods, software and programming environments, ODE solvers, multigrid and multilevel methods, applications, robust iterative methods, preconditioners, Toeplitz and circulant solvers, and saddle point problems. Individual papers are indexed separately on the EDB.
Grading of mitral regurgitation in mitral valve prolapse using the average pixel intensity method.
Kamoen, Victor; El Haddad, Milad; De Buyzere, Marc; De Backer, Tine; Timmermans, Frank
2018-05-01
We recently reported the feasibility of the average pixel intensity (API) method for grading mitral regurgitation (MR) in a heterogeneous MR population. Since mitral valve prolapse (MVP) is an important cause of primary MR, we more specifically investigated the feasibility of the API method and the MR flow dynamics in patients with MVP. Transthoracic echocardiography was performed by a single operator in consecutive MVP patients (n=112). MR was assessed using the API method, color Doppler, vena contracta width (VCW), effective regurgitant orifice area (PISA-EROA) and regurgitant volume (PISA-RV). The API method was feasible in 89% of all MVP patients (68% and 71% for the VCW and PISA methods, respectively); values in MVP with non-holosystolic MR were 0.989 and 0.995. For the overall MVP-MR population, API had significant correlations with direct and indirect measures of MR severity. Based on ROC curves, an API cutoff value of 125 au was suggested to identify severe MR in MVP; combined with the MR duration/systolic time ratio, the API method separated patients with non-severe MR from those with severe MR (API>125). Finally, API analysis of the proto-, mid- and telesystolic phases of MR in MVP showed different kinetics in non-holosystolic compared to holosystolic MVP. The API method is a feasible and reproducible method for grading MVP-MR. As the API method takes into account the temporal MR flow changes during the entire systolic cycle, it may be of added value in clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Chieh-Fan Chen
2011-01-01
Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlations between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values using the mean absolute percentage error. The sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma visits, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
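A full ARIMA fit needs a statistics package; the evaluation logic, however, can be sketched with a plain AR(1) model and the mean absolute percentage error on a held-out window. The series below is synthetic, not the study's data:

```python
import numpy as np

def fit_ar1(y):
    """Least-squares fit of y[t] = c + phi * y[t-1]."""
    phi, c = np.polyfit(y[:-1], y[1:], 1)
    return c, phi

def forecast_ar1(y, c, phi, steps):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], y[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return np.array(out)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

rng = np.random.default_rng(2)
# synthetic monthly ED revenue: persistence around a mean of ~100, plus noise
y = [100.0]
for _ in range(60):
    y.append(50 + 0.5 * y[-1] + rng.normal(0, 2))
y = np.array(y)

c, phi = fit_ar1(y[:-6])                 # hold out the last 6 months
pred = forecast_ar1(y[:-6], c, phi, 6)
print(f"MAPE = {mape(y[-6:], pred):.1f}%")
```

Comparing the MAPE across models and hold-out windows is exactly the kind of accuracy assessment the abstract describes, just with AR(1) standing in for the full ARIMA machinery.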
Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.
2012-04-01
Consistently with the two-region model working hypotheses, we subdivide the pore space into two volumes, which we select according to the features of the local micro-scale velocity field. Assuming separation of scales, the mathematical development associated with the averaging method in the two volumes leads to a generalized two-equation model. The final (upscaled) formulation includes the standard first-order mass exchange term together with additional terms, which we discuss. Our developments allow us to identify the assumptions that are usually implicitly embedded in the adoption of a two-region mobile-mobile model. All macro-scale properties introduced in this model can be determined explicitly from the pore-scale geometry and hydrodynamics through the solution of a set of closure equations. We pursue here an unsteady closure of the problem, leading to the occurrence of nonlocal (in time) terms in the upscaled system of equations. We provide the solution of the closure problems for a simple application, documenting the time-dependent and asymptotic behavior of the system.
Directory of Open Access Journals (Sweden)
Min Hye Jang
Full Text Available In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the KI-67 LI measured by the two methods were similar (Area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.
Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon
2017-01-01
In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the KI-67 LI measured by the two methods were similar (Area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.
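The two scoring strategies compared in the study amount to simple arithmetic on per-field counts. A minimal sketch (the counts below are hypothetical, not data from the paper):

```python
def ki67_li(positive, total):
    """Labeling index (%) of one counted field."""
    return 100.0 * positive / total

# hypothetical counts (positive cells, total cells) from three
# representative fields of a single tumor
fields = [(120, 1000), (250, 1000), (180, 1000)]
lis = [ki67_li(p, t) for p, t in fields]

average_li = sum(lis) / len(lis)    # average method: mean of all fields
hot_spot_li = max(lis)              # hot spot method: highest field
h_a_ratio = hot_spot_li / average_li
delta_ki67 = hot_spot_li - average_li

print(average_li, hot_spot_li, round(h_a_ratio, 2), round(delta_ki67, 1))
```

The hot-spot score hinges on a single field, which is why repeated counts can flip a tumor across a high/low cutoff; the average smooths that field-to-field variability, matching the reproducibility argument in the conclusion.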
Method and apparatus for imaging volume data
International Nuclear Information System (INIS)
Drebin, R.; Carpenter, L.C.
1987-01-01
An imaging system projects a two dimensional representation of three dimensional volumes where surface boundaries and objects internal to the volumes are readily shown, and hidden surfaces and the surface boundaries themselves are accurately rendered by determining volume elements or voxels. An image volume representing a volume object or data structure is written into memory. A color and opacity is assigned to each voxel within the volume and stored as a red (R), green (G), blue (B), and opacity (A) component, three dimensional data volume. The RGBA assignment for each voxel is determined based on the percentage component composition of the materials represented in the volume and, thus, the percentage of color and transparency associated with those materials. The voxels in the RGBA volume are used as mathematical filters such that each successive voxel filter is overlaid on a prior background voxel filter. Through a linear interpolation, a new background filter is determined and generated. The interpolation is successively performed for all voxels up to the frontmost voxel for the plane of view. The method is repeated until all display voxels are determined for the plane of view. (author)
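The successive overlay of voxel filters described above corresponds to the standard back-to-front "over" compositing recurrence. A minimal sketch with premultiplied-alpha RGBA voxels (hypothetical values; an illustration of the compositing idea, not the patented apparatus itself):

```python
def composite_ray(voxels):
    """Back-to-front 'over' compositing of (r, g, b, a) voxels along a ray.

    voxels are ordered back to front; each color component is assumed
    to be already opacity-weighted (premultiplied alpha).
    """
    r = g = b = a = 0.0
    for vr, vg, vb, va in voxels:            # overlay each voxel filter
        r = vr + (1.0 - va) * r              # new foreground over old
        g = vg + (1.0 - va) * g              # background, per channel
        b = vb + (1.0 - va) * b
        a = va + (1.0 - va) * a
    return r, g, b, a

# hypothetical ray: a faint red voxel behind a fully opaque blue one
ray = [(0.3, 0.0, 0.0, 0.3),   # back voxel
       (0.0, 0.0, 1.0, 1.0)]   # front voxel, opacity 1
print(composite_ray(ray))       # front voxel completely hides the back one
```

Because the front voxel is fully opaque, the result is pure blue; with a semi-transparent front voxel, the background would blend through in proportion to (1 - alpha), which is the linear interpolation the abstract refers to.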
Directory of Open Access Journals (Sweden)
Yu-Chia Chang
2008-01-01
Full Text Available Three cruises with a shipboard Acoustic Doppler Current Profiler (ADCP) were performed along a transect across the Peng-hu Channel (PHC) in the Taiwan Strait during 2003-2004 in order to investigate the feasibility and accuracy of the phase-averaging method for eliminating tidal components from shipboard ADCP measurements of currents. In each cruise, measurement was repeated a number of times along the transect with a specified time lag of either 5, 6.21, or 8 hr, and the repeated data at the same location were averaged to eliminate the tidal currents; this is the so-called "phase-averaging method". We employed 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods in this study. The residual currents and volume transport of the PHC derived from the various phase-averaging methods were intercompared and were also compared with results of the least-square harmonic reduction method proposed by Simpson et al. (1990) and the least-square interpolation method using a Gaussian function (Wang et al. 2004). The estimated uncertainty of the residual flow through the PHC derived from the 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods is 0.3, 0.3, 1.3, and 4.6 cm s-1, respectively. Procedures for choosing the best phase-averaging method to remove tidal currents in any particular region are also suggested.
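Why a 6.21 hr lag is a natural choice can be seen in a one-line calculation: 6.21 hr is half the 12.42 hr M2 tidal period, so two samples half a period apart cancel the M2 component exactly. A sketch with a synthetic current (the residual and tidal amplitude values are hypothetical):

```python
import math

M2_PERIOD_H = 12.42   # period of the dominant M2 tidal constituent, hours

def current(t_h, residual=10.0, tidal_amp=30.0):
    """Synthetic along-channel current (cm/s): residual plus an M2 tide."""
    return residual + tidal_amp * math.sin(2 * math.pi * t_h / M2_PERIOD_H)

t0 = 3.7                                   # arbitrary first transect time
# 2-phase average: two transects separated by half an M2 period
pair_avg = (current(t0) + current(t0 + M2_PERIOD_H / 2)) / 2
print(round(pair_avg, 6))                  # the M2 part cancels
```

Since sin(x + π) = -sin(x), the tidal terms cancel and only the 10 cm/s residual survives. Real records contain further constituents and noise, which is why averaging over more phases (and the comparison methods cited above) reduces the uncertainty further.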
Review of the different methods to derive average spacing from resolved resonance parameters sets
International Nuclear Information System (INIS)
Fort, E.; Derrien, H.; Lafond, D.
1979-12-01
The average spacing of resonances is an important parameter for statistical model calculations, especially for non-fissile nuclei. The different methods to derive this average value from resolved resonance parameter sets have been reviewed and analyzed in order to tentatively detect their respective weaknesses and to propose recommendations. Possible improvements are suggested
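The simplest estimator of the average spacing is the mean nearest-neighbour gap of the resolved resonance energies. A sketch with hypothetical resonance energies (not taken from any evaluation):

```python
def average_spacing(energies):
    """Mean nearest-neighbour spacing of resonance energies (same unit)."""
    e = sorted(energies)
    gaps = [b - a for a, b in zip(e, e[1:])]
    return sum(gaps) / len(gaps)

# hypothetical resolved resonance energies, in eV
res = [6.7, 21.0, 36.7, 66.0, 80.7, 102.5]
print(round(average_spacing(res), 2))   # mean gap over the resolved range
```

Note that the mean gap telescopes to (E_max - E_min)/(n - 1), so any missed weak resonances inflate the estimate; handling such missed levels is one reason the more elaborate methods reviewed here are needed.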
International Nuclear Information System (INIS)
Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun
2015-01-01
Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort with only a minor loss of accuracy. • An analysis model is used to justify the choice of the optimized averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann transport equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce the storage and computation, while the capability of dealing with complicated geometries is preserved since the same ray tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimized averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. Numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angle-dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy
Nonlinear Conservation Laws and Finite Volume Methods
Leveque, Randall J.
Introduction Software Notation Classification of Differential Equations Derivation of Conservation Laws The Euler Equations of Gas Dynamics Dissipative Fluxes Source Terms Radiative Transfer and Isothermal Equations Multi-dimensional Conservation Laws The Shock Tube Problem Mathematical Theory of Hyperbolic Systems Scalar Equations Linear Hyperbolic Systems Nonlinear Systems The Riemann Problem for the Euler Equations Numerical Methods in One Dimension Finite Difference Theory Finite Volume Methods Importance of Conservation Form - Incorrect Shock Speeds Numerical Flux Functions Godunov's Method Approximate Riemann Solvers High-Resolution Methods Other Approaches Boundary Conditions Source Terms and Fractional Steps Unsplit Methods Fractional Step Methods General Formulation of Fractional Step Methods Stiff Source Terms Quasi-stationary Flow and Gravity Multi-dimensional Problems Dimensional Splitting Multi-dimensional Finite Volume Methods Grids and Adaptive Refinement Computational Difficulties Low-Density Flows Discrete Shocks and Viscous Profiles Start-Up Errors Wall Heating Slow-Moving Shocks Grid Orientation Effects Grid-Aligned Shocks Magnetohydrodynamics The MHD Equations One-Dimensional MHD Solving the Riemann Problem Nonstrict Hyperbolicity Stiffness The Divergence of B Riemann Problems in Multi-dimensional MHD Staggered Grids The 8-Wave Riemann Solver Relativistic Hydrodynamics Conservation Laws in Spacetime The Continuity Equation The 4-Momentum of a Particle The Stress-Energy Tensor Finite Volume Methods Multi-dimensional Relativistic Flow Gravitation and General Relativity References
International Nuclear Information System (INIS)
Wang Dalun; Li Benci; Wang Xiuchun; Li Yijun; Zhang Shaohua; He Yongwu
1991-07-01
The average fission fraction of 238U caused by 14 MeV neutrons in assemblies with a large volume of depleted uranium has been determined. The measured value of P_f^238U(R_∞^depleted) at 14 MeV was 0.897 ± 0.036. Measurements were also completed for the neutron flux distribution and the average fission fraction of the 235U isotope in a depleted uranium sphere. Values of P_f^238U(R^depleted) have been obtained by using a series of uranium spheres. For a sphere with Φ 600, P_f^238U(R_300^depleted) is 0.823 ± 0.041; the density of the depleted uranium assembly is 18.8 g/cm³ and the total weight of the assembly is about 2.8 t
Solving hyperbolic equations with finite volume methods
Vázquez-Cendón, M Elena
2015-01-01
Finite volume methods are used in numerous applications and by a broad multidisciplinary scientific community. The book communicates this important tool to students, researchers in training and academics involved in the training of students in different science and technology fields. The selection of content is based on the author’s experience giving PhD and master courses in different universities. In the book the introduction of new concepts and numerical methods go together with simple exercises, examples and applications that contribute to reinforce them. In addition, some of them involve the execution of MATLAB codes. The author promotes an understanding of common terminology with a balance between mathematical rigor and physical intuition that characterizes the origin of the methods. This book aims to be a first contact with finite volume methods. Once readers have studied it, they will be able to follow more specific bibliographical references and use commercial programs or open source software withi...
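As a taste of the introductory material such a book covers, here is a minimal sketch of a first-order upwind finite volume scheme for the scalar advection equation u_t + a u_x = 0 with periodic boundaries (the book uses MATLAB; Python is used here purely for illustration, and the setup is a synthetic square pulse):

```python
import numpy as np

def upwind_advection(u0, a, dx, dt, steps):
    """First-order upwind finite volume update for u_t + a*u_x = 0, a > 0,
    with periodic boundaries. Stability requires the CFL condition
    a*dt/dx <= 1."""
    u = u0.copy()
    nu = a * dt / dx                        # Courant number
    assert nu <= 1.0, "CFL condition violated"
    for _ in range(steps):
        # cell update from the difference of upwind interface fluxes
        u = u - nu * (u - np.roll(u, 1))
    return u

n = 100
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx               # cell centers on [0, 1]
u0 = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # square pulse

# with Courant number exactly 1 the update is an exact shift, so after
# one full period the pulse returns to its starting position
u = upwind_advection(u0, a=1.0, dx=dx, dt=dx, steps=n)
print(np.abs(u - u0).max())
```

With dt below the CFL limit the scheme remains stable but numerically diffusive, smearing the pulse edges; capturing discontinuities sharply is what the high-resolution methods treated later in such texts address.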
International Nuclear Information System (INIS)
Levenshtam, V B
2006-01-01
We justify the averaging method for abstract parabolic equations with stationary principal part that contain non-linearities (subordinate to the principal part) some of whose terms are rapidly oscillating in time with zero mean and are proportional to the square root of the frequency of oscillation. Our interest in the exponent 1/2 is motivated by the fact that terms proportional to lower powers of the frequency have no influence on the average. For linear equations of the same type, we justify an algorithm for the study of the stability of solutions in the case when the stationary averaged problem has eigenvalues on the imaginary axis (the critical case)
Lee, Jennifer
2012-01-01
The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…
Method of measuring a liquid pool volume
Garcia, G.V.; Carlson, N.M.; Donaldson, A.D.
1991-03-19
A method of measuring a molten metal liquid pool volume and in particular molten titanium liquid pools is disclosed, including the steps of (a) generating an ultrasonic wave at the surface of the molten metal liquid pool, (b) shining a light on the surface of a molten metal liquid pool, (c) detecting a change in the frequency of light, (d) detecting an ultrasonic wave echo at the surface of the molten metal liquid pool, and (e) computing the volume of the molten metal liquid. 3 figures.
International Nuclear Information System (INIS)
Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K
2015-01-01
Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA's Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-Z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian's Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material's atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume-averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly-spaced gantry angles, as well as selected patient plans, were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that the previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume-averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly improved.
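The volume-averaging step (mapping the fine TPS dose grid onto the finite chamber elements) can be sketched as a simple block mean. The grid spacing, block size and dose values below are hypothetical, not taken from the study:

```python
import numpy as np

def chamber_average(dose, block):
    """Average a fine 2-D dose grid over non-overlapping block x block
    regions, yielding one value per ionization chamber element."""
    ny, nx = dose.shape
    assert ny % block == 0 and nx % block == 0, "grid must tile evenly"
    return dose.reshape(ny // block, block,
                        nx // block, block).mean(axis=(1, 3))

# hypothetical 1 mm dose grid with a gentle gradient, averaged over
# 5 mm chamber elements
fine = np.fromfunction(lambda i, j: 2.0 - 0.01 * (i + j), (20, 20))
coarse = chamber_average(fine, 5)
print(coarse.shape)   # one entry per chamber element, e.g. a 4 x 4 matrix
```

Comparing this coarse matrix, rather than point doses, against the measured chamber readings removes the systematic disagreement that volume averaging would otherwise introduce in steep dose gradients.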
MIT extraction method for measuring average subchannel axial velocities in reactor assemblies
International Nuclear Information System (INIS)
Hawley, J.T.; Chiu, C.; Todreas, N.E.
1980-08-01
The MIT extraction method for obtaining flow split data for individual subchannels is described in detail. An analysis of the method is presented which shows that isokinetic values of the subchannel flow rates are obtained directly even though the method is non-isokinetic. Time saving methods are discussed for obtaining the average value of the interior region flow split parameter. An analysis of the method at low bundle flow rates indicates that there is no inherent low flow rate limitation on the method and suggests a way to obtain laminar flow split data
Directory of Open Access Journals (Sweden)
Björn Nitzsche
2015-06-01
Full Text Available Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs and rodents. However, they are lacking for other species relevant in experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries; 12 ewes and 26 neutered rams) were acquired on a 1.5T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis, including examinations with respect to body weight, age and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and body weight explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; however, ewes showed significantly more GM per body weight than neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.
Development and analysis of finite volume methods
International Nuclear Information System (INIS)
Omnes, P.
2010-05-01
This document is a synthesis of a set of works concerning the development and analysis of finite volume methods used for the numerical approximation of partial differential equations (PDEs) stemming from physics. In the first part, the document deals with co-localized Godunov-type schemes for the Maxwell and wave equations, with a study of the loss of precision of this scheme at low Mach number. In the second part, discrete differential operators are built on fairly general, in particular very distorted or nonconforming, bidimensional meshes. These operators are used to approximate the solutions of PDEs modelling diffusion, electrostatics, magnetostatics, and electromagnetism by the discrete duality finite volume (DDFV) method on staggered meshes. The third part presents the numerical analysis and some a priori as well as a posteriori error estimates for the discretization of the Laplace equation by the DDFV scheme. The last part is devoted to the order of convergence, in the L2 norm, of the finite volume approximation of the solution of the Laplace equation in one dimension and on meshes with orthogonality properties in two dimensions. Necessary and sufficient conditions, relative to the mesh geometry and the regularity of the data, are provided that ensure the second-order convergence of the method. (author)
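The one-dimensional second-order convergence result mentioned at the end can be illustrated with a minimal cell-centred finite volume solver for the Poisson equation. This is a generic sketch, not the DDFV scheme of the document; the manufactured solution u(x) = sin(pi x) is an assumption chosen purely for testing the convergence order.

```python
import numpy as np

def fv_poisson_1d(n):
    """Cell-centred finite volume solve of -u'' = f on (0,1), u(0)=u(1)=0.

    A generic sketch (not the DDFV scheme): f is manufactured so that the
    exact solution is u(x) = sin(pi x).
    """
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h            # cell centres
    f = np.pi**2 * np.sin(np.pi * x)        # source term
    # Flux balance (F_{i+1/2} - F_{i-1/2} = h f_i) gives a tridiagonal system.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n - 1:
            A[i, i + 1] = -1.0
    # Dirichlet ends: the boundary flux uses the half-cell difference
    # (u_1 - 0)/(h/2), which changes the diagonal entry from 2 to 3.
    A[0, 0] = A[-1, -1] = 3.0
    u = np.linalg.solve(A / h**2, f)
    return np.sqrt(h * np.sum((u - np.sin(np.pi * x))**2))  # discrete L2 error

# halving the mesh size should divide the L2 error by about 4 (second order)
e_coarse, e_fine = fv_poisson_1d(40), fv_poisson_1d(80)
```

Observing the error ratio between the two meshes is the standard numerical check of the second-order claim.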
Application of the Value Averaging Investment Method on the US Stock Market
Directory of Open Access Journals (Sweden)
Martin Širůček
2015-01-01
Full Text Available The paper focuses on empirical testing of regular investment, particularly the value averaging investment method, on real data from the US stock market in the years 1990–2013. The 23-year period was chosen because the market passed through consistently interesting conditions, so this regular investment method could be tested in both bull (expansion) and bear (recession) periods. The analysis focuses on the results obtained by using this investment method from the viewpoint of return and risk on selected investment horizons (short-term 1 year, medium-term 5 years, and long-term 10 years). The selected aim is reached by using the ratio between profit and risk. The revenue-risk profile is the ratio of the average annual profit rate, measured for each investment by the internal rate of return, to the average annual risk expressed by the sample standard deviation. The obtained results show that regular investment is suitable for a long investment horizon: the longer the investment horizon, the better the revenue-risk (Sharpe) ratio. Based on the results obtained, specific investment recommendations are presented in the conclusion, e.g. whether this investment method is suitable for a long investment period, and whether it is better to use value averaging in a growing, sinking, or sluggish market.
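The mechanics of value averaging can be sketched in a few lines: the investor fixes a target path for portfolio value and, each period, buys (or sells) exactly enough to reach it. The prices and target increment below are hypothetical, not data from the study.

```python
def value_averaging(prices, target_step=100.0):
    """Value averaging: after period t the portfolio should be worth
    (t+1)*target_step; each period we buy (or sell) the difference.

    A toy sketch; prices and the target path are hypothetical.
    """
    shares = 0.0
    invested = 0.0
    for t, p in enumerate(prices):
        target = (t + 1) * target_step
        value = shares * p
        cash_flow = target - value      # buy if positive, sell if negative
        shares += cash_flow / p
        invested += cash_flow
    return shares, invested

shares, invested = value_averaging([10, 8, 12.5, 10])
```

In this toy run the portfolio ends exactly on its 400-unit target although only 367.5 was paid in: more shares are bought when prices fall and some are sold when prices rise, which is the return-enhancing (and cash-flow-risk-adding) behaviour the method trades on.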
Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor
2016-10-01
Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
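In its simplest unconstrained variant, Granger-Ramanathan averaging amounts to a least-squares regression of the observations on the member simulations, with the regression coefficients used as combination weights. The sketch below is generic (it is not the authors' implementation, and GRA also exists in constrained variants) and the data are synthetic.

```python
import numpy as np

def gra_weights(sims, obs):
    """Granger-Ramanathan averaging (unconstrained variant): weights from a
    least-squares fit of observations on member simulations.

    sims: (T, k) matrix of k ensemble members; obs: (T,) observations.
    """
    w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
    return w

# toy illustration: obs is an exact mix of two members, GRA recovers it
rng = np.random.default_rng(0)
sims = rng.normal(size=(200, 2))
obs = 0.7 * sims[:, 0] + 0.3 * sims[:, 1]
w = gra_weights(sims, obs)
```

Unlike the simple arithmetic mean, the weights are fitted to the calibration record, which is why GRA can outperform SAM when members have unequal skill.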
Inverse methods for estimating primary input signals from time-averaged isotope profiles
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
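The "minimum length solution of the linear system Am = d" is the minimum-norm least-squares solution, obtainable with the pseudoinverse. The averaging matrix below is a toy two-point smoother standing in for the temporal/spatial averaging of amelogenesis and sampling; it is not a real enamel-maturation model.

```python
import numpy as np

# Minimum-length (minimum-norm) solution of A m = d for an underdetermined
# system: d is the "measured" time-averaged profile, A the averaging matrix,
# and m the input signal that is sought. Toy numbers for illustration only.
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])
m_true = np.array([1.0, 3.0, 3.0, 1.0])
d = A @ m_true                      # time-averaged "measured profile"
m_est = np.linalg.pinv(A) @ d       # minimum-norm solution of A m = d
```

With fewer measurements than unknowns the system has infinitely many exact solutions; the pseudoinverse picks the one of smallest Euclidean norm, which is the regularization choice named in the abstract.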
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
subject to code matrices that follow the structure given by (113): [y_R; y_I] = sqrt(E_s/(2L)) [G_R1, −G_I1; G_I2, G_R2] [Q_R, −Q_I; Q_I, Q_R] [b_R; b_I] + [n_R; n_I] … [Q_R, −Q_I; Q_I, Q_R] [b_+; b_−] + [n_+; n_−] (115). The average likelihood for type 4 CDMA (116) is a special case of type 1 CDMA with twice the code length and… AVERAGE LIKELIHOOD METHODS OF CLASSIFICATION OF CODE DIVISION MULTIPLE ACCESS (CDMA). MAY 2016. FINAL TECHNICAL REPORT. APPROVED FOR PUBLIC RELEASE
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations among the parameters of the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If an external event has a unique vibration frequency, the control parameters of the moving average method should be optimized to detect it efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, and a fiber Bragg grating filter, and a light-receiving part, which has a photodetector and a high-speed data acquisition system. The moving average method operates with three control parameters: the total number of raw traces, M; the number of averaged traces, N; and the moving step size, n. The raw traces are obtained with the phase-sensitive OTDR using sound signals generated by a speaker. Using these trace data, the relation among the control parameters is analyzed. The results show that if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
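The averaging scheme described by the three parameters can be sketched directly: from M raw traces, average N consecutive traces at a time, advancing the window by n traces per step. This is a generic sketch of that bookkeeping, not the authors' acquisition code.

```python
import numpy as np

def moving_average_traces(traces, N, n):
    """Average N consecutive OTDR traces, sliding the window by n traces
    per step. traces: (M, L) array of M raw traces of length L; parameter
    names follow the abstract (M, N, n). Returns the stack of averaged
    traces."""
    M = traces.shape[0]
    starts = range(0, M - N + 1, n)
    return np.stack([traces[s:s + N].mean(axis=0) for s in starts])

# toy data: 16 identical unit traces of 8 samples each
traces = np.ones((16, 8))
out = moving_average_traces(traces, N=4, n=2)
```

Larger N suppresses more noise but smears fast events in time, while smaller n refreshes the averaged trace more often; the trade-off between the two is what the paper optimizes against the event's vibration frequency.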
Directory of Open Access Journals (Sweden)
Kravtsenyuk Olga V
2007-01-01
Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method, recently presented by us, is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problems and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.
Directory of Open Access Journals (Sweden)
Vladimir V. Lyubimov
2007-01-01
Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method, recently presented by us, is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problems and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
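Of the two deblurring solvers named in the abstract, the conjugate gradient method for least squares (CGLS) can be sketched generically. The blur matrix below is a toy one-dimensional periodic convolution, not the spatially variant PSF model of the paper.

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-20):
    """Conjugate gradient for the least-squares problem min ||A x - b||
    (CGLS), a generic sketch of one of the two solvers mentioned."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    norm_s = s @ s
    for _ in range(iters):
        if norm_s < tol:                 # converged
            break
        q = A @ p
        alpha = norm_s / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        norm_s_new = s @ s
        p = s + (norm_s_new / norm_s) * p
        norm_s = norm_s_new
    return x

# toy 1-D blur: periodic convolution with a short kernel, then restoration
n = 32
kernel = np.array([0.2, 0.6, 0.2])       # well conditioned blur
A = np.zeros((n, n))
for i in range(n):
    for j, k in enumerate(kernel):
        A[i, (i + j - 1) % n] = k
x_true = np.zeros(n)
x_true[10] = 1.0                          # a point source
x_rec = cgls(A, A @ x_true)
```

CGLS never forms the normal equations explicitly, which keeps it stable for the large, ill-conditioned blur systems that arise in image restoration.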
METHODS OF CONTROLLING THE AVERAGE DIAMETER OF THE THREAD WITH ASYMMETRICAL PROFILE
Directory of Open Access Journals (Sweden)
L. M. Aliomarov
2015-01-01
Full Text Available To machine threaded holes in hard materials used in marine machinery operating at high temperatures, under heavy loads, and in aggressive environments, the authors have developed a combined core drill-tap tool with a special cutting scheme, which has an asymmetric thread profile on the tap part. To control the average (pitch) diameter of the thread on the tap part of the combined tool, the three-wire method was used, which allows continuous measurement of the average diameter along the entire profile. Deviation of the average diameter from the reference sample is registered by an inductive sensor and recorded by a recorder. Control schemes for the average diameter of threads with symmetric and asymmetric profiles are developed and presented. On the basis of these schemes, formulas are derived for calculating the theoretical positions of the wires in the thread profile during measurement of the average diameter. Comprehensive research and the introduction of the combined core drill-tap tool into the production of marine engineering, shipbuilding, and ship-repair power plant products made of hard materials showed the high efficiency of the proposed technology for machining high-quality small-diameter threaded holes that meet modern requirements.
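For the symmetric-profile case, the classical three-wire relation between the measurement over wires M, the wire diameter W, the pitch P, and the pitch (average) diameter E can be written down directly; for ISO 60-degree threads it reduces to the familiar E = M − 3W + 0.866025P. This sketch covers only symmetric profiles; the asymmetric profile of the paper requires the modified derivation the authors present, and the numbers below are invented.

```python
import math

def pitch_diameter_three_wire(M, W, P, alpha_deg=60.0):
    """Pitch (average) diameter of a symmetric thread from a three-wire
    measurement: E = M - W*(1 + 1/sin(alpha/2)) + (P/2)*cot(alpha/2),
    where alpha is the included thread angle. Valid for symmetric
    profiles only; asymmetric profiles (as in the paper) need a
    modified geometry."""
    half = math.radians(alpha_deg) / 2.0
    return M - W * (1.0 + 1.0 / math.sin(half)) + (P / 2.0) / math.tan(half)

# hypothetical M16x2.5-style numbers: for 60 deg, the constants reduce to
# 1 + 1/sin(30 deg) = 3 and (1/2)/tan(30 deg) = 0.866025...
E = pitch_diameter_three_wire(M=18.577, W=1.05, P=2.5)
```

The continuous-measurement scheme of the paper effectively tracks E along the thread, flagging deviations via the inductive sensor.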
International Nuclear Information System (INIS)
Zhao, W.H.; Cox, S.F.J.
1980-07-01
In the NMR measurement of dynamic nuclear polarization, a volume average is obtained where the contribution from different parts of the sample is weighted according to the local intensity of the RF field component perpendicular to the large static field. A method of mapping this quantity is described. A small metallic object whose geometry is chosen to perturb the appropriate RF component is scanned through the region to be occupied by the sample. The response of the phase angle of the impedance of a tuned circuit comprising the NMR coil gives a direct measurement of the local weighting factor. The correlation between theory and experiment was obtained by using a circular coil. The measuring method, checked in this way, was then used to investigate the field profiles of practical coils which are required to be rectangular for a proposed experimental neutron polarizing filter. This method can be used to evaluate other practical RF coils. (author)
Topology optimization using the finite volume method
DEFF Research Database (Denmark)
in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design...... derivative of the system matrix K and in how one computes the discretized version of certain objective functions. Thus for a cost function for minimum dissipated energy (like minimum compliance for an elastic structure) one obtains an expression c = u^T K̃ u, where K̃ is different from K...... the well known Reuss lower bound. [1] Bendsøe, M.P.; Sigmund, O. 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag [2] Versteeg, H.K.; Malalasekera, W. 1995: An Introduction to Computational Fluid Dynamics: The Finite Volume Method. London: Longman...
Finite Volume Method for Unstructured Grid
International Nuclear Information System (INIS)
Casmara; Kardana, N.D.
1997-01-01
The success of a computational method depends on the solution algorithm and the mesh generation technique. Cell distributions are needed which allow the solution to be calculated over the entire body surface with sufficient accuracy. To handle mesh generation for multi-connected regions such as multi-element bodies, the unstructured finite volume method is applied. The advantages of unstructured meshes are that they provide a great deal more flexibility for generating meshes about complex geometries and provide a natural setting for the use of adaptive meshing. The governing equations to be discretized are the inviscid, rotational Euler equations. Applications of the method are evaluated on flow around single and multi-component bodies.
Method for analysis of averages over transmission energy of resonance neutrons
International Nuclear Information System (INIS)
Komarov, A.V.; Luk'yanov, A.A.
1981-01-01
Experimental data on transmissions through iron specimens in different energy groups have been analyzed on the basis of an earlier developed theoretical model describing resonance-neutron averages over transmission energy as functions of specimen thickness and mean resonance parameters. The parameter values obtained agree with the corresponding data evaluated in the theory of mean neutron cross sections. The suggested description of the transmission permits experimental results to be reproduced for any specimen thickness
Directory of Open Access Journals (Sweden)
O. O. Gritzay
2016-12-01
Full Text Available The development of a technique for determining total neutron cross sections from measurements of sample transmission by filtered neutrons scattered on hydrogen is described. One method of determining the transmission T_H of a 52Cr sample from the measurements, using the average energy shift method for a filtered neutron beam, is presented. Using two methods of experimental data processing, one of which is presented in this paper (the other in [1]), a set of transmissions obtained for different samples and different measurement angles is presented. The two methods are fundamentally different; therefore, the processing results obtained with them can be considered independent. In the future, the obtained set of transmissions is planned to be used for determination of the parameters E0, Γn, and R′ of the 52Cr resonance at an energy of 50 keV.
2010-07-01
... volume fraction of HAP in the actual solvent loss? Section 63.2854, Protection of Environment... National Emission Standards for Hazardous Air Pollutants for Source Categories; National Emission Standards for Hazardous Air Pollutants: Solvent... What is the average volume fraction of HAP in the actual solvent loss? (a) This section describes the information and...
Research of isolated resonances using the average energy shift method for filtered neutron beam
International Nuclear Information System (INIS)
Gritzay, O.O.; Grymalo, A.K.; Kolotyi, V.V.; Mityushkin, O.O.; Venediktov, V.M.
2010-01-01
This work is devoted to a detailed description of one of the research directions of the Neutron Physics Department (NPD), namely the study of resonance parameters of isolated nuclear levels on the filtered neutron beam at the horizontal experimental channel HEC-8 of the WWR-M reactor. The study of resonance parameters remains a topical problem, because for many nuclei there are essential differences between the resonance parameter values given in different evaluated nuclear data libraries (ENDLs). Resonance parameters can be studied using a set of neutron cross sections obtained with the same filter but with slightly shifted filter average energies. The shift of the filter average energy can be produced by several processes; in this work it is realized through the dependence of neutron energy on scattering angle. The method is implemented in the experimental equipment.
A primal sub-gradient method for structured classification with the averaged sum loss
Directory of Open Access Journals (Sweden)
Mančev Dejan
2014-12-01
Full Text Available We present a primal sub-gradient method for structured SVM optimization defined with the averaged sum of hinge losses inside each example. Compared with the mini-batch version of the Pegasos algorithm for the structured case, which deals with a single structure from each of multiple examples, our algorithm considers multiple structures from a single example in one update. This approach should increase the amount of information learned from the example. We show that the proposed version with the averaged sum loss has at least the same guarantees in terms of the prediction loss as the stochastic version. Experiments are conducted on two sequence labeling problems, shallow parsing and part-of-speech tagging, and also include a comparison with other popular sequential structured learning algorithms.
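The update described above — one sub-gradient step on the averaged sum of hinge losses over the candidate structures of a single example — can be sketched schematically. The structured setting (sequences, parses) is abstracted into a generic feature map here; this is an illustration of the loss construction, not the authors' algorithm or experiments.

```python
import numpy as np

def subgradient_step(w, x, y, structures, feat, lam, eta):
    """One primal sub-gradient step on the averaged sum of hinge losses
    for a single example: loss = (1/|S|) * sum over candidate structures s
    of max(0, 1 + w.(feat(x, s) - feat(x, y))), plus L2 regularization.

    Schematic sketch: feat maps (input, structure) to a feature vector."""
    g = lam * w                        # gradient of the L2 regularizer
    phi_y = feat(x, y)
    for s in structures:
        margin = 1.0 + w @ (feat(x, s) - phi_y)
        if s != y and margin > 0.0:    # active hinge term
            g = g + (feat(x, s) - phi_y) / len(structures)
    return w - eta * g

# toy usage: the "structures" are just 3 class labels with one-hot features
feat = lambda x, s: x * np.eye(3)[s]
w = np.zeros(3)
for _ in range(100):
    w = subgradient_step(w, np.array(1.0), 0, [0, 1, 2], feat, lam=0.01, eta=0.1)
```

Averaging over all candidate structures of one example in a single update is the point of contrast with the mini-batch Pegasos variant, which touches one structure per example.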
International Nuclear Information System (INIS)
Ge, Gen; Li, ZePeng
2016-01-01
A modified stochastic averaging method for single-degree-of-freedom (SDOF) oscillators with strong nonlinearity under white noise excitation is proposed. Since the existing approach for strongly nonlinear SDOF systems derived by Zhu and Huang [14, 15] is quite time consuming in calculating the drift and diffusion coefficients, and the resulting expressions are considerably long, He's energy balance method is applied to overcome this minor defect of the Zhu and Huang method. By assigning an averaged frequency beforehand, the modified method offers more concise approximate expressions for the drift and diffusion coefficients without unduly weakening the accuracy with which the system responses are predicted. Three examples are given to illustrate the proposed approach: an oscillator with coexisting cubic and quadratic nonlinearities, a quadratic nonlinear oscillator under external white noise excitation, and an externally excited Duffing–Rayleigh oscillator. The three examples were excited by Gaussian white noise and Gaussian colored noise separately. The stationary probability densities of amplitude and energy, together with the joint probability density of displacement and velocity, are studied to verify the presented approach. The reliability of the systems was also investigated to offer further support. Digital simulations were carried out, and their output coincides well with the theoretical approximations.
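The role of a pre-assigned averaged frequency can be illustrated with He's energy balance method on the undamped Duffing oscillator u'' + u + eps*u^3 = 0, for which it yields the closed form omega(a) = sqrt(1 + 3*eps*a^2/4). The oscillator and parameter values are illustrative stand-ins, not the paper's systems.

```python
import numpy as np

def duffing_freq_energy_balance(a, eps):
    """He's energy balance estimate of the amplitude-dependent frequency
    of u'' + u + eps*u**3 = 0: omega(a) = sqrt(1 + 3*eps*a**2/4)."""
    return np.sqrt(1.0 + 0.75 * eps * a * a)

def duffing_freq_numeric(a, eps, dt=1e-4):
    """Reference frequency from direct integration (velocity Verlet):
    starting at (u, u') = (a, 0), the first zero crossing of u occurs
    at a quarter period, so omega = pi / (2 t)."""
    u, v, t = a, 0.0, 0.0
    while u > 0.0:
        acc = -(u + eps * u**3)
        v_half = v + 0.5 * dt * acc
        u = u + dt * v_half
        acc = -(u + eps * u**3)
        v = v_half + 0.5 * dt * acc
        t += dt
    return np.pi / (2.0 * t)

w_eb = duffing_freq_energy_balance(1.0, 0.2)
w_ref = duffing_freq_numeric(1.0, 0.2)
```

Such a concise closed-form frequency is what makes the drift and diffusion coefficients shorter than in the Zhu-Huang formulation while staying close to the true oscillation frequency.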
A spectral measurement method for determining white OLED average junction temperatures
Zhu, Yiting; Narendran, Nadarajah
2016-09-01
The objective of this study was to investigate an indirect method of measuring the average junction temperature of a white organic light-emitting diode (OLED) based on temperature sensitivity differences in the radiant power emitted by individual emitter materials (i.e., "blue," "green," and "red"). The measured spectral power distributions (SPDs) of the white OLED as a function of temperature showed amplitude decrease as a function of temperature in the different spectral bands, red, green, and blue. Analyzed data showed a good linear correlation between the integrated radiance for each spectral band and the OLED panel temperature, measured at a reference point on the back surface of the panel. The integrated radiance ratio of the spectral band green compared to red, (G/R), correlates linearly with panel temperature. Assuming that the panel reference point temperature is proportional to the average junction temperature of the OLED panel, the G/R ratio can be used for estimating the average junction temperature of an OLED panel.
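The calibration-and-inversion step implied by the abstract — fit the linear G/R-versus-temperature relation, then invert it for an unknown panel — can be sketched in a few lines. The calibration numbers below are invented for illustration; they are not the paper's measurements.

```python
import numpy as np

# Calibration sketch: the abstract reports a linear relation between the
# green-to-red integrated radiance ratio (G/R) and panel temperature.
temps = np.array([25.0, 35.0, 45.0, 55.0, 65.0])        # deg C (synthetic)
g_over_r = np.array([1.00, 0.97, 0.94, 0.91, 0.88])     # synthetic G/R

slope, intercept = np.polyfit(temps, g_over_r, 1)       # degree-1 fit

def estimate_temperature(ratio):
    """Invert the calibration line to estimate panel (and, by the paper's
    assumption, average junction) temperature from a measured G/R ratio."""
    return (ratio - intercept) / slope

t_est = estimate_temperature(0.955)
```

The usefulness of the method rests on the stated assumption that the back-surface reference temperature tracks the average junction temperature.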
Topology optimization using the finite volume method
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Bendsøe, Martin P.; Sigmund, Ole
2005-01-01
in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design...... derivative of the system matrix $\mathbf K$ and in how one computes the discretized version of certain objective functions. Thus for a cost function for minimum dissipated energy (like minimum compliance for an elastic structure) one obtains an expression $c = \mathbf u^T \tilde{\mathbf K} \mathbf u$...... the arithmetic and harmonic average with the latter being the well known Reuss lower bound. [1] Bendsøe, M.P. and Sigmund, O. 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag [2] Versteeg, H.K. and Malalasekera, W. 1995: An Introduction to Computational Fluid Dynamics...
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Directory of Open Access Journals (Sweden)
Deming Yuan
2014-01-01
Full Text Available This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
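The standard dual averaging scheme the paper builds on can be sketched in its bare single-agent form: sum all (noisy) subgradients seen so far, step against that running sum with a decaying scale, and project. This is a generic sketch under those assumptions, not the multiagent saddle-point algorithm of the paper; the objective and noise level are invented.

```python
import numpy as np

def dual_averaging(grad_oracle, project, dim, steps=2000, gamma=1.0):
    """Stochastic dual averaging: accumulate all (noisy) subgradients,
    step against the accumulated sum scaled by 1/(gamma*sqrt(t)), then
    project. Bare-bones single-agent sketch of the scheme the paper
    extends to networks with approximate projections."""
    z = np.zeros(dim)                   # running sum of subgradients
    x = project(np.zeros(dim))
    avg_x = np.zeros(dim)
    rng = np.random.default_rng(1)
    for t in range(1, steps + 1):
        g = grad_oracle(x) + 0.01 * rng.normal(size=dim)  # zero-mean noise
        z += g
        x = project(-z / (gamma * np.sqrt(t)))
        avg_x += (x - avg_x) / t        # running average of the iterates
    return avg_x

# minimize ||x - c||^2 over the box [0, 1]^2; optimum is c clipped to the box
c = np.array([0.3, 2.0])
x_star = dual_averaging(lambda x: 2 * (x - c),
                        lambda v: np.clip(v, 0.0, 1.0), dim=2)
```

The zero-mean, bounded-variance noise here mirrors the stochastic subgradient assumption under which the paper proves the standard rate is preserved.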
International Nuclear Information System (INIS)
Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.
1991-08-01
This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high volume, or worst case, concentration averaging scenario; and quantifies the impact of these scenarios on identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as the regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable to both shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange among the nodes. For this task, we apply an average consensus algorithm to perform the global computations efficiently. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
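The split into local and global tasks can be illustrated with a toy network: each power-method iteration needs the global product R̄v, which is the average of local products R_i v and can therefore be obtained by average consensus instead of a fusion centre. This is a GPM-style sketch with invented local covariances and weights, not the paper's algorithm or its CoMAC acceleration.

```python
import numpy as np

def consensus_mean(values, W, rounds=50):
    """Average consensus: repeated mixing with a doubly stochastic weight
    matrix drives every node's value to the network-wide mean."""
    x = values.copy()
    for _ in range(rounds):
        x = W @ x
    return x

def decentralized_power_method(R_local, W, iters=60):
    """Sketch of a decentralized power iteration: the global product
    (the average of local products R_i @ v) is computed by average
    consensus; each node then normalizes its copy of v."""
    d = R_local[0].shape[0]
    v = np.ones(d) / np.sqrt(d)
    for _ in range(iters):
        local = np.stack([R @ v for R in R_local])   # one row per node
        avg = consensus_mean(local, W)[0]            # read any node's copy
        v = avg / np.linalg.norm(avg)
    return v

# ring of 4 nodes with Metropolis weights (doubly stochastic)
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
# toy local covariances whose average is diag(3, 1, 0.5)
R_local = [np.diag(d) for d in ([4.0, 1.0, 0.5], [2.0, 1.0, 0.5],
                                [3.0, 1.5, 0.5], [3.0, 0.5, 0.5])]
v = decentralized_power_method(R_local, W)
lam = v @ (sum(R_local) / 4) @ v                     # Rayleigh quotient
```

With enough consensus rounds per iteration the nodes agree on R̄v to numerical precision, so the distributed iteration tracks the centralized power method.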
Method and apparatus for producing average magnetic well in a reversed field pinch
International Nuclear Information System (INIS)
Ohkawa, T.
1983-01-01
A magnetic-well reversed field plasma pinch method and apparatus produces hot, magnetically confined pinch plasma in a toroidal chamber having a major toroidal axis, a minor toroidal axis, and a small aspect ratio, e.g. < 6. A pinch current channel within the plasma and at least one hyperbolic magnetic axis outside substantially all of the plasma form a region of average magnetic well surrounding the plasma current channel. The apparatus is operated so that reversal of the safety factor q and of the toroidal magnetic field takes place within the plasma. The well-producing plasma cross-section shape is produced by a conductive shell surrounding the shaped envelope and by coils. The shell is of copper or aluminium with non-conductive breaks, and is bonded to a thin aluminium envelope by silicone rubber. (author)
Daud, Shahidah Md; Ramli, Razamin; Kasim, Maznah Mat; Kayat, Kalsom; Razak, Rafidah Abd
2015-12-01
The Malaysian Homestay Programme is unique. It is classified as Community-Based Tourism (CBT): a community event in which a tourist stays with a host family for a period of time, enjoying cultural exchange and gaining new experiences. The Homestay Programme has boosted the tourism industry, with over 100 programmes currently registered with the Ministry of Culture and Tourism Malaysia. However, only a few enjoy the benefits of a successful Homestay Programme. Hence, this article seeks to identify the critical success factors for a Homestay Programme in Malaysia. An arithmetic average method is utilized to evaluate the identified success factors in a more meaningful way. The findings will help the Homestay Programme function as a community development tool that manages tourism resources, and thus help communities improve the local economy and create job opportunities.
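The arithmetic average evaluation amounts to scoring each candidate factor by the mean of the ratings it received and ranking the factors by that mean. The factor names and ratings below are invented for illustration; they are not the study's data.

```python
# Arithmetic-average scoring of candidate success factors: each factor's
# score is the mean of its ratings (hypothetical 1-5 ratings from 4 raters).
ratings = {
    "community participation": [4, 5, 4, 5],
    "leadership":              [5, 5, 4, 5],
    "marketing":               [3, 4, 3, 3],
}
scores = {f: sum(r) / len(r) for f, r in ratings.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Ranking by the plain mean treats every rater equally, which is the simplicity the method trades for ignoring rater reliability.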
International Nuclear Information System (INIS)
Lykoudis, P.S.
1995-01-01
The method of Average Magnitude Analysis is a mixture of the integral method and order-of-magnitude analysis. The paper shows how the differential conservation equations for steady-state, laminar, boundary layer flows are converted to a system of algebraic equations, where the result is a sum of the order of magnitude of each term multiplied by a weight coefficient. These coefficients are determined from integrals containing the assumed velocity and temperature profiles. The method is illustrated by applying it to the case of drag and heat transfer over an infinite flat plate. It is then applied to the case of natural convection over an infinite flat plate with and without the presence of a horizontal magnetic field, and subsequently to enclosures with aspect ratios of one or higher. The final correlation in this instance yields the Nusselt number as a function of the aspect ratio and the Rayleigh and Prandtl numbers. This correlation is tested against a wide range of small and large values of these parameters. 19 refs., 4 figs
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hae Sun; Jeong, Hyo Joon; Kim, Eun Han; Han, Moon Hee; Hwang, Won Tae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-09-15
This study analyzes the differences in the annually averaged atmospheric dispersion factors and ground deposition factors produced using two classification methods for atmospheric stability, based on the vertical temperature difference and on the standard deviation of horizontal wind direction fluctuation. The Daedeok and Wolsong nuclear sites were chosen for assessment, and the meteorological data at 10 m were applied to the evaluation of atmospheric stability. The XOQDOQ software program was used to calculate atmospheric dispersion factors and ground deposition factors. The calculation distances were 400 m, 800 m, 1,200 m, 1,600 m, 2,400 m, and 3,200 m from the radioactive material release points. All of the atmospheric dispersion factors generated using the atmospheric stability based on the vertical temperature difference were higher than those based on the standard deviation of horizontal wind direction fluctuation. On the other hand, the ground deposition factors were the same regardless of the classification method, as they were based on the graph of empirical data presented in the Nuclear Regulatory Commission's Regulatory Guide 1.111, which is unrelated to atmospheric stability for ground-level release. These results are based on meteorological data collected over the course of one year at the specified sites; nevertheless, the classification method of atmospheric stability using the vertical temperature difference is expected to be more conservative.
International Nuclear Information System (INIS)
Hu Lin; Cui Wei; Shi Hanwen; Tian Yingping; Wang Weigang; Feng Yanguang; Huang Xueyan; Liu Zhisheng
2003-01-01
Objective: To compare the relative accuracy of three methods of measuring left ventricular volume by X-ray ventriculography: the single-plane area-length method, the biplane area-length method, and the single-plane Simpson's method. Methods: Left ventricular casts were obtained within 24 hours after death from 12 persons who died of non-cardiac causes. The true left ventricular cast volume was measured by water displacement. The calculated volume of the casts was obtained with the three angiographic methods, i.e., the single-plane area-length method, the biplane area-length method, and the single-plane Simpson's method. Results: The actual average volume of the left ventricular casts was (61.17±26.49) ml. The average left ventricular volume was (97.50±35.56) ml with the single-plane area-length method, (90.51±36.33) ml with the biplane area-length method, and (65.00±23.63) ml with the single-plane Simpson's method. The left ventricular volumes calculated with the single-plane and biplane area-length methods were significantly larger than the actual volumes (P < 0.05), and also significantly larger than those calculated with the single-plane Simpson's method (P < 0.05). The overestimation of left ventricular volume by the single-plane area-length method, (36.34±17.98) ml, and the biplane area-length method, (29.34±15.59) ml, was more pronounced than that by the single-plane Simpson's method, (3.83±8.48) ml. Linear regression analysis showed close correlations between the left ventricular volumes calculated with the single-plane area-length method, the biplane area-length method, and Simpson's method and the true volume (all r > 0.98). Conclusion: The single-plane Simpson's method is more accurate than the single-plane and biplane area-length methods for left ventricular volume measurement; however, both the single-plane and biplane area-length methods could be used in clinical practice, especially in those imaging modality
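Simpson's method (the method of disks) can be sketched generically: slice the chamber along its long axis into equal-thickness disks, treat each cross-section as a circle of the measured diameter, and sum the disk volumes. The sphere used below is only a sanity check of the disk summation, not ventricular data.

```python
import math

def simpson_volume(diameters, length):
    """Method-of-disks (Simpson's) volume: equal-thickness circular disks
    summed along the long axis. diameters in cm, length in cm; returns
    volume in cm^3. Generic sketch, not a clinical implementation."""
    h = length / len(diameters)
    return sum(math.pi * (d / 2.0) ** 2 * h for d in diameters)

# sanity check with slices of a unit sphere (volume 4*pi/3)
n = 1000
diams = [2.0 * math.sqrt(max(0.0, 1 - (((i + 0.5) * 2 / n) - 1) ** 2))
         for i in range(n)]
v = simpson_volume(diams, length=2.0)
```

Because the disks follow the measured contour slice by slice, the method avoids the ellipsoid shape assumption that drives the area-length overestimation reported above.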
International Nuclear Information System (INIS)
Chi, Pai-Chun Melinda; Mawlawi, Osama; Luo Dershan; Liao Zhongxing; Macapinlac, Homer A.; Pan Tinsu
2008-01-01
Purpose: Patient respiratory motion can cause image artifacts in positron emission tomography (PET) from PET/computed tomography (CT) and change the quantification of PET for thoracic patients. In this study, respiration-averaged CT (ACT) was used to remove the artifacts, and the changes in standardized uptake value (SUV) and gross tumor volume (GTV) were quantified. Methods and Materials: We incorporated the ACT acquisition in a PET/CT session for 216 lung patients, generating two PET/CT data sets for each patient. The first data set (PET(HCT)/HCT) contained the clinical PET/CT in which PET was attenuation corrected with a helical CT (HCT). The second data set (PET(ACT)/ACT) contained the PET/CT in which PET was corrected with ACT. We quantified the differences between the two data sets in image alignment, maximum SUV (SUVmax), and GTV contours. Results: Of the patients, 68% demonstrated respiratory artifacts in the PET(HCT), and for all patients the artifact was removed or reduced in the corresponding PET(ACT). The impact of the respiration artifact was worst for lesions smaller than 50 cm³ and located below the dome of the diaphragm. For lesions in this group, the mean SUVmax difference, GTV volume change, shift in GTV centroid location, and concordance index were 21%, 154%, 2.4 mm, and 0.61, respectively. Conclusion: This study benchmarked the differences between PET data with and without artifacts. It is important to pay attention to the potential existence of these artifacts during GTV contouring, as such artifacts may increase the uncertainties in the lesion volume and the centroid location.
Chemical Method of Urine Volume Measurement
Petrack, P.
1967-01-01
A system has been developed and qualified as flight hardware for the measurement of micturition volumes voided by crewmen during Gemini missions. This Chemical Urine Volume Measurement System (CUVMS) is used for obtaining samples of each micturition for post-flight volume determination and laboratory analysis for chemical constituents of physiological interest. The system is versatile with respect to volumes measured, with a capacity beyond the largest micturition expected to be encountered, and with respect to mission duration of inherently indefinite length. The urine sample is used for the measurement of total micturition volume by a tracer dilution technique, in which a fixed, predetermined amount of tritiated water is introduced and mixed into the voided urine, and the resulting concentration of the tracer in the sample is determined with a liquid scintillation spectrometer. The tracer employed does not interfere with the analysis for the chemical constituents of the urine. The CUVMS hardware consists of a four-way selector valve in which an automatically operated tracer metering pump is incorporated, a collection/mixing bag, and tracer storage accumulators. The assembled system interfaces with a urine receiver at the selector valve inlet, sample bags which connect to the side of the selector valve, and a flexible hose which carries the excess urine to the overboard drain connection. Results of testing have demonstrated system volume measurement accuracy within the specification limits of +/-5%, and operating reliability suitable for system use aboard the GT-7 mission, in which it was first used.
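The tracer-dilution arithmetic underlying the CUVMS is simple; a sketch with hypothetical numbers (not flight calibration data):

```python
def micturition_volume(tracer_amount, sample_concentration, tracer_volume=0.0):
    """Tracer dilution: a fixed, predetermined tracer amount A0 mixed into
    an unknown void volume V gives a sample concentration
    c = A0 / (V + v_tracer), hence V = A0 / c - v_tracer."""
    return tracer_amount / sample_concentration - tracer_volume
```

For example, a fixed 100 (arbitrary activity units) of tracer yielding a measured sample concentration of 0.25 units/ml implies a 400 ml void.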
International Nuclear Information System (INIS)
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.
2013-01-01
Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse-mesh strategy that progressively refines the nodes in appropriate regions of the domain in order to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized for checking the approximate solutions for the various nodes, with the relative surface net leakage of the nodes taken as the assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine the gradients of the node-surface fluxes and thereby explore possible node refinements in appropriate regions and directions of the problem. The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average-current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain or increasing the number of unknowns. Some well-known benchmarks have been investigated and improvements are reported
Method for the evaluation of average glandular dose in mammography
International Nuclear Information System (INIS)
Okunade, Akintunde Akangbe
2006-01-01
This paper concerns a method for accurate evaluation of the average glandular dose (AGD) in mammography. At different energies, the interactions of photons with tissue are not uniform. Thus, optimal accuracy in the estimation of AGD is achievable when the evaluation is carried out using normalized glandular dose values, g(x,E), determined for each (monoenergetic) x-ray photon energy E, compressed breast thickness (CBT) x, and breast glandular composition, together with data on the photon energy distribution of the exact x-ray beam used in breast imaging. A generalized model for the values of g(x,E) was developed that applies to any arbitrary CBT from 2 to 9 cm, including non-integer values (say, 4.2 cm). Along with other dosimetry formulations, this was integrated into a computer software program, GDOSE.FOR, developed for the evaluation of the AGD received from any x-ray tube/equipment (irrespective of target-filter combination) of up to 50 kVp. Results are presented which show that the implementation of GDOSE.FOR yields values of normalized glandular dose in good agreement with values obtained from methodologies reported earlier in the literature. With the availability of a portable device for real-time acquisition of spectra, the model and computer software reported in this work provide for the routine evaluation of the AGD received by a specific woman of known age and CBT
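The spectrum-weighting step described above can be sketched as follows (a generic illustration of g(x,E) weighting, not the GDOSE.FOR algorithm; the spectrum, the g function, and the incident air kerma are assumed inputs):

```python
def average_glandular_dose(energies, spectrum, g, incident_kerma):
    """Weight the monoenergetic normalized glandular dose g(E) for a fixed
    CBT by the relative photon spectrum, then scale by the measured
    incident air kerma. `g` maps energy (keV) to normalized glandular
    dose per unit incident kerma."""
    total = sum(spectrum)
    g_eff = sum(w * g(e) for e, w in zip(energies, spectrum)) / total
    return g_eff * incident_kerma
```

With a monoenergetic-equivalent g and a measured spectrum, the result reduces to the familiar "g factor times incident kerma" form used in conventional AGD protocols.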
Kemaneci, E.H.; Carbone, E.A.D.; Booth, J.P.; Graef, W.A.A.D.; Dijk, van J.; Kroesen, G.M.W.
An inductively coupled radio-frequency plasma in chlorine is investigated via a global (volume-averaged) model, both in continuous and square wave modulated power input modes. After the power is switched off (in a pulsed mode) an ion–ion plasma appears. In order to model this phenomenon, a novel
Design of a micro-irrigation system based on the control volume method
Directory of Open Access Journals (Sweden)
Chasseriaux G.
2006-01-01
Full Text Available A micro-irrigation system design based on the control volume method using the back-step procedure is presented in this study. The proposed numerical method is simple and consists of delimiting an elementary volume of the lateral equipped with an emitter, called a "control volume", to which the conservation equations of fluid hydrodynamics are applied. The control volume method is an iterative method that calculates velocity and pressure step by step throughout the micro-irrigation network, starting from an assumed pressure at the end of the line. A simple microcomputer program was used for the calculation, and convergence was very fast. Once the average water requirement of the plants has been estimated, it is easy to take the sum of the average emitter discharges as the total average flow rate of the network. The design consists of exploring an economical and efficient network that delivers the input flow rate uniformly to all emitters. This program permitted the design of a large complex network of thousands of emitters very quickly. Three subroutines calculate velocity and pressure along the lateral and submain pipes. The control volume method had already been tested for lateral design, with results validated by other methods such as the finite element method, so it makes it possible to determine the optimal design for such a micro-irrigation network
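The back-step iteration itself is short; a minimal sketch (the emitter law q = k·p^x and the simplified quadratic friction term are illustrative assumptions, not the paper's exact hydraulics):

```python
def backstep(p_end, n_emitters, spacing, k_emitter, x_exp, friction_coeff):
    """March upstream from an assumed end-of-line pressure: at each control
    volume, add the local emitter discharge q = k*p^x to the running flow,
    then add the head loss of the upstream pipe segment (here a simple
    h = C*Q^2*L law, a stand-in for a Darcy-Weisbach-type formula)."""
    p, q_total = p_end, 0.0
    pressures = []  # node pressures, downstream end first
    for _ in range(n_emitters):
        q_total += k_emitter * p**x_exp          # emitter discharge at this node
        pressures.append(p)
        p += friction_coeff * q_total**2 * spacing  # head gained moving upstream
    return pressures, q_total
```

If the computed inlet pressure or total discharge misses the target, the assumed end pressure is adjusted and the march repeated, which is the iterative character the abstract describes.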
Directory of Open Access Journals (Sweden)
Konings Maurits K
2012-08-01
Full Text Available Abstract Background In this paper a new non-invasive, operator-free, continuous ventricular stroke volume monitoring device (Hemodynamic Cardiac Profiler, HCP) is presented, which measures the average stroke volume (SV) for each period of 20 seconds, as well as ventricular volume-time curves for each cardiac cycle, using a new electric method (Ventricular Field Recognition) with six independent electrode pairs distributed over the frontal thoracic skin. In contrast to existing non-invasive electric methods, our method does not use the algorithms of impedance or bioreactance cardiography. Instead, it is based on specific 2D spatial patterns on the thoracic skin, representing the distribution over the thorax of changes in the applied current field caused by cardiac volume changes during the cardiac cycle. Since total heart volume variation during the cardiac cycle is a poor indicator of ventricular stroke volume, our HCP separates atrial filling effects from ventricular filling effects, and retrieves the volume changes of only the ventricles. Methods Ex-vivo experiments on a post-mortem human heart were performed to measure the effects of increasing the blood volume inside the ventricles in isolation, leaving the atrial volume invariant (which cannot be done in-vivo). These effects were measured as a specific 2D pattern of voltage changes on the thoracic skin. Furthermore, a working prototype of the HCP was developed that uses these ex-vivo results in an algorithm to decompose voltage changes, measured in-vivo by the HCP on the thoracic skin of a human volunteer, into an atrial component and a ventricular component, in almost real time (with a delay of at most 39 seconds). The HCP prototype was tested in-vivo on 7 human volunteers, using G-suit inflation and deflation to provoke stroke volume changes, and LVOT Doppler as a reference technique. Results The ex-vivo measurements showed that ventricular filling
Energy Technology Data Exchange (ETDEWEB)
Alexoff, David L., E-mail: alexoff@bnl.gov; Dewey, Stephen L.; Vaska, Paul; Krishnamoorthy, Srilalan; Ferrieri, Richard; Schueller, Michael; Schlyer, David J.; Fowler, Joanna S.
2011-02-15
Introduction: PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes ({sup 18}F, {sup 11}C, {sup 13}N) escape while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Methods: Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides {sup 18}F, {sup 11}C and {sup 13}N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Results: Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean{+-}S.D.) escaping the leaf parenchyma were measured to be 59{+-}1.1%, 64{+-}4.4% and 67{+-}1.9% for {sup 18}F, {sup 11}C and {sup 13}N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. Conclusions: The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
Limit cycles from a cubic reversible system via the third-order averaging method
Directory of Open Access Journals (Sweden)
Linping Peng
2015-04-01
Full Text Available This article concerns the bifurcation of limit cycles from a cubic integrable and non-Hamiltonian system. By using the averaging theory of the first and second orders, we show that under any small cubic homogeneous perturbation, at most two limit cycles bifurcate from the period annulus of the unperturbed system, and this upper bound is sharp. By using the averaging theory of the third order, we show that two is also the maximal number of limit cycles emerging from the period annulus of the unperturbed system.
Thermodynamic Integration Methods, Infinite Swapping and the Calculation of Generalized Averages
Doll, J. D.; Dupuis, P.; Nyquist, P.
2016-01-01
In the present paper we examine the risk-sensitive and sampling issues associated with the problem of calculating generalized averages. By combining thermodynamic integration and Stationary Phase Monte Carlo techniques, we develop an approach for such problems and explore its utility for a prototypical class of applications.
Robinson, J C; Luft, H S
1985-12-01
A variety of recent proposals rely heavily on market forces as a means of controlling hospital cost inflation. Sceptics argue, however, that increased competition might lead to cost-increasing acquisitions of specialized clinical services and other forms of non-price competition as means of attracting physicians and patients. Using data from hospitals in 1972 we analyzed the impact of market structure on average hospital costs, measured in terms of both cost per patient and cost per patient day. Under the retrospective reimbursement system in place at the time, hospitals in more competitive environments exhibited significantly higher costs of production than did those in less competitive environments.
Acker, James G.; Uz, Stephanie Schollaert; Shen, Suhung; Leptoukh, Gregory G.
2010-01-01
Application of appropriate spatial averaging techniques is crucial to the correct evaluation of ocean color radiometric data, owing to the common log-normal or mixed log-normal distribution of these data. The averaging method is particularly crucial for data acquired in coastal regions. The effect of the averaging method was markedly demonstrated by a precipitation-driven event on the U.S. Northeast coast in October-November 2005, which resulted in the export of high concentrations of riverine colored dissolved organic matter (CDOM) to New York and New Jersey coastal waters over a period of several days. Use of the arithmetic mean averaging method created an inaccurate representation of the magnitude of this event in SeaWiFS global mapped chl a data, causing it to be visualized as a very large chl a anomaly. The apparent chl a anomaly was enhanced by the known incomplete discrimination of CDOM and phytoplankton chlorophyll in SeaWiFS data; other data sources enable an improved characterization. Analysis using the geometric mean averaging method did not indicate this event to be statistically anomalous. Our results demonstrate the need to provide the geometric mean averaging method for ocean color radiometric data in the Goddard Earth Sciences DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni).
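The distinction between the two averaging methods can be reproduced in a few lines (a generic illustration with synthetic log-normal data, not SeaWiFS values):

```python
import math
import random

def geometric_mean(xs):
    """exp of the mean of logs -- the appropriate central tendency for
    log-normally distributed quantities such as chlorophyll-a."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

random.seed(0)
# Synthetic log-normal "chl a" sample, mimicking the skewed distributions
# typical of ocean color radiometric data.
sample = [random.lognormvariate(0.0, 1.0) for _ in range(1000)]
am = sum(sample) / len(sample)
gm = geometric_mean(sample)
# For lognormal(mu=0, sigma=1): median (= geometric mean) is 1,
# while the arithmetic mean is e^{1/2} ~ 1.65, pulled up by the long tail.
```

The arithmetic mean is systematically inflated by the high tail, which is why a few extreme CDOM-contaminated pixels can dominate an arithmetic-mean composite while leaving the geometric mean nearly unchanged.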
A new method to estimate the atomic volume of ternary intermetallic compounds
International Nuclear Information System (INIS)
Pani, M.; Merlo, F.
2011-01-01
The atomic volume of an A_xB_yC_z ternary intermetallic compound can be calculated starting from the volumes of appropriately chosen A-B, A-C and B-C binary phases. The three methods of Colinet, Muggianu and Kohler, originally used to estimate thermodynamic quantities, and a new method proposed here were tested by deriving volume data in eight systems containing 91 ternary phases of known structure. The comparison between experimental and calculated volume values shows the best agreement for both the Kohler method and the newly proposed procedure. -- Graphical abstract: Synopsis: the volume of a ternary intermetallic compound can be calculated starting from the volumes of binary phases selected by the methods of Colinet, Muggianu and Kohler, or by a new method proposed here; the values so obtained are compared with the experimental ones for eight ternary systems. Research highlights: → The application of thermodynamic methods to a crystallochemical problem. → The prediction of the average atomic volume of ternary intermetallic phases. → The proposal of a new procedure for selecting the proper starting set of binary phases.
Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks
Directory of Open Access Journals (Sweden)
Shen-Chun Wu
2003-01-01
Full Text Available This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior was observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.
Directory of Open Access Journals (Sweden)
Okuda Miyuki
2012-09-01
Full Text Available Abstract Introduction We were able to treat a patient with an acute exacerbation of chronic obstructive pulmonary disease, who also suffered from sleep-disordered breathing, by using the average volume-assured pressure support mode of a Respironics V60 Ventilator (Philips Respironics, United States). This mode allows a target tidal volume to be set and met through automatic changes in inspiratory positive airway pressure, removing the need to change the noninvasive positive pressure ventilation settings between daytime and sleep. The Respironics V60 Ventilator, in the average volume-assured pressure support mode, was attached to our patient and improved and stabilized his sleep-related hypoventilation by automatically adjusting the inspiratory support to within an acceptable range. Case presentation Our patient was a 74-year-old Japanese man who was hospitalized for treatment due to worsening dyspnea and hypoxemia. He was diagnosed with an acute exacerbation of chronic obstructive pulmonary disease, and full-time biphasic positive airway pressure support ventilation was initiated. Our patient was temporarily provided with portable noninvasive positive pressure ventilation at night following an improvement in his condition, but his chronic obstructive pulmonary disease worsened again due to the recurrence of a respiratory infection. During the initial exacerbation, his tidal volume was significantly lower during sleep (378.9 ± 72.9 mL) than while awake (446.5 ± 63.3 mL). A ventilator that maintains ventilation by automatically adjusting the inspiratory force to within an acceptable range, a function associated with the Respironics V60 Ventilator, was attached in average volume-assured pressure support mode, improving his sleep-related hypoventilation. Polysomnography performed while our patient was on noninvasive positive pressure ventilation revealed obstructive sleep apnea syndrome (apnea-hypopnea index = 14), suggesting that his chronic
The spectral volume method as applied to transport problems
International Nuclear Information System (INIS)
McClarren, Ryan G.
2011-01-01
We present a new spatial discretization for transport problems: the spectral volume method. This method, first developed by Wang for computational fluid dynamics, divides each computational cell into several sub-cells and enforces particle balance on each of these sub-cells. These sub-cells are also used to build a polynomial reconstruction within the cell. The idea of dividing cells into sub-cells is a generalization of the simple corner balance and other similar schemes. The spectral volume method preserves particle conservation and the asymptotic diffusion limit. We present results from the method on two transport problems in slab geometry using discrete ordinates and second- through sixth-order spectral volume schemes. The numerical results demonstrate the accuracy of the spectral volume method and its preservation of the diffusion limit. Future work will explore possible benefits of the scheme for high-performance computing and for resolving diffusive boundary layers. (author)
Bernhardt, Jase; Carleton, Andrew M.
2018-05-01
The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
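The two averaging conventions are easy to state precisely; a minimal sketch with a synthetic, deliberately asymmetric diurnal curve (illustrative, not station data):

```python
import math

def twice_daily_average(hourly):
    """Conventional daily mean: (Tmax + Tmin) / 2."""
    return (max(hourly) + min(hourly)) / 2.0

def hourly_average(hourly):
    """Mean of the 24 hourly observations."""
    return sum(hourly) / len(hourly)

# Asymmetric diurnal curve: a brief afternoon peak over a long cool base,
# so the daily maximum is unrepresentative of most of the day.
temps = [10.0 + 8.0 * math.exp(-((h - 15) ** 2) / 18.0) for h in range(24)]
```

Here (Tmax+Tmin)/2 is about 14.0 while the hourly mean is about 12.5; the sign and size of this gap depend on the asymmetry of the daily temperature curve, which is exactly what the surface and atmosphere variables in the study modulate.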
Investigation of 65Cu by means of the average resonance proton capture method
International Nuclear Information System (INIS)
Erlandsson, B.; Nilsson, K.; Piotrowski, J.
1979-01-01
The 64 Ni(p,γ) 65 Cu reaction has been studied in the proton energy range E sub(p) = 2.05 - 2.55 MeV. The gamma-ray spectra were recorded with a three-crystal pair spectrometer at proton energy differences of 19 keV covering the proton energy range. An average gamma-ray spectrum was formed by adding all the individual spectra after proper adjustment as a result of the alterations in proton energy. The intensities of the gamma rays to final states with known J sup(π)-values were tested against theoretical calculations based on the Hauser-Feshbach theory with good results and these made it possible to deduce further J sup(π)-values. (author)
Comparison of different preconditioners for nonsymmetric finite volume element methods
Energy Technology Data Exchange (ETDEWEB)
Mishev, I.D.
1996-12-31
We consider a few different preconditioners for the linear systems arising from the discretization of 3-D convection-diffusion problems with the finite volume element method. Their theoretical and computational convergence rates are compared and discussed.
Huang, Lei
2015-01-01
To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as the state variables. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied to gyro random noise modeling applications in which a fast and accurate ARMA modeling method is required. PMID:26437409
The efficiency of the centroid method compared to a simple average
DEFF Research Database (Denmark)
Eskildsen, Jacob Kjær; Kristensen, Kai; Nielsen, Rikke
Based on empirical data as well as a simulation study, this paper gives recommendations with respect to situations where a simple average of the manifest indicators can be used as a close proxy for the centroid method and when it cannot.
Three-dimensional reconstruction volume: a novel method for volume measurement in kidney cancer.
Durso, Timothy A; Carnell, Jonathan; Turk, Thomas T; Gupta, Gopal N
2014-06-01
The role of volumetric estimation is becoming increasingly important in the staging, management, and prognostication of benign and cancerous conditions of the kidney. We evaluated the use of three-dimensional reconstruction volume (3DV) in determining renal parenchymal volumes (RPV) and renal tumor volumes (RTV). We compared 3DV with the currently available methods of volume assessment and determined its interuser reliability. RPV and RTV were assessed in 28 patients who underwent robot-assisted laparoscopic partial nephrectomy for kidney cancer. Patients with a preoperative creatinine level of kidney pre- and postsurgery overestimated 3D reconstruction volumes by 15% to 102% and 12% to 101%, respectively. In addition, volumes obtained from 3DV displayed high interuser reliability regardless of experience. 3DV provides a highly reliable way of assessing kidney volumes. Given that 3DV takes into account visible anatomy, the differences observed using previously published methods can be attributed to the failure of geometry to accurately approximate kidney or tumor shape. 3DV provides a more accurate, reproducible, and clinically useful tool for urologists looking to improve patient care using analysis related to volume.
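For context, a common geometric formula that volumetric estimates of kidneys and tumors have traditionally relied on is the ellipsoid approximation (our illustration; the abstract does not name the specific geometric model used by the earlier methods it compares against):

```python
import math

def ellipsoid_volume(length, width, height):
    """Ellipsoid approximation for organ/tumor volume from three
    orthogonal diameters: V = (pi/6) * L * W * H (~ 0.524 * L * W * H)."""
    return math.pi / 6.0 * length * width * height
```

Because such formulas assume a regular shape, they can diverge substantially from a voxel-based 3D reconstruction whenever the kidney or tumor is irregular, which is consistent with the overestimation the study attributes to geometry.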
Study of runaway electrons using the conditional average sampling method in the Damavand tokamak
Energy Technology Data Exchange (ETDEWEB)
Pourshahab, B., E-mail: bpourshahab@gmail.com [University of Isfahan, Department of Nuclear Engineering, Faculty of Advance Sciences and Technologies (Iran, Islamic Republic of); Sadighzadeh, A. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of); Abdi, M. R., E-mail: r.abdi@phys.ui.ac.ir [University of Isfahan, Department of Physics, Faculty of Science (Iran, Islamic Republic of); Rasouli, C. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of)
2017-03-15
Some experiments studying runaway electron (RE) effects have been performed using the poloidal magnetic probe system installed around the plasma column in the Damavand tokamak. In these experiments, so-called runaway-dominated discharges were considered, in which the main part of the plasma current is carried by REs. The induced magnetic effects on the poloidal pickup coil signals are observed simultaneously with the Parail–Pogutse instability moments for REs and hard X-ray bursts. The output signals of all diagnostic systems enter the data acquisition system at a sampling rate of 2 Msamples/s per channel. The temporal evolution of the diagnostic signals is analyzed by the conditional average sampling (CAS) technique. The CASed profiles indicate RE collisions with the high-field-side plasma-facing components at the instability moments. The investigation has been carried out for two discharge modes, low-toroidal-field (LTF) and high-toroidal-field (HTF), corresponding to the lower and upper limits of the toroidal magnetic field in the Damavand tokamak; their comparison has shown that RE confinement is better in HTF discharges.
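At its core, the CAS technique reduces to averaging fixed-length signal windows locked to a trigger condition (here, instability moments or hard X-ray bursts); a generic sketch, not the Damavand analysis code:

```python
def conditional_average(signal, trigger_indices, half_window):
    """Average fixed-length windows of `signal` centred on each trigger
    event; events too close to the record edges are skipped."""
    windows = [signal[i - half_window:i + half_window + 1]
               for i in trigger_indices
               if half_window <= i < len(signal) - half_window]
    n = len(windows)
    # Average column-wise across all accepted windows.
    return [sum(col) / n for col in zip(*windows)]
```

Features phase-locked to the trigger survive the averaging, while uncorrelated fluctuations are suppressed roughly as 1/sqrt(n), which is what lets the CASed profiles reveal the RE collision signature.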
Directory of Open Access Journals (Sweden)
Ling Kang
2017-03-01
Full Text Available Compared to a hydrostatic hydrodynamic model, a non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations, but its low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block-decomposition computation is utilized: the original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique, and two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread-lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency, saving 20%–40% of computation time compared to serial computing. The parallel acceleration rate and acceleration efficiency are approximately 1.45 and 72%, respectively. The parallel computing method thus contributes to the wider applicability of non-hydrostatic models.
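The block-decomposition pattern, minus the hydrodynamics, can be sketched in a few lines; this is a generic worker-per-subdomain reduction, not the paper's solver, and Python threads are used purely to illustrate the decompose/compute/join structure:

```python
import threading

def parallel_sum(data, n_threads=2):
    """Block decomposition sketch: split the domain into equal contiguous
    blocks, assign one worker thread per block, synchronise with join(),
    then combine the per-block results.
    Assumes len(data) is divisible by n_threads for simplicity."""
    results = [0.0] * n_threads
    block = len(data) // n_threads

    def worker(k):
        results[k] = sum(data[k * block:(k + 1) * block])

    threads = [threading.Thread(target=worker, args=(k,)) for k in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()          # synchronisation point before combining
    return sum(results)
```

In the paper the two subdomains additionally exchange virtual-boundary values every time step via a producer-consumer handshake; here the join() stands in for that synchronisation.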
Computational Methods in Stochastic Dynamics Volume 2
Stefanou, George; Papadopoulos, Vissarion
2013-01-01
The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology. This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...
International Nuclear Information System (INIS)
Lerner, A.M.
1986-01-01
The first step towards evaluation of the neutron flux throughout a fuel cluster usually consists of obtaining the multigroup flux distribution in the average pin cell and in the circular outside system of shroud and bulk moderator. Here, an application of the so-called heterogeneous response method (HRM) is described to find this multigroup flux. The rather complex geometry is reduced to a microsystem, the average pin cell, and the outside or macrosystem of shroud and bulk moderator. In each of these systems, collision probabilities are used to obtain their response fluxes caused by sources and in-currents. The two systems are then coupled by cosine currents across that fraction of the average pin-cell boundary, called 'window', that represents the average common boundary between pin cells and the outside system. (author)
EXTRAPOLATION METHOD FOR MAXIMAL AND 24-H AVERAGE LTE TDD EXPOSURE ESTIMATION.
Franci, D; Grillo, E; Pavoncello, S; Coltellacci, S; Buccella, C; Aureli, T
2018-01-01
The Long-Term Evolution (LTE) system represents the evolution of the Universal Mobile Telecommunication System technology. This technology introduces two duplex modes: Frequency Division Duplex and Time Division Duplex (TDD). Although LTE TDD has experienced limited expansion in European countries since the debut of the LTE technology, renewed commercial interest in it has recently been shown. Therefore, the development of extrapolation procedures optimised for TDD systems becomes crucial, especially for the regulatory authorities. This article presents an extrapolation method aimed to assess the exposure to LTE TDD sources, based on the detection of the Cell-Specific Reference Signal power level. The method introduces a βTDD parameter intended to quantify the fraction of the LTE TDD frame duration reserved for downlink transmission. The method has been validated by experimental measurements performed on signals generated by both a vector signal generator and a test Base Transceiver Station installed at the Linkem S.p.A. facility in Rome. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Hydrothermal analysis in engineering using control volume finite element method
Sheikholeslami, Mohsen
2015-01-01
Control volume finite element methods (CVFEM) bridge the gap between finite difference and finite element methods, using the advantages of both methods for simulation of multi-physics problems in complex geometries. In Hydrothermal Analysis in Engineering Using Control Volume Finite Element Method, CVFEM is covered in detail and applied to key areas of thermal engineering. Examples, exercises, and extensive references are used to show the use of the technique to model key engineering problems such as heat transfer in nanofluids (to enhance performance and compactness of energy systems),
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
2010-01-01
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and a plane wave in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
Volume Sculpting Using the Level-Set Method
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Christensen, Niels Jørgen
2002-01-01
In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. ... A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
The volume of fluid method in spherical coordinates
Janse, A.M.C.; Janse, A.M.C.; Dijk, P.E.; Kuipers, J.A.M.
2000-01-01
The volume of fluid (VOF) method is a numerical technique to track the developing free surfaces of liquids in motion. This method can, for example, be applied to compute the liquid flow patterns in a rotating cone reactor. For this application a spherical coordinate system is most suited. The novel
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization are the high acquisition costs of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
International Nuclear Information System (INIS)
George, J.L.
1988-04-01
One of the measurement needs of US Department of Energy (DOE) remedial action programs is the estimation of the annual-average indoor radon-daughter concentration (RDC) in structures. The filtered alpha-track method, using a 1-year exposure period, can be used to accomplish RDC estimations for the DOE remedial action programs. This manual describes the procedure used to obtain filtered alpha-track measurements and to derive average RDC estimates from the measurements. Appropriate quality-assurance and quality-control programs are also presented. The 'prompt' alpha-track method of exposing monitors for 2 to 6 months during specific periods of the year is also briefly discussed in this manual. However, the prompt alpha-track method has been validated only for use in the Mesa County, Colorado, area. 3 refs., 3 figs
New simple method for fast and accurate measurement of volumes
International Nuclear Information System (INIS)
Frattolillo, Antonio
2006-01-01
A new simple method is presented, which allows us to measure in just a few minutes, but with reasonable accuracy (error below 1%), the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. To this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. Such a variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, as carefully measured by means of a high-precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, contracting the variable volume until the pressure exactly retrieves its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The method proposed does not actually require the test gas to be rigorously held at a constant temperature, resulting in a huge simplification as compared to the complex arrangements commonly used in metrology (gas expansion method), which can grant extremely accurate measurements but require rather expensive equipment and time-consuming procedures, and are therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process.
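For an isothermal ideal gas, the bookkeeping behind this variable-volume technique reduces to a one-line balance: when the servo has restored the initial pressure, the contraction of the variable volume equals the unknown volume. The numbers below are invented for illustration.

```python
# Hedged sketch of the constant-pressure balance (ideal isothermal gas;
# names and numbers are illustrative, not the paper's).
def unknown_volume(v_variable_initial, v_variable_final):
    """When the pressure is servoed back to its starting value P0,
    P0*Vv = P0*(Vv - dV + Vu)  =>  Vu = dV (the bellows contraction)."""
    return v_variable_initial - v_variable_final

# Example: a 2.000 L variable volume contracted to 1.250 L restores P0,
# so the enclosure (including its dead spaces) holds 0.750 L.
vu = unknown_volume(2.000, 1.250)
```

This is why dead spaces drop out: every cavity reachable by the gas is simply part of Vu.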
Cellwise conservative unsplit advection for the volume of fluid method
DEFF Research Database (Denmark)
Comminal, Raphaël; Spangenberg, Jon; Hattel, Jesper Henri
2015-01-01
We present a cellwise conservative unsplit (CCU) advection scheme for the volume of fluid (VOF) method in 2D. Contrary to other schemes based on explicit calculations of the flux balances, the CCU advection adopts a cellwise approach where the pre-images of the control volumes are traced. ... non-overlapping donating regions and pre-images with conforming edges to their neighbors, resulting in the conservativeness and the boundedness (liquid volume fraction inside the interval [0, 1]) of the CCU advection scheme. Finally, the update of the liquid volume fractions is computed from the intersections of the pre-image polygons with the reconstructed interfaces. The CCU scheme is tested on several benchmark tests for the VOF advection, together with the standard piecewise linear interface calculation (PLIC). The geometrical errors of the CCU compare favorably with other unsplit VOF-PLIC schemes. Finally, potential...
International Nuclear Information System (INIS)
Carreira, M.
1965-01-01
As a working method for the determination of changes in molecular mass that may occur by irradiation (pyrolytic-radiolytic decomposition) of polyphenyl reactor coolants, a cryoscopic technique has been developed which associates the basic simplicity of Beckmann's method with some experimental refinements taken from the equilibrium methods. A total of 18 runs were made on samples of naphthalene, biphenyl, and the commercial mixtures OM-2 (Progil) and Santowax-R (Monsanto), with an average deviation from the theoretical molecular mass of 0.6%. (Author) 7 refs
Directory of Open Access Journals (Sweden)
Yao-Ching Wang
Respiratory motion causes uncertainties in tumor edges on either computed tomography (CT) or positron emission tomography (PET) images and causes misalignment when registering PET and CT images. This phenomenon may cause radiation oncologists to delineate tumor volume inaccurately in radiotherapy treatment planning. The purpose of this study was to analyze radiology applications using interpolated average CT (IACT) for attenuation correction (AC) to diminish the occurrence of this scenario. Thirteen non-small cell lung cancer patients were recruited for the present comparison study. Each patient had full-inspiration and full-expiration CT images and free-breathing PET images from an integrated PET/CT scan. IACT for AC in PET(IACT) was used to reduce the PET/CT misalignment. The standardized uptake value (SUV) correction with a low radiation dose was applied, and its tumor volume delineation was compared to those from HCT/PET(HCT). The misalignment between the PET(IACT) and IACT was reduced when compared to the difference between PET(HCT) and HCT. The range of tumor motion was from 4 to 17 mm in the patient cohort. For HCT and PET(HCT), correction was from 72% to 91%, while for IACT and PET(IACT), correction was from 73% to 93% (*p<0.0001). The maximum and minimum differences in SUVmax were 0.18% and 27.27% for PET(HCT) and PET(IACT), respectively. The largest percentage differences in the tumor volumes between HCT/PET and IACT/PET were observed in tumors located in the lowest lobe of the lung. Internal tumor volume defined by functional information using IACT/PET(IACT) fusion images for lung cancer would reduce the inaccuracy of tumor delineation in radiation therapy planning.
Teaching Thermal Hydraulics & Numerical Methods: An Introductory Control Volume Primer
Energy Technology Data Exchange (ETDEWEB)
Lucas, D.S.
2004-10-03
This paper covers the basics of the implementation of the control volume method in the context of the Homogeneous Equilibrium Model (HEM)(T/H) code using the conservation equations of mass, momentum, and energy. This primer uses the advection equation as a template. The discussion will cover the basic equations of the control volume portion of the course in the primer, which includes the advection equation, numerical methods, along with the implementation of the various equations via FORTRAN into computer programs and the final result for a three equation HEM code and its validation.
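The advection-equation template the primer builds on can be sketched with a first-order upwind control-volume update. This is illustrative Python rather than the primer's FORTRAN, and the grid, speed, and Courant number are arbitrary choices.

```python
import numpy as np

# Minimal control-volume (first-order upwind) discretization of the 1-D
# advection equation u_t + a*u_x = 0 on a periodic domain. Each cell's
# update is the difference of the fluxes through its two faces.
def advect(u, a, dx, dt, steps):
    c = a * dt / dx                       # Courant number; stable for c <= 1
    for _ in range(steps):
        u = u - c * (u - np.roll(u, 1))   # upwind flux difference (a > 0)
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # square pulse
u1 = advect(u0.copy(), a=1.0, dx=0.01, dt=0.005, steps=100)  # c = 0.5
```

Because each interior flux appears once with each sign, the scheme conserves the total of u exactly, which is the defining property of the control-volume approach; the upwind choice also keeps the solution bounded (at the price of numerical diffusion).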
Directory of Open Access Journals (Sweden)
Hyunwoo Lee
2018-01-01
Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments due to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could develop such a monitoring system. Although SCG has been presented with a lower accuracy, this novel cardiac indicator has been steadily proposed over traditional methods such as electrocardiography (ECG). Thus, it is necessary to develop an enhanced method by combining the significant cardiac indicators. In this study, the six-axis signals of accelerometer and gyroscope were measured and integrated by the L2 normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The waveforms of accelerometer and gyroscope were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions. Their SCG was measured during the task. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) the ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with the previous SCG method that employs fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.
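The fuse-then-find-dominant-frequency idea can be sketched on synthetic accelerometer data. All signal parameters below are invented, and the study's MKCG pipeline, gyroscope handling, and band limits are richer than this.

```python
import numpy as np

# Sketch: three accelerometer axes fused by an L2 norm, heart rate read
# off as the dominant frequency in a plausible band (synthetic data).
fs = 100.0                                       # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
f_heart = 1.2                                    # 72 bpm ground truth
# gravity offset on one axis plus small cardiac vibrations on all three
ax = 1.0 + 0.10 * np.sin(2 * np.pi * f_heart * t)
ay = 0.05 * np.sin(2 * np.pi * f_heart * t + 1.0)
az = 0.05 * np.sin(2 * np.pi * f_heart * t + 2.0)

signal = np.linalg.norm(np.stack([ax, ay, az]), axis=0)   # L2-norm fusion
signal = signal - signal.mean()                  # drop the DC component
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.0)             # assumed heart-rate band
bpm = 60.0 * freqs[band][np.argmax(spec[band])]
```

Restricting the search to a physiological band is what keeps motion and harmonic peaks from being mistaken for the heartbeat.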
International Nuclear Information System (INIS)
Kim, Jin Sub; An, Seok Chan; Ko, Tae Kuk; Chu, Yong
2016-01-01
A quench detection system for the KSTAR Poloidal Field (PF) coils is essential for stable operation, because the normal zone overheats when a quench occurs. Recently, a new voltage quench detection method, a combination of Central Difference Averaging (CDA) and Mutual Inductance Compensation (MIK), has been suggested and studied to compensate mutual inductive voltage more effectively than the conventional voltage detection method. For better cancellation of the mutual induction from adjacent coils by the CDA+MIK method in the KSTAR coil system, the balance coefficients of CDA must first be estimated and adjusted. In this paper, the balance coefficients of CDA for the KSTAR PF coils were numerically estimated. The estimated result was adopted and tested in simulation. The CDA method adopting balance coefficients effectively eliminated mutual inductive voltage, and it is expected to improve the performance of the CDA+MIK method for quench detection of the KSTAR PF coils
A Novel Grey Wave Method for Predicting Total Chinese Trade Volume
Directory of Open Access Journals (Sweden)
Kedong Yin
2017-12-01
The total trade volume of a country is an important way of appraising its international trade situation. A prediction based on trade volume will help enterprises arrange production efficiently and promote the sustainability of international trade. Because the total Chinese trade volume fluctuates over time, this paper proposes a Grey wave forecasting model with a Hodrick–Prescott filter (HP filter) to forecast it. This novel model first parses the time series into a long-term trend and a short-term cycle. Second, the model uses a general GM(1,1) to predict the trend term and the Grey wave forecasting model to predict the cycle term. Empirical analysis shows that the improved Grey wave prediction method provides a much more accurate forecast than the basic Grey wave prediction method, achieving better prediction results than the autoregressive moving average model (ARMA).
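The GM(1,1) trend forecaster at the core of the model can be sketched as follows. This is a minimal implementation on invented data; the paper additionally applies the HP filter and a Grey wave model for the cycle term.

```python
import numpy as np

# Minimal GM(1,1) Grey forecaster: accumulate the series, fit the Grey
# differential equation by least squares, forecast, then de-accumulate.
def gm11_forecast(x0, horizon):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # x0[k] = -a*z1[k] + b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                       # out-of-sample forecast

# Usage: a smoothly growing "trade volume" series (invented numbers)
hist = [100, 110, 121, 133, 146]
fc = gm11_forecast(hist, horizon=2)
```

On a near-geometric series like this, GM(1,1) recovers the growth rate closely, which is why it is used for the smooth trend term rather than the fluctuating cycle.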
Copper Mountain conference on iterative methods: Proceedings: Volume 2
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-10-01
This volume (the second of two) contains information presented during the last two days of the Copper Mountain Conference on Iterative Methods held April 9-13, 1996 at Copper Mountain, Colorado. Topics of the sessions held these two days include domain decomposition, Krylov methods, computational fluid dynamics, Markov chains, sparse and parallel basic linear algebra subprograms, multigrid methods, applications of iterative methods, equation systems with multiple right-hand sides, projection methods, and the Helmholtz equation. Selected papers indexed separately for the Energy Science and Technology Database.
International Nuclear Information System (INIS)
Kemaneci, Efe; Graef, Wouter; Rahimi, Sara; Van Dijk, Jan; Kroesen, Gerrit; Carbone, Emile; Jimenez-Diaz, Manuel
2015-01-01
A microwave-induced oxygen plasma is simulated using both stationary and time-resolved modelling strategies. The stationary model is spatially resolved and it is self-consistently coupled to the microwaves (Jimenez-Diaz et al 2012 J. Phys. D: Appl. Phys. 45 335204), whereas the time-resolved description is based on a global (volume-averaged) model (Kemaneci et al 2014 Plasma Sources Sci. Technol. 23 045002). We observe agreement of the global model data with several published measurements of microwave-induced oxygen plasmas for both continuous and modulated power inputs. Properties of the microwave plasma reactor are investigated, and the corresponding simulation data based on the two distinct models show agreement on the common parameters. The role of the square-wave modulated power input is also investigated within the time-resolved description. (paper)
Different partial volume correction methods lead to different conclusions
DEFF Research Database (Denmark)
Greve, Douglas N; Salat, David H; Bowen, Spencer L
2016-01-01
A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using...
International Nuclear Information System (INIS)
Um, Junshik; McFarquhar, Greg M.
2013-01-01
The optimal orientation averaging scheme (regular lattice grid scheme or quasi Monte Carlo (QMC) method), the minimum number of orientations, and the corresponding computing time required to calculate the average single-scattering properties (i.e., asymmetry parameter (g), single-scattering albedo (ω0), extinction efficiency (Qext), scattering efficiency (Qsca), absorption efficiency (Qabs), and scattering phase function at scattering angles of 90° (P11(90°)) and 180° (P11(180°))) within a predefined accuracy level (i.e., 1.0%) were determined for four different nonspherical atmospheric ice crystal models (Gaussian random sphere, droxtal, budding Bucky ball, and column) with maximum dimension D = 10 μm using the Amsterdam discrete dipole approximation at λ = 0.55, 3.78, and 11.0 μm. The QMC required fewer orientations and less computing time than the lattice grid. The calculations of P11(90°) and P11(180°) required more orientations than the calculations of integrated scattering properties (i.e., g, ω0, Qext, Qsca, and Qabs) regardless of the orientation averaging scheme. The fewest orientations were required for calculating g and ω0. The minimum number of orientations and the corresponding computing time for single-scattering calculations decreased with an increase of wavelength, whereas they increased with the surface-area ratio that defines particle nonsphericity. -- Highlights: •The number of orientations required to calculate the average single-scattering properties of nonspherical ice crystals is investigated. •Single-scattering properties of ice crystals are calculated using ADDA. •The quasi Monte Carlo method is more efficient than the lattice grid method for scattering calculations. •Single-scattering properties of ice crystals depend on a newly defined parameter called the surface-area ratio
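The two orientation-averaging schemes can be contrasted on a toy problem: a smooth test function stands in for the single-scattering properties, and a Fibonacci point set stands in for the QMC sequence. All sizes and the test function are illustrative.

```python
import numpy as np

# Toy orientation average over the unit sphere, two ways.
def f(theta, phi):                       # smooth orientation-dependent quantity
    return 1.0 + 0.3 * np.cos(theta) ** 2 + 0.1 * np.sin(theta) * np.cos(phi)

# (a) product lattice grid over (theta, phi), weighted by sin(theta)
nt, nph = 16, 32
th = (np.arange(nt) + 0.5) * np.pi / nt
ph = (np.arange(nph) + 0.5) * 2 * np.pi / nph
T, P = np.meshgrid(th, ph, indexing="ij")
w = np.sin(T)                            # solid-angle weight of each cell
grid_avg = np.sum(f(T, P) * w) / np.sum(w)

# (b) quasi-uniform point set (Fibonacci sphere as a QMC stand-in):
# equal-weight points, so no sin(theta) weighting is needed
n = 512
i = np.arange(n)
cos_t = 1 - 2 * (i + 0.5) / n            # uniform in cos(theta)
theta = np.arccos(cos_t)
phi = (2 * np.pi * i * 0.6180339887) % (2 * np.pi)   # golden-ratio spacing
qmc_avg = f(theta, phi).mean()
```

The exact average of this test function is 1.1; both schemes recover it, and the equal-weight point set avoids the clustering of lattice nodes near the poles, which is one reason QMC-style sampling needs fewer orientations.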
International Nuclear Information System (INIS)
Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro
2012-01-01
The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. The motion pattern was based on cos(t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A1) was set in the range of 0.0–12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm of a well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as the 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of the γ index was 98.18%, with criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose in moving condition was within ±0.7%. From the dose-area histograms on the film, the mean ± standard deviation of the dose covering 100% of the cross section of the target was 102.32 ± 1.20% (range, 100.59–103.49%). By contrast, the irradiated areas receiving more than 95% dose for A1 = 12 mm were 1.46 and 1.33 times larger than those for A1 = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.
Directory of Open Access Journals (Sweden)
Amjad Ali
2015-01-01
A new simple moving voltage average (SMVA) technique with a fixed-step direct-control incremental conductance method is introduced to reduce solar photovoltaic voltage (VPV) oscillation under nonuniform solar irradiation conditions. To evaluate and validate the performance of the proposed SMVA method in comparison with the conventional fixed-step direct-control incremental conductance method under extreme conditions, different scenarios were simulated. Simulation results show that in most cases SMVA gives better results with more stability than the traditional fixed-step direct-control INC, with faster tracking, reduced sustained oscillations, fast steady-state response, and robustness. The steady-state oscillations are almost eliminated because dP/dV is extremely small around the maximum power (MP) point, which verifies that the proposed method is suitable for standalone PV systems under extreme weather conditions, not only in terms of bus voltage stability but also in overall system efficiency.
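The SMVA idea of smoothing VPV before the incremental-conductance decision can be sketched as follows. The window size, step size, and the simplified form of the INC rule are illustrative assumptions, not the paper's tuned values.

```python
from collections import deque

# Sketch: moving-average-filtered voltage feeding a fixed-step
# incremental-conductance (INC) hill climb toward dP/dV = 0.
class SmvaInc:
    def __init__(self, window=8, step=0.5):
        self.buf = deque(maxlen=window)   # recent raw voltage samples
        self.step = step                  # fixed perturbation size, volts
        self.v_prev = self.p_prev = None

    def update(self, v_raw, i_meas):
        self.buf.append(v_raw)
        v = sum(self.buf) / len(self.buf)          # simple moving voltage average
        p = v * i_meas
        if self.v_prev is None:
            dv_ref = self.step                     # first sample: probe upward
        else:
            dp, dv = p - self.p_prev, v - self.v_prev
            # simplified INC rule: step toward increasing power
            dv_ref = self.step if (dv == 0 or dp / dv > 0) else -self.step
        self.v_prev, self.p_prev = v, p
        return dv_ref                              # perturbation applied to VPV
```

Averaging the voltage first means a single noisy irradiance spike moves the operating point by at most a fraction of one step, which is the mechanism behind the reduced oscillation reported above.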
A method of measuring a molten metal liquid pool volume
Garcia, G.V.; Carlson, N.M.; Donaldson, A.D.
1990-12-12
A method of measuring a molten metal liquid pool volume and in particular molten titanium liquid pools, including the steps of (a) generating an ultrasonic wave at the surface of the molten metal liquid pool, (b) shining a light on the surface of a molten metal liquid pool, (c) detecting a change in the frequency of light, (d) detecting an ultrasonic wave echo at the surface of the molten metal liquid pool, and (e) computing the volume of the molten metal liquid. 3 figs.
International Nuclear Information System (INIS)
George, J.L.
1986-04-01
The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell
Iterative algorithm for the volume integral method for magnetostatics problems
International Nuclear Information System (INIS)
Pasciak, J.E.
1980-11-01
Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order-of-magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double-layer dipole magnet are given. Error estimates for the linearized problem are also derived
Copper Mountain conference on iterative methods: Proceedings: Volume 1
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-10-01
This volume (one of two) contains information presented during the first three days of the Copper Mountain Conference on Iterative Methods held April 9-13, 1996 at Copper Mountain, Colorado. Topics of the sessions held these three days included nonlinear systems, parallel processing, preconditioning, sparse matrix test collections, first-order system least squares, Arnoldi's method, integral equations, software, Navier-Stokes equations, Euler equations, Krylov methods, and eigenvalues. The top three papers from a student competition are also included. Selected papers indexed separately for the Energy Science and Technology Database.
Directory of Open Access Journals (Sweden)
Liang Xue
2018-04-01
The characterization of flow in subsurface porous media is associated with high uncertainty. To better quantify the uncertainty of groundwater systems, it is necessary to consider the model uncertainty. Multi-model uncertainty analysis can be performed in the Bayesian model averaging (BMA) framework. However, BMA analysis via the Monte Carlo method is time consuming because it requires many forward model evaluations. A computationally efficient BMA analysis framework is proposed by using the probabilistic collocation method to construct a response surface model, where the log hydraulic conductivity field and hydraulic head are expanded into polynomials through Karhunen–Loève and polynomial chaos methods. A synthetic test is designed to validate the proposed response surface analysis method. The results show that the posterior model weight and the key statistics in the BMA framework can be accurately estimated. The relative errors of the mean and total variance in the BMA analysis results are just approximately 0.013% and 1.18%, respectively, but the proposed method can be 16 times more computationally efficient than the traditional BMA method.
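The BMA combination step itself is compact: posterior model weights come from the model evidences, and the BMA variance adds a within-model term and a between-model term. All numbers below are invented for illustration.

```python
import numpy as np

# Minimal Bayesian model averaging bookkeeping (illustrative inputs).
def bma(evidences, means, variances, priors=None):
    evidences = np.asarray(evidences, float)
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    priors = np.ones_like(evidences) if priors is None else np.asarray(priors, float)
    w = evidences * priors
    w = w / w.sum()                                # posterior model weights
    mean = np.sum(w * means)                       # BMA mean
    # total variance = within-model + between-model contributions
    total_var = np.sum(w * (variances + (means - mean) ** 2))
    return w, mean, total_var

w, m, v = bma(evidences=[0.6, 0.3, 0.1],
              means=[1.0, 2.0, 4.0],
              variances=[0.2, 0.2, 0.5])
```

The between-model term `(means - mean)**2` is what a single-model analysis misses; the response-surface approach in the abstract accelerates the expensive part, estimating the evidences and moments, not this combination step.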
Directory of Open Access Journals (Sweden)
Alexie M. F. Heimburger
2017-06-01
To effectively address climate change, aggressive mitigation policies need to be implemented to reduce greenhouse gas emissions. Anthropogenic carbon emissions are mostly generated from urban environments, where human activities are spatially concentrated. Improvements in uncertainty determinations and precision of measurement techniques are critical to permit accurate and precise tracking of emissions changes relative to the reduction targets. As part of the INFLUX project, we quantified carbon dioxide (CO2), carbon monoxide (CO) and methane (CH4) emission rates for the city of Indianapolis by averaging results from nine aircraft-based mass balance experiments performed in November-December 2014. Our goal was to assess the achievable precision of the aircraft-based mass balance method through averaging, assuming constant CO2, CH4 and CO emissions during a three-week field campaign in late fall. The averaging method leads to an emission rate of 14,600 mol/s for CO2, assumed to be largely fossil-derived for this period of the year, and 108 mol/s for CO. The relative standard error of the mean is 17% and 16% for CO2 and CO, respectively, at the 95% confidence level (CL), i.e. a more than 2-fold improvement from the previous estimate of ~40% for single-flight measurements for Indianapolis. For CH4, the averaged emission rate is 67 mol/s, while the standard error of the mean at 95% CL is large, i.e. ±60%. Given the results for CO2 and CO for the same flight data, we conclude that this much larger scatter in the observed CH4 emission rate is most likely due to variability of CH4 emissions, suggesting that the assumption of constant daily emissions is not correct for CH4 sources. This work shows that repeated measurements using aircraft-based mass balance methods can yield sufficient precision of the mean to inform emissions reduction efforts by detecting changes over time in urban emissions.
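The averaging statistics quoted above, a mean emission rate with a relative standard error of the mean at roughly the 95% confidence level, can be reproduced on invented flight data; the t-multiplier for nine flights is approximated here by 2.

```python
import math

# Sketch: mean emission rate over repeated flights and its relative
# ~95%-CL half-width (invented rates in mol/s, not INFLUX data).
def mean_and_rel_sem(rates, t95=2.0):
    n = len(rates)
    mean = sum(rates) / n
    var = sum((r - mean) ** 2 for r in rates) / (n - 1)   # sample variance
    sem = math.sqrt(var / n)                              # standard error of mean
    return mean, t95 * sem / mean                         # relative half-width

rates = [15200, 13900, 14800, 16100, 13500, 14200, 15600, 14900, 13400]
mean, rel = mean_and_rel_sem(rates)
```

Because the standard error shrinks as 1/sqrt(n), averaging nine flights roughly triples the precision of a single flight, which is the mechanism behind the 2-fold-plus improvement reported above; for CH4 the spread itself grows, so averaging cannot rescue the constant-emission assumption.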
Pleural liquid clearance rate measured in awake sheep by the volume of dilution method
International Nuclear Information System (INIS)
Broaddus, V.C.; Wiener-Kronish, J.P.; Berthiaume, Y.; Staub, N.C.
1986-01-01
The authors reported 24-h clearance of mock pleural effusions measured terminally in sheep. To measure effusion volume at different times in the same sheep, they injected 111In-transferrin and measured its dilution. In 5 sheep with effusions of known size, the method was accurate to +/-10%. In 5 awake sheep, the authors injected 10 ml/kg of a 1% protein solution via a non-penetrating rib capsule. At 6 h, the authors measured the volume by the dilution method and at 24 h by direct recovery. The clearance rate in each animal was constant at 2.9-6.0%/h (average 4.8 +/- 1.3%/h). This new method gives a reliable two-point clearance rate and requires fewer animals
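The volume-of-dilution principle in this record reduces to one line, assuming complete mixing of the tracer; the numbers below are hypothetical:

```python
def volume_by_dilution(injected_activity, sample_activity_per_ml):
    # After complete mixing, effusion volume = total injected tracer
    # activity / measured activity concentration of a withdrawn sample.
    return injected_activity / sample_activity_per_ml

# 1.0e6 counts of tracer mixed into an unknown effusion; a withdrawn
# aliquot reads 5.0e3 counts/ml, implying a 200 ml effusion
v_ml = volume_by_dilution(1.0e6, 5.0e3)
```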
Radionuclide method for blood volume determination in kidneys
International Nuclear Information System (INIS)
Trindev, P.; Nikolov, D.; Shejretova, E.; Garcheva-Tsacheva, M.
1989-01-01
The method is applied in nephrology for diagnosing changes in the blood circulation of the kidneys. The blood volume of each kidney is determined separately by perfusion angioscintigraphy (PAS) with improved accuracy. The method consists in intravenous injection of 300-450 MBq 99mTc for in-vivo labelling of the erythrocytes. About 30 images are registered every 2 sec, and perfusion histograms of the kidneys are derived from zones of interest. Ten minutes later, kidney images (one full-face and two profiles) are registered. Correction coefficients for kidney depth are derived, and the activities registered from the full-face images and the amplitudes of the perfusion histograms are corrected. The activity of 1 ml of blood is determined from a blood sample of the patient. The blood volume of each kidney is expressed as the ratio of the activity, corrected for background and depth, to the activity of 1 ml of blood from the sample. 1 claim
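The final ratio in this record is a one-line calculation; the sketch below uses invented counts and an invented depth-correction factor purely for illustration:

```python
def kidney_blood_volume_ml(kidney_counts, background_counts,
                           depth_factor, counts_per_ml_blood):
    # Background- and depth-corrected kidney activity divided by the
    # activity of 1 ml of the patient's blood (all values invented).
    corrected = (kidney_counts - background_counts) * depth_factor
    return corrected / counts_per_ml_blood

v_ml = kidney_blood_volume_ml(12_000, 2_000, 1.8, 150.0)
```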
Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi
2015-05-01
Accuracy is the most important factor supporting the reliability of forensic facial reconstruction (FFR) compared with the corresponding actual face. A number of methods have been employed to evaluate the objective accuracy of FFR. Recently, the degree of resemblance between a computer-generated FFR and the actual face has been measured by geometric surface comparison. In this study, three FFRs were produced from live adult Korean subjects using three-dimensional computerized modeling software. The deviations of the facial surfaces between each FFR and the head CT scan of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study which applied the same methodology except for the average facial soft tissue depth dataset. The three FFRs of this study, which applied the updated dataset, demonstrated smaller deviation errors between the facial surfaces of the FFR and the corresponding subject than those from the previous study. The results suggest that appropriate average tissue depth data are important for increasing the quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.
International Nuclear Information System (INIS)
Wang, Y.; Karolinska Hospital and Karolinska Inst., Stockholm; Jacobsson, H.; Jacobson, S.H.; Kimiaei, S.; Larsson, S.A.
1995-01-01
The distribution volume of an organ may have clinical impact in many cases, and various methods have been designed to make volume assessments. In this paper, we describe a new method for delineation of the distribution outline and volume determination. The method is based on smoothing, differentiation, image relaxation and voxel counting of single photon emission computed tomography (SPECT) image sets with 3-D operators. A special routine corrects for the inherent thickness of the voxel-based outline. Phantom experiments, using a SPECT system with an LEGP collimator and a 64x64 acquisition matrix with 6.3x6.3 mm² pixel size, demonstrated good correlation between the measured and the true volumes. For volumes larger than 120 cc the correlation coefficient was 0.9999 with SE 1.0 cc and an average relative deviation of 0.49%. For volumes below 120 cc, the accuracy was impaired due to low resolving power. By improving the system spatial resolution with an LEHR collimator and a smaller pixel size (4.1x4.1 mm²), good accuracy was achieved also for volumes in the range from 3 to 120 cc. Measurements of 15 differently shaped phantoms of volumes between 3 and 104 cc demonstrated high correlation between measured and true volumes: R=0.9921 and SE=0.74 cc (5.3%). For volumes as small as 3 and 5 cc, the difference between the true and the assessed volume was 0.6 cc. The reproducibility of the method was within 3% for volumes above 120 cc and within 7% for volumes below. Given this accuracy, we conclude that the method can be applied for various clinical routine and research applications using SPECT. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Yu, Lifeng, E-mail: yu.lifeng@mayo.edu; Vrieze, Thomas J.; Leng, Shuai; Fletcher, Joel G.; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)
2015-05-15
Purpose: The spatial resolution of iterative reconstruction (IR) in computed tomography (CT) is contrast- and noise-dependent because of the nonlinear regularization. Due to the severe noise contamination, it is challenging to perform precise spatial-resolution measurements at very low contrast levels. The purpose of this study was to measure the spatial resolution of a commercially available IR method using ensemble-averaged images acquired from repeated scans. Methods: A low-contrast phantom containing three rods (7, 14, and 21 HU below background) was scanned on a 128-slice CT scanner at three dose levels (CTDI{sub vol} = 16, 8, and 4 mGy). Images were reconstructed using two filtered-backprojection (FBP) kernels (B40 and B20) and a commercial IR method (sinogram affirmed iterative reconstruction, SAFIRE, Siemens Healthcare) with two strength settings (I40-3 and I40-5). The same scan was repeated 100 times at each dose level. The modulation transfer function (MTF) was calculated based on the edge profile measured on the ensemble-averaged images. Results: The spatial resolution of the two FBP kernels, B40 and B20, remained relatively constant across contrast and dose levels. However, the spatial resolution of the two IR kernels degraded relative to FBP as contrast or dose level decreased. At the 16 mGy dose level, the MTF{sub 50%} value normalized to the B40 kernel decreased from 98.4% at 21 HU to 88.5% at 7 HU for I40-3 and from 97.6% to 82.1% for I40-5. At 21 HU, the relative MTF{sub 50%} value decreased from 98.4% at 16 mGy to 90.7% at 4 mGy for I40-3 and from 97.6% to 85.6% for I40-5. Conclusions: A simple technique using ensemble averaging from repeated CT scans can be used to measure the spatial resolution of IR techniques in CT at very low contrast levels. The evaluated IR method degraded the spatial resolution at low contrast and high noise levels.
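The edge-profile-to-MTF computation mentioned in the Methods can be sketched with an idealized, noiseless edge. This is a generic MTF pipeline (differentiate the edge-spread function, Fourier-transform, normalize), not the authors' exact implementation:

```python
import numpy as np

def mtf_from_edge(esf):
    # Edge-spread -> line-spread (derivative) -> |FFT|, normalized to DC.
    lsf = np.gradient(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

x = np.linspace(-10, 10, 201)
esf = np.clip((x + 1.0) / 2.0, 0.0, 1.0)  # noiseless ramp edge
mtf = mtf_from_edge(esf)                  # MTF{sub 50%} is where this drops to 0.5
```

Ensemble averaging many repeated scans before this step suppresses the noise that would otherwise corrupt the derivative of the measured edge profile.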
Development of finite volume methods for fluid dynamics
International Nuclear Information System (INIS)
Delcourte, S.
2007-09-01
We aim to develop a finite volume method that applies to a wider class of meshes than other finite volume methods, which are restricted by orthogonality constraints. We build discrete differential operators over the three staggered tessellations needed for the construction of the method. These operators satisfy properties analogous to those of the continuous operators. First, the method is applied to the Div-Curl problem, which can be viewed as a building block of the Stokes problem. Then, the Stokes problem is dealt with under various boundary conditions. It is well known that when the computational domain is polygonal and non-convex, the order of convergence of numerical methods deteriorates. Consequently, we have studied how an appropriate local refinement can restore the optimal order of convergence for the Laplacian problem. Finally, we have discretized the non-linear Navier-Stokes problem, using the rotational formulation of the convection term, associated with the Bernoulli pressure. With an iterative algorithm, we are led to solve a saddle-point problem at each iteration. We pay particular attention to this linear problem by testing some preconditioners issued from finite elements, which we adapt to our method. Each problem is illustrated by numerical results on arbitrary meshes, such as strongly non-conforming meshes. (author)
Directory of Open Access Journals (Sweden)
Don-Roger Parkinson
2016-02-01
Water samples were collected and analyzed for conductivity, pH, temperature and trihalomethanes (THMs) during the fall of 2014 at two monitored municipal drinking water source ponds. Both spot (or grab) and time-weighted average (TWA) sampling methods were assessed over the same two-day sampling period. For spot sampling, replicate samples were taken at each site and analyzed within 12 h of sampling by both headspace (HS)- and direct (DI)-solid phase microextraction (SPME) sampling/extraction methods followed by gas chromatography/mass spectrometry (GC/MS). For TWA, a two-day passive on-site TWA sampling was carried out at the same sampling points in the ponds. All SPME sampling methods used a 65-µm PDMS/DVB SPME fiber, which was found optimal for THM sampling. Sampling conditions were optimized in the laboratory using calibration standards of chloroform, bromoform, bromodichloromethane, dibromochloromethane, 1,2-dibromoethane and 1,2-dichloroethane, prepared in aqueous solutions from analytical grade samples. Calibration curves for all methods with R2 values ranging from 0.985-0.998 (N = 5) over the linear quantitation range of 3-800 ppb were achieved. The different sampling methods were compared for quantification of the water samples, and results showed that the DI- and TWA-sampling methods gave better data and analytical metrics. Addition of 10% wt./vol. of (NH4)2SO4 salt to the sampling vial was found to aid extraction of THMs by increasing GC peak areas by about 10%, which resulted in lower detection limits for all techniques studied. However, for on-site TWA analysis of THMs in natural waters, the ionic strength conditions of the calibration standard(s) must be carefully matched to natural water conditions to properly quantitate THM concentrations. The data obtained from the TWA method may better reflect actual natural water conditions.
A point-value enhanced finite volume method based on approximate delta functions
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements reduces the number of degrees of freedom compared to other compact methods of the same order. To ensure conservation, cell-averaged values are updated using an approach identical to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
Methods for obtaining a uniform volume concentration of implanted ions
International Nuclear Information System (INIS)
Reutov, V.F.
1995-01-01
Three simple practical methods of irradiation with high-energy particles that provide the conditions for obtaining a uniform volume concentration of implanted ions in massive samples are described in this paper. The first method is based on two-sided irradiation of a plane sample while it rotates in the flux of projectiles. The second method of uniform ion alloying uses free air as a filter of varying absorbing power, achieved by moving the irradiated sample along an ion beam brought out to the atmosphere. The third method consists of irradiating the sample through an absorbing filter in the shape of a foil curved according to a parabolic law and moving along its surface. The first method is the most effective for obtaining a great number of samples, for example for mechanical tests; the second, for irradiation in different gaseous media; and the third, for obtaining high concentrations of implanted ions under controlled (regulated) thermal and deformation conditions. 2 refs., 7 figs
Directory of Open Access Journals (Sweden)
A. Ziemann
2017-11-01
An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m−2 s−1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately
Ziemann, Astrid; Starke, Manuela; Schütze, Claudia
2017-11-01
An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately 30 % for a single
Uddin, M. Maruf; Fuad, Muzaddid-E.-Zaman; Rahaman, Md. Mashiur; Islam, M. Rabiul
2017-12-01
With the rapid decrease in the cost of computational infrastructure and more efficient algorithms for solving non-linear problems, Reynolds-averaged Navier-Stokes (RANS) based Computational Fluid Dynamics (CFD) is now widely used. As a preliminary evaluation tool, CFD is used to calculate the hydrodynamic loads on offshore installations, ships, and other structures in the ocean at initial design stages. Traditionally, wedges have been studied more than circular cylinders because the cylinder section has a zero deadrise angle at the instant of water impact, which increases with increasing submergence. In the present study, the RANS-based commercial code ANSYS Fluent is used to simulate the water entry of a circular section at constant velocity. The present computational results are compared with experimental data and another numerical method.
Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang; Cao, Yang
2016-08-16
To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. A time-series study using a regional death registry between 2009 and 2010. 8 districts in a large metropolitan area in Northern China. 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increases, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (-1.09 to 4.28 vs -1.08 to 3.93) and the PCs-based model (-2.23 to 4.07 vs -2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, -1.12 to 4.85 versus -1.11 to 4.83. The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Energy Technology Data Exchange (ETDEWEB)
Delcourte, S
2007-09-15
We aim to develop a finite volume method that applies to a wider class of meshes than other finite volume methods, which are restricted by orthogonality constraints. We build discrete differential operators over the three staggered tessellations needed for the construction of the method. These operators satisfy properties analogous to those of the continuous operators. First, the method is applied to the Div-Curl problem, which can be viewed as a building block of the Stokes problem. Then, the Stokes problem is dealt with under various boundary conditions. It is well known that when the computational domain is polygonal and non-convex, the order of convergence of numerical methods deteriorates. Consequently, we have studied how an appropriate local refinement can restore the optimal order of convergence for the Laplacian problem. Finally, we have discretized the non-linear Navier-Stokes problem, using the rotational formulation of the convection term, associated with the Bernoulli pressure. With an iterative algorithm, we are led to solve a saddle-point problem at each iteration. We pay particular attention to this linear problem by testing some preconditioners issued from finite elements, which we adapt to our method. Each problem is illustrated by numerical results on arbitrary meshes, such as strongly non-conforming meshes. (author)
Heo, Seo Weon; Kim, Hyungsuk
2010-05-01
An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, estimation performance for ultrasound attenuation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and a weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate them within 2.96%. The proposed method is especially effective when dealing with signals reflected from deeper depths, where the SNR level is lower, or when the gated window contains a small number of signal samples. Experimental results, performed at 5 MHz with a one-dimensional 128-element array using tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
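A simplified version of the phase-compensated, weighted block-spectrum idea can be sketched as follows. The alignment (cross-correlation lag to the first segment) and the uniform weighting are schematic stand-ins, not the authors' exact estimator:

```python
import numpy as np

def aligned_power_spectrum(segments, weights=None):
    # Align each RF segment to the first via the peak of the
    # cross-correlation, then form a weighted average periodogram.
    ref = np.asarray(segments[0], dtype=float)
    n = len(ref)
    if weights is None:
        weights = np.ones(len(segments))
    acc = np.zeros(n // 2 + 1)
    for seg, w in zip(segments, weights):
        seg = np.asarray(seg, dtype=float)
        xc = np.correlate(seg - seg.mean(), ref - ref.mean(), mode="full")
        lag = xc.argmax() - (n - 1)      # shift that best aligns seg to ref
        aligned = np.roll(seg, -lag)
        acc += w * np.abs(np.fft.rfft(aligned)) ** 2
    return acc / np.sum(weights)

t = np.arange(128)
s1 = np.sin(2 * np.pi * 16 * t / 128)    # 16 cycles per record
s2 = np.roll(s1, 3)                      # phase-shifted copy of the same echo
spec = aligned_power_spectrum([s1, s2])  # energy stays concentrated at bin 16
```

Without the phase compensation, averaging the two shifted segments directly in the time domain would partially cancel the signal and bias the spectrum.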
A volume-based method for denoising on curved surfaces
Biddle, Harry; von Glehn, Ingrid; Macdonald, Colin B.; Marz, Thomas
2013-01-01
We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
A volume-based method for denoising on curved surfaces
Biddle, Harry
2013-09-01
We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
Directory of Open Access Journals (Sweden)
Péter Przemyslaw Ujma
2015-02-01
Sleep spindles are frequently studied for their relationship with state and trait cognitive variables, and they are thought to play an important role in sleep-related memory consolidation. Due to their frequent occurrence in NREM sleep, the detection of sleep spindles is only feasible using automatic algorithms, of which a large number are available. We compared subject averages of the spindle parameters computed by a fixed-frequency (11-13 Hz for slow spindles, 13-15 Hz for fast spindles) automatic detection algorithm and the individual adjustment method (IAM), which uses individual frequency bands for sleep spindle detection. Fast spindle duration and amplitude are strongly correlated between the two algorithms, but there is little overlap in fast spindle density and slow spindle parameters in general. The agreement between fixed and manually determined sleep spindle frequencies is limited, especially in the case of slow spindles. This is the most likely reason for the poor agreement between the two detection methods for slow spindle parameters. Our results suggest that while various algorithms may reliably detect fast spindles, a more sophisticated algorithm primed to individual spindle frequencies is necessary for the detection of slow spindles, as well as individual variations in the number of spindles in general.
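A minimal fixed-frequency-band detector of the kind compared in this record might look like the sketch below. The band edges, amplitude threshold, and duration criterion are illustrative choices, not those of any published algorithm:

```python
import numpy as np

def count_spindles(eeg, fs, band=(13.0, 15.0), thresh=10.0, min_dur=0.5):
    # FFT band-pass in the fast-spindle band, RMS amplitude envelope,
    # then count supra-threshold runs longer than min_dur seconds.
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    spec = np.fft.rfft(eeg)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0
    filtered = np.fft.irfft(spec, len(eeg))
    win = int(0.25 * fs)
    power = np.convolve(filtered ** 2, np.ones(win) / win, mode="same")
    env = np.sqrt(2.0 * power)        # amplitude envelope for a sinusoid
    events, run = 0, 0
    for above in env > thresh:
        run = run + 1 if above else 0
        if run == int(min_dur * fs):  # count each qualifying run once
            events += 1
    return events

fs = 100
t = np.arange(0, 10, 1.0 / fs)
eeg = 5.0 * np.sin(2 * np.pi * 2 * t)                        # slow background
eeg[200:300] += 30.0 * np.sin(2 * np.pi * 14 * t[200:300])   # one 1-s "spindle"
n = count_spindles(eeg, fs)
```

The IAM discussed above would replace the fixed `band` argument with bands centered on each subject's own spindle peak frequency.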
A method for determining average beach slope and beach slope variability for U.S. sandy coastlines
Doran, Kara S.; Long, Joseph W.; Overbeck, Jacquelyn R.
2015-01-01
The U.S. Geological Survey (USGS) National Assessment of Hurricane-Induced Coastal Erosion Hazards compares measurements of beach morphology with storm-induced total water levels to produce forecasts of coastal change for storms impacting the Gulf of Mexico and Atlantic coastlines of the United States. The wave-induced water level component (wave setup and swash) is estimated by using modeled offshore wave height and period and measured beach slope (from dune toe to shoreline) through the empirical parameterization of Stockdon and others (2006). Spatial and temporal variability in beach slope leads to corresponding variability in predicted wave setup and swash. For instance, seasonal and storm-induced changes in beach slope can lead to differences on the order of 1 meter (m) in wave-induced water level elevation, making accurate specification of this parameter and its associated uncertainty essential to skillful forecasts of coastal change. A method for calculating spatially and temporally averaged beach slopes is presented here along with a method for determining total uncertainty for each 200-m alongshore section of coastline.
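The slope sensitivity discussed above can be made concrete with the wave-setup term of the Stockdon and others (2006) parameterization. The sketch assumes the setup form eta = 0.35 * beta_f * sqrt(H0 * L0) with deep-water wavelength L0 = g*T^2/(2*pi); the wave conditions and slopes are invented for illustration:

```python
import math

def wave_setup(beach_slope, H0, T, g=9.81):
    # eta = 0.35 * beta_f * sqrt(H0 * L0), with L0 = g*T^2 / (2*pi)
    L0 = g * T ** 2 / (2.0 * math.pi)
    return 0.35 * beach_slope * math.sqrt(H0 * L0)

# Same 2 m, 10 s offshore waves over two plausible seasonal beach slopes
setup_gentle = wave_setup(0.08, 2.0, 10.0)
setup_steep = wave_setup(0.12, 2.0, 10.0)
```

Because setup scales linearly with the foreshore slope, a seasonal slope change of this size shifts the wave-induced water level by a few tenths of a meter, consistent with the order-1 m variability cited in the record.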
A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.
Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A
2003-02-01
Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.
Teaching Thermal Hydraulics & Numerical Methods: An Introductory Control Volume Primer
Energy Technology Data Exchange (ETDEWEB)
D. S. Lucas
2004-10-01
A graduate level course for Thermal Hydraulics (T/H) was taught through Idaho State University in the spring of 2004. A numerical approach was taken for the content of this course since the students were employed at the Idaho National Laboratory and had been users of T/H codes. The majority of the students had expressed an interest in learning about the Courant limit, mass error, and semi-implicit and implicit numerical integration schemes in the context of a computer code. Since no introductory text was found, the author developed notes from his own research and from courses taught for Westinghouse on the subject. The course started with a primer on control volume methods and the construction of a Homogeneous Equilibrium Model (HEM) T/H code. The primer was valuable for giving the students the basics behind such codes and their evolution to more complex codes for Thermal Hydraulics and Computational Fluid Dynamics (CFD). The course covered additional material including the Finite Element Method and non-equilibrium T/H. The control volume primer and the construction of a three-equation (mass, momentum and energy) HEM code are the subject of this paper. The Fortran version of the code covered in this paper is elementary compared to its descendants. The steam tables used are less accurate than the available commercial version written in C coupled to a Graphical User Interface (GUI). The Fortran version and input files can be downloaded at www.microfusionlab.com.
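In the spirit of the control-volume primer described above, a minimal explicit donor-cell update for the 1-D mass equation, including the Courant-limit check the students asked about, might look like this (a generic sketch, not the course's HEM code, which is in Fortran and carries momentum and energy equations as well):

```python
def mass_step(rho, v, dx, dt):
    # One explicit donor-cell (upwind, v > 0) control-volume step for
    # d(rho)/dt + d(rho*v)/dx = 0; the inlet cell holds its boundary value.
    assert v * dt / dx <= 1.0, "Courant limit violated"
    flux = [v * r for r in rho]                  # donor-cell face flux
    new = rho[:]
    for i in range(1, len(rho)):
        new[i] = rho[i] - dt / dx * (flux[i] - flux[i - 1])
    return new

rho = [1.0] * 5 + [0.5] * 5                   # density step profile
out = mass_step(rho, v=1.0, dx=0.1, dt=0.05)  # Courant number 0.5
```

Violating the Courant limit (v*dt/dx > 1) makes this explicit scheme unstable, which is exactly why the limit features so prominently in T/H codes.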
Woodward, Bryan; Gossen, Nicole; Meadows, Jessica; Tomlinson, Mathew
2016-12-01
The World Health Organization laboratory manual for the examination of human semen suggests that an indirect measurement of semen volume by weighing (the gravimetric method) is more accurate than a direct measure using a serological pipette. A series of experiments was performed to determine the level of discrepancy between the two methods using pipettes and a balance which had been calibrated to a traceable standard. The median weights of 1.0 ml and 5.0 ml of semen were 1.03 g (range 1.02-1.05 g) and 5.11 g (range 4.95-5.16 g), respectively, suggesting a density for semen between 1.03 and 1.04 g/ml. When the containers were re-weighed after the removal of 5.0 ml semen using a serological pipette, the mean residual loss was 0.12 ml (120 μl) or 0.12 g (median 100 μl, range 70-300 μl). Direct comparison of the volumetric and gravimetric methods in a total of 40 samples showed a mean difference of 0.25 ml (median 0.32 ± 0.67 ml), representing an error of 8.5%. Residual semen left in the container by weight was on average 0.11 g (median 0.10 g, range 0.05-0.19 g). Assuming a density of 1 g/ml, the average error between volumetric and gravimetric methods was approximately 8% (p gravimetric measurement of semen volume. Laboratories may therefore prefer to provide in-house quality assurance data in order to be satisfied that 'estimating' semen volume is 'fit for purpose' as opposed to assuming a lower uncertainty associated with the WHO recommended method.
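The volumetric-versus-gravimetric comparison reduces to simple arithmetic; the container weights below are hypothetical, with the density taken from the 1.03-1.04 g/ml range measured above:

```python
def gravimetric_volume_ml(full_g, empty_g, density_g_per_ml=1.035):
    # Volume = net weight / assumed density (1.03-1.04 g/ml per the record)
    return (full_g - empty_g) / density_g_per_ml

def percent_error(measured_ml, reference_ml):
    return 100.0 * abs(measured_ml - reference_ml) / reference_ml

vol = gravimetric_volume_ml(8.27, 5.16)  # hypothetical container weights
err = percent_error(2.75, vol)           # vs a pipetted reading of 2.75 ml
```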
National comparison on volume sample activity measurement methods
International Nuclear Information System (INIS)
Sahagia, M.; Grigorescu, E.L.; Popescu, C.; Razdolescu, C.
1992-01-01
A national comparison of volume sample activity measurement methods may be regarded as a step toward accomplishing the traceability of environmental and food chain activity measurements to national standards. For this purpose, the Radionuclide Metrology Laboratory has distributed 137Cs and 134Cs water-equivalent solid standard sources to 24 laboratories having responsibilities in this matter. Each laboratory has to measure the activity of the received source(s) using its own standards, equipment and methods, and report the obtained results to the organizer. The 'measured activities' will be compared with the 'true activities'. A final report will be issued, which is planned to evaluate the national level of precision of such measurements and give some suggestions for improvement. (Author)
Directory of Open Access Journals (Sweden)
Raftery Adrian E
2009-02-01
Background: Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results: We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
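The core BMA step described above, a posterior-probability-weighted average over contending models, can be sketched minimally. The scores and weights below are invented for illustration; they are not from the paper's datasets.

```python
# Hedged sketch of the Bayesian model averaging step: a patient's risk score
# is the weighted average of the scores from the contending models, with
# weights given by the posterior model probabilities. All numbers invented.

def bma_risk(scores_per_model, posterior_probs):
    """Posterior-weighted average of model-specific risk scores."""
    total = sum(posterior_probs)
    return sum(w * s for w, s in zip(posterior_probs, scores_per_model)) / total

# Three contending models score one patient; model 0 carries most posterior mass.
score = bma_risk([0.9, 0.2, 0.4], [0.7, 0.2, 0.1])
```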
Energy Technology Data Exchange (ETDEWEB)
Seo, Man Su; Park, Hana; Yoo, Don Gyu; Jeong, Sang Kwon [Cryogenic Engineering Laboratory, Department of Mechanical Engineering, KAIST, Daejeon (Korea, Republic of); Jung, Young Suk [Launcher Systems Development Team, Korea Aerospace Research Institute, Daejeon (Korea, Republic of)
2014-06-15
Measuring the exact amount of remaining cryogenic liquid propellant under microgravity conditions is one of the important issues for rocket vehicles. A Pressure-Volume-Temperature (PVT) gauging method is attractive due to its minimal additional hardware and simple gauging process. In this paper, the PVT gauging method using liquid nitrogen is investigated under microgravity conditions during parabolic flight. A 9.2 litre metal cryogenic liquid storage tank containing approximately 30% liquid nitrogen is pressurized by ambient-temperature helium gas. During the microgravity condition, the inside of the liquid tank reaches a near-isothermal condition, within a 1 K difference, as indicated by 6 silicon diode sensors vertically distributed in the middle of the liquid tank. Helium injection with a higher mass flow rate after 10 seconds of waiting time results in successful measurements of the helium partial pressure in the tank. The average liquid volume measurement error is within 11% of the whole liquid tank volume and the standard deviation of the errors is 11.9. As a result, the applicability of the PVT gauging method to liquid.
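The principle behind PVT gauging can be sketched with the ideal gas law: the measured helium partial pressure and tank temperature give the ullage (gas) volume, and the liquid volume follows by subtraction. The numbers below are illustrative only, not values from the experiment.

```python
# Minimal sketch of PVT gauging: ullage volume from the ideal gas law,
# liquid volume by subtraction from the known tank volume. Illustrative
# inputs only; not the experimental conditions of the paper.

R = 8.314          # J/(mol K)
M_HE = 4.003e-3    # kg/mol, molar mass of helium

def liquid_volume(tank_volume_m3, helium_mass_kg, helium_partial_pa, temp_k):
    n = helium_mass_kg / M_HE                      # moles of injected helium
    ullage = n * R * temp_k / helium_partial_pa    # ideal-gas ullage volume
    return tank_volume_m3 - ullage

# 9.2 L tank with 70% ullage: check that consistent inputs recover ~30% liquid.
V_tank = 9.2e-3                       # m^3
V_ull = 0.7 * V_tank
T = 80.0                              # K (assumed near-isothermal)
P_he = 2.0e5                          # Pa, assumed helium partial pressure
n = P_he * V_ull / (R * T)            # helium amount consistent with V_ull
V_liq = liquid_volume(V_tank, n * M_HE, P_he, T)
```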
International Nuclear Information System (INIS)
Seo, Man Su; Park, Hana; Yoo, Don Gyu; Jeong, Sang Kwon; Jung, Young Suk
2014-01-01
Measuring the exact amount of remaining cryogenic liquid propellant under microgravity conditions is one of the important issues for rocket vehicles. A Pressure-Volume-Temperature (PVT) gauging method is attractive due to its minimal additional hardware and simple gauging process. In this paper, the PVT gauging method using liquid nitrogen is investigated under microgravity conditions during parabolic flight. A 9.2 litre metal cryogenic liquid storage tank containing approximately 30% liquid nitrogen is pressurized by ambient-temperature helium gas. During the microgravity condition, the inside of the liquid tank reaches a near-isothermal condition, within a 1 K difference, as indicated by 6 silicon diode sensors vertically distributed in the middle of the liquid tank. Helium injection with a higher mass flow rate after 10 seconds of waiting time results in successful measurements of the helium partial pressure in the tank. The average liquid volume measurement error is within 11% of the whole liquid tank volume and the standard deviation of the errors is 11.9. As a result, the applicability of the PVT gauging method to liquid
Qiu, J.; Khalloufi, S.; Martynenko, A.; Dalen, van G.; Schutyser, M.A.I.; Almeida-Rivera, C.
2015-01-01
Several experimental methods for measuring porosity, bulk density and volume reduction during the drying of foodstuffs are available. These methods include, among others, geometric dimension measurement, volume displacement, mercury porosimetry, micro-CT, and NMR. However, data on their accuracy, sensitivity, and
Method of reducing the volume of radioactive waste
International Nuclear Information System (INIS)
Buckley, L.P.; Burrill, K.A.; Desjardins, C.D.; Salter, R.S.
1984-01-01
There is provided a method of reducing the volume of radioactive waste, comprising: pyrolyzing the radioactive waste in the interior of a vessel, while passing superheated steam through the vessel at a temperature in the range 500 to 700 degrees C, a pressure in the range 1.0 to 3.5 MPa, and at a flow rate in the range 4 to 50 mL/s/m³ of the volume of the vessel interior, to cause pyrohydrolysis of the waste and to remove carbon-containing components of the pyrolyzed waste from the vessel as gaseous oxides, leaving an ash residue in the vessel. Entrained particles present with the gaseous oxides are filtered, and acidic vapours present with the gaseous oxides are removed by a solid sorbent. Steam and any organic substances present with the gaseous oxides are condensed, and the ash is removed from the vessel. The radioactive waste may be deposited upon an upper screen in the vessel, so that a substantial portion of the pyrolysis of the radioactive waste takes place while the radioactive waste is on the upper screen, and pyrolyzed waste falls through the upper screen onto a lower screen, where another substantial portion of the pyrohydrolysis takes place. The ash residue falls through the lower screen.
Liu, Q.; Lange, R.
2003-12-01
Ferric iron is an important component in magmatic liquids, especially in those formed at subduction zones. Although it has long been known that Fe3+ occurs in four-, five- and six-fold coordination in crystalline compounds, only recently have all three Fe3+ coordination sites been confirmed in silicate glasses utilizing XANES spectroscopy at the Fe K-edge (Farges et al., 2003). Because the density of a magmatic liquid is largely determined by the geometrical packing of its network-forming cations (e.g., Si4+, Al3+, Ti4+, and Fe3+), the capacity of Fe3+ to undergo composition-induced coordination change affects the partial molar volume of the Fe2O3 component, which must be known to calculate how the ferric-ferrous ratio in magmatic liquids changes with pressure. Previous work has shown that the partial molar volume of Fe2O3 (VFe2O3) varies between calcic vs. sodic silicate melts (Mo et al., 1982; Dingwell and Brearley, 1988; Dingwell et al., 1988). The purpose of this study is to extend the data set in order to search for systematic variations in VFe2O3 with melt composition. High-temperature (867-1534 °C) density measurements were performed on eleven liquids in the Na2O-Fe2O3-FeO-SiO2 (NFS) system and five liquids in the K2O-Fe2O3-FeO-SiO2 (KFS) system using the Pt double-bob Archimedean method. The ferric-ferrous ratios in the sodic and potassic liquids at each temperature of density measurement were calculated from the experimentally calibrated models of Lange and Carmichael (1989) and Tangeman et al. (2001), respectively. Compositions range (in mol%) from 4-18 Fe2O3, 0-3 FeO, 12-39 Na2O, 25-37 K2O, and 43-78 SiO2. Our density data are consistent with those of Dingwell et al. (1988) on similar sodic liquids. Our results indicate that for all five KFS liquids and for eight of eleven NFS liquids, the partial molar volume of the Fe2O3 component is a constant (41.57 ± 0.14 cm3/mol) and exhibits zero thermal expansivity (similar to that for the SiO2 component). This value
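A constant partial molar volume for Fe2O3 feeds into melt density through the standard relation V_liq = Σ X_i·V_i and ρ = Σ X_i·M_i / V_liq. The sketch below uses the 41.57 cm³/mol value from the abstract for Fe2O3, but the other partial molar volumes and the composition are placeholders, not fitted values.

```python
# Sketch of a melt density calculation from partial molar volumes:
#   V_liq = sum(X_i * V_i),  rho = sum(X_i * M_i) / V_liq.
# Only V(Fe2O3) = 41.57 cm3/mol comes from the abstract; the Na2O and SiO2
# partial molar volumes and the composition are illustrative placeholders.

def melt_density(mole_fracs, molar_volumes_cm3, molar_masses_g):
    V = sum(x * v for x, v in zip(mole_fracs, molar_volumes_cm3))  # cm3/mol
    M = sum(x * m for x, m in zip(mole_fracs, molar_masses_g))     # g/mol
    return M / V                                                   # g/cm3

# Hypothetical Na2O - Fe2O3 - SiO2 melt (mole fractions sum to 1).
rho = melt_density([0.25, 0.10, 0.65],
                   [29.0, 41.57, 26.9],      # cm3/mol (first/last assumed)
                   [61.98, 159.69, 60.08])   # g/mol
```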
Method of volume-reducing processing for radioactive wastes
International Nuclear Information System (INIS)
Sato, Koei; Yamauchi, Noriyuki; Hirayama, Toshihiko.
1985-01-01
Purpose: To convert the processing products of radioactive liquid wastes and burnable solid wastes produced from nuclear facilities into stable solidified products by heat melting. Method: At first, glass fiber wastes from contaminated air filters are charged into a melting furnace. Then, waste products obtained through drying, sintering, incineration, etc. are mixed with a proper amount of glass fibers and charged into the melting furnace. Both charged components are heated to a temperature at which the glass fibers melt. The burnable materials are burnt out, yielding highly volume-reduced products. When the products are further heated to a temperature at which metals or metal oxides with a higher melting point than the glass fibers melt, the glass fibers and the metals or metal oxides fuse to each other and combine in a molecular structure into more stable products. The products are excellent in strength, stability, durability and leaching resistance at ambient temperature. (Kamimura, M.)
Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team
2017-11-01
We present a consistent cell-averaged discretization for the incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators, such as the Laplacian, divergence, and gradient, by solving a least-squares system locally. We also construct the inter-level data-transfer operators, such as prolongation and restriction, for multigrid solvers using the same least-squares approach. This allows us to retain high order of accuracy near coarse-fine interfaces and near embedded boundaries. Canonical problems such as Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).
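The least-squares stencil construction mentioned above can be illustrated on a toy 1-D example: choose weights so that the stencil reproduces the Laplacian of every low-order polynomial. On a uniform three-point stencil this recovers the familiar (1, -2, 1)/h² weights. This is only a toy analogue, not the paper's embedded-boundary machinery.

```python
import numpy as np

# Toy least-squares stencil derivation: find weights w_j such that
# sum_j w_j * p(x_j) equals p''(0) for all monomials p in {1, x, x^2}.
# On a symmetric uniform stencil this reproduces the standard
# second-difference weights (1, -2, 1)/h^2.

def laplacian_weights(points):
    pts = np.asarray(points, dtype=float)
    # Rows: monomials 1, x, x^2 evaluated at the stencil points.
    A = np.vstack([pts**0, pts**1, pts**2])
    # Exact second derivatives of those monomials at x = 0: 0, 0, 2.
    b = np.array([0.0, 0.0, 2.0])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

h = 0.1
w = laplacian_weights([-h, 0.0, h])
```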
Viscous wing theory development. Volume 1: Analysis, method and results
Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.
1986-01-01
Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.
Park, Young-Seok; Chang, Mi-Sook; Lee, Seung-Pyo
2011-01-01
This study attempted to establish three-dimensional average curves of the gingival line of maxillary teeth using reconstructed virtual models to utilize as guides for dental implant restorations. Virtual models from 100 full-mouth dental stone cast sets were prepared with a three-dimensional scanner and special reconstruction software. Marginal gingival lines were defined by transforming the boundary points to the NURBS (nonuniform rational B-spline) curve. Using an iterative closest point algorithm, the sample models were aligned and the gingival curves were isolated. Each curve was tessellated by 200 points using a uniform interval. The 200 tessellated points of each sample model were averaged according to the index of each model. In a pilot experiment, regression and fitting analysis of one obtained average curve was performed to depict it as mathematical formulae. The three-dimensional average curves of six maxillary anterior teeth, two maxillary right premolars, and a maxillary right first molar were obtained, and their dimensions were measured. Average curves of the gingival lines of young people were investigated. It is proposed that dentists apply these data to implant platforms or abutment designs to achieve ideal esthetics. The curves obtained in the present study may be incorporated as a basis for implant component design to improve the biologic nature and related esthetics of restorations.
Zhang, Yujing; Sun, Guoxiang; Hou, Zhifei; Yan, Bo; Zhang, Jing
2017-12-01
A novel averagely linear-quantified fingerprint method was proposed and successfully applied to monitor the quality consistency of alkaloids in powdered poppy capsule extractive. The averagely linear-quantified fingerprint method provided accurate qualitative and quantitative similarities for chromatographic fingerprints of Chinese herbal medicines. The stability and operability of the averagely linear-quantified fingerprint method were verified by the parameter r. The average linear qualitative similarity SL (improved from the conventional qualitative "Similarity") was used as a qualitative criterion in the averagely linear-quantified fingerprint method, and the average linear quantitative similarity PL was introduced as a quantitative one. PL was able to identify differences in the content of all the chemical components. In addition, PL was found to be highly correlated with the contents of two alkaloid compounds (morphine and codeine). A simple flow injection analysis was developed for the determination of antioxidant capacity in Chinese herbal medicines, based on the scavenging of the 2,2-diphenyl-1-picrylhydrazyl radical by antioxidants. The fingerprint-efficacy relationship linking chromatographic fingerprints and antioxidant activities was investigated utilizing the orthogonal projection to latent structures method, which provided important pharmacodynamic information for Chinese herbal medicine quality control. In summary, quantitative fingerprinting based on the averagely linear-quantified fingerprint method can be applied for monitoring the quality consistency of Chinese herbal medicines, and the constructed orthogonal projection to latent structures model is particularly suitable for investigating the fingerprint-efficacy relationship. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Energy Technology Data Exchange (ETDEWEB)
Walker, William C. [ORNL]
2018-02-01
This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.
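The radial-averaging idea can be illustrated with a toy analogue: the projected width of a rectangular footprint depends on the approach heading, and averaging over all headings gives a direction-independent value that is smaller than the single-direction maximum. This shows only the averaging concept; the report's actual effective-area equations are not reproduced here.

```python
import math

# Toy analogue of radial averaging: projected width of an a-by-b rectangle
# as a function of heading, averaged over all headings 0..2*pi. The mean of
# a|cos t| + b|sin t| over a full circle is 2*(a + b)/pi, which is less than
# the maximum width sqrt(a^2 + b^2), mirroring the report's observation that
# the radially-averaged area is less than the maximum effective area.

def projected_width(a, b, theta):
    return a * abs(math.cos(theta)) + b * abs(math.sin(theta))

def radially_averaged_width(a, b, n=100000):
    # Simple Riemann-sum average over n equally spaced headings.
    total = sum(projected_width(a, b, 2 * math.pi * k / n) for k in range(n))
    return total / n

avg = radially_averaged_width(3.0, 4.0)
```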
Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T M; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid
2016-01-01
Background Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival
Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H. A.; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T. M.; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid
2016-01-01
Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival after
Estimating traffic volume on Wyoming low volume roads using linear and logistic regression methods
Directory of Open Access Journals (Sweden)
Dick Apronti
2016-12-01
Traffic volume is an important parameter in most transportation planning applications. Low volume roads make up about 69% of road miles in the United States. Estimating traffic on low volume roads is a cost-effective alternative to taking traffic counts, because traditional traffic counts are expensive and impractical for low priority roads. The purpose of this paper is to present the development of two alternative means of cost-effectively estimating traffic volumes for low volume roads in Wyoming and to make recommendations for their implementation. The study methodology involves reviewing existing studies, identifying data sources, and carrying out the model development. The utility of the models developed was then verified by comparing actual traffic volumes to those predicted by the models. The study resulted in two regression models that are inexpensive and easy to implement. The first was a linear regression model that utilized pavement type, access to highways, predominant land use types, and population to estimate traffic volume. In verifying the model, an R2 value of 0.64 and a root mean square error of 73.4% were obtained. The second was a logistic regression model that identified the level of traffic on roads using five thresholds or levels. The logistic regression model was verified by estimating traffic volume thresholds and determining the percentage of roads that were accurately classified as belonging to the given thresholds. For the five thresholds, the percentage of roads classified correctly ranged from 79% to 88%. In conclusion, the verification of the models indicated both model types to be useful for accurate and cost-effective estimation of traffic volumes for low volume Wyoming roads. The models developed were recommended for use in traffic volume estimations for low volume roads in pavement management and environmental impact assessment studies.
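The first (linear-regression) model type can be sketched with ordinary least squares on a few road attributes. The predictor values and traffic counts below are synthetic; the actual Wyoming model used pavement type, highway access, land use, and population, and achieved R² = 0.64.

```python
import numpy as np

# Hedged sketch of a linear traffic-volume model fit by ordinary least
# squares. All data here are invented for illustration.

# Columns: intercept, paved (0/1), highway access (0/1), population (1000s)
X = np.array([
    [1, 0, 0, 0.5],
    [1, 1, 0, 1.2],
    [1, 1, 1, 3.0],
    [1, 0, 1, 2.1],
    [1, 1, 1, 5.5],
], dtype=float)
y = np.array([40, 120, 310, 150, 540], dtype=float)  # synthetic traffic counts

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ beta                      # fitted traffic volumes
r2 = 1 - np.sum((y - predicted) ** 2) / np.sum((y - y.mean()) ** 2)
```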
Czech Academy of Sciences Publication Activity Database
Bultinck, P.; Cooper, D.L.; Ponec, Robert
2010-01-01
Roč. 114, č. 33 (2010), s. 8754-8763 ISSN 1089-5639 R&D Projects: GA ČR GA203/09/0118 Institutional research plan: CEZ:AV0Z40720504 Keywords : shared electron distribution index * domain averaged fermi holes * atoms in molecules Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.732, year: 2010
SOLA-VOF, 2-D Transient Hydrodynamic Using Fractional Volume of Fluid Method
International Nuclear Information System (INIS)
Nichols, B.D.; Hirt, C.W.; Hotchkiss, R.S.
1991-01-01
1 - Description of problem or function: SOLA-VOF is a program for the solution of two-dimensional transient fluid flow with free boundaries, based on the concept of a fractional volume of fluid (VOF). Its basic mode of operation is for single fluid calculations having multiple free surfaces. However, SOLA-VOF can also be used for calculations involving two fluids separated by a sharp interface. In either case, the fluids may be treated as incompressible or as having limited compressibility. Surface tension forces with wall adhesion are permitted in both cases. Internal obstacles may be defined by blocking out any desired combination of cells in the mesh, which is composed of rectangular cells of variable size. 2 - Method of solution: The basis of the SOLA-VOF method is the fractional volume of fluid scheme for tracking free boundaries. In this technique, a function F(x,y,t) is defined whose value is unity at any point occupied by fluid and zero elsewhere. When averaged over the cells of a computational mesh, the average value of F in a cell is equal to the fractional volume of the cell occupied by fluid. In particular, a unit value of F corresponds to a cell full of fluid, whereas a zero value indicates that the cell contains no fluid. Cells with F values between zero and one contain a free surface. SOLA-VOF uses an Eulerian mesh of rectangular cells having variable sizes. The fluid equations solved are the finite difference approximations of the Navier-Stokes equations. 3 - Restrictions on the complexity of the problem: The setting of array dimensions is controlled through PARAMETER statements
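The fractional volume-of-fluid function F described above can be illustrated for the simplest case of a flat free surface: each cell's F value is the fraction of the cell below the surface, so full cells give 1, empty cells give 0, and surface cells a value in between. This is only the cell-classification idea, not SOLA-VOF's interface-tracking algorithm.

```python
# Minimal illustration of the VOF function F for a flat fluid surface at
# height y_surface: F is the fraction of each cell's height that lies below
# the surface. F = 1 marks a full cell, F = 0 an empty one, and intermediate
# values mark surface cells.

def cell_fraction(y_bottom, y_top, y_surface):
    if y_surface <= y_bottom:
        return 0.0            # cell entirely above the fluid
    if y_surface >= y_top:
        return 1.0            # cell entirely filled with fluid
    return (y_surface - y_bottom) / (y_top - y_bottom)

# Three stacked unit cells with the free surface at y = 1.25.
F = [cell_fraction(i, i + 1, 1.25) for i in range(3)]
```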
ACARP Project C10059. ACARP manual of modern coal testing methods. Volume 2: Appendices
Energy Technology Data Exchange (ETDEWEB)
Sakurovs, R.; Creelman, R.; Pohl, J.; Juniper, L. [CSIRO Energy Technology, Sydney, NSW (Australia)
2002-07-01
The Manual summarises the purpose, applicability, and limitations of a range of standard and modern coal testing methods that have potential to assist the coal company technologist to better evaluate coal performance. It is presented in two volumes. This second volume provides more detailed information regarding the methods discussed in Volume 1.
A new method for the measurement of two-phase mass flow rate using average bi-directional flow tube
International Nuclear Information System (INIS)
Yoon, B. J.; Uh, D. J.; Kang, K. H.; Song, C. H.; Paek, W. P.
2004-01-01
An average bi-directional flow tube was suggested for application in air/steam-water flow conditions. Its working principle is similar to that of a Pitot tube; however, it makes it possible to eliminate the cooling system that is normally needed to prevent flashing in the pressure impulse line of a Pitot tube when it is used under depressurization conditions. The suggested flow tube was tested in an air-water vertical test section with an 80 mm inner diameter and 10 m length. The flow tube was installed at an L/D of 120 from the inlet of the test section. In the test, the pressure drop across the average bi-directional flow tube, the system pressure and the average void fraction were measured on the measuring plane. The fluid temperature and the injected mass flow rates of the air and water phases were also measured by an RTD and two Coriolis flow meters, respectively. To calculate the phasic mass flow rates from the measured differential pressure and void fraction, the Chexal drift-flux correlation was used. A new correlation for the momentum exchange factor was also suggested. The test results show that the suggested instrumentation, using the measured void fraction and the Chexal drift-flux correlation, can predict the mass flow rates within 10% error of the measured data.
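The measurement principle can be sketched under a simplifying homogeneous (no-slip) assumption: the Pitot-type pressure drop gives a mixture velocity, and the measured void fraction splits the mixture into phasic mass flow rates. The paper itself uses the Chexal drift-flux correlation and a fitted momentum exchange factor; both are deliberately replaced here by the no-slip assumption, and all input numbers except the 80 mm pipe diameter are invented.

```python
import math

# Simplified homogeneous-flow sketch of two-phase mass flow measurement:
# dP ~ rho_mix * v^2 / 2 gives a mixture velocity, and the void fraction
# splits the flow into gas and liquid mass flow rates. This replaces the
# paper's Chexal drift-flux treatment with a no-slip assumption.

def phasic_mass_flows(dp_pa, void_frac, rho_gas, rho_liq, area_m2):
    rho_mix = void_frac * rho_gas + (1.0 - void_frac) * rho_liq
    v = math.sqrt(2.0 * dp_pa / rho_mix)          # mixture velocity, no slip
    m_gas = rho_gas * void_frac * v * area_m2     # kg/s
    m_liq = rho_liq * (1.0 - void_frac) * v * area_m2
    return m_gas, m_liq

# 80 mm ID pipe as in the test section; the other inputs are invented.
area = math.pi * 0.04 ** 2
mg, ml = phasic_mass_flows(dp_pa=500.0, void_frac=0.3,
                           rho_gas=1.2, rho_liq=998.0, area_m2=area)
```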
Well balanced finite volume methods for nearly hydrostatic flows
International Nuclear Information System (INIS)
Botta, N.; Klein, R.; Langenberg, S.; Luetzenkirchen, S.
2004-01-01
In numerical approximations of nearly hydrostatic flows, a proper representation of the dominant hydrostatic balance is of crucial importance: unbalanced truncation errors can induce unacceptable spurious motions, e.g., in dynamical cores of models for numerical weather prediction (NWP) in particular near steep topography. In this paper we develop a new strategy for the construction of discretizations that are 'well-balanced' with respect to dominant hydrostatics. The classical idea of formulating the momentum balance in terms of deviations of pressure from a balanced background distribution is realized here through local, time dependent hydrostatic reconstructions. Balanced discretizations of the pressure gradient and of the gravitation source term are achieved through a 'discrete Archimedes' buoyancy principle'. This strategy is applied to extend an explicit standard finite volume Godunov-type scheme for compressible flows with minimal modifications. The resulting method has the following features: (i) It inherits its conservation properties from the underlying base scheme. (ii) It is exactly balanced, even on curvilinear grids, for a large class of near-hydrostatic flows. (iii) It solves the full compressible flow equations without reference to a background state that is defined for an entire vertical column of air. (iv) It is robust with respect to details of the implementation, such as the choice of slope limiting functions, or the particularities of boundary condition discretizations
Design method for marine direct drive volume control ahead actuator
Directory of Open Access Journals (Sweden)
WANG Haiyang
2018-02-01
[Objectives] In order to reduce the size, weight and auxiliary system configuration of marine ahead actuators, this paper proposes a kind of direct drive volume control electro-hydraulic servo ahead actuator. [Methods] The protruding and indenting control of the servo oil cylinder is realized through forward and reverse rotation of the bidirectional gear pump, and a flow-matching valve implements self-locking of the ahead actuator in the target position. A mathematical model of the ahead actuator is established, and an integral-separation fuzzy PID controller is designed. On this basis, a simulation model of the ahead actuator is built in AMESim and, combined with testing, used to analyze the control strategy and the dynamic and static performance of the ahead actuator. [Results] The experimental results agree well with the simulation results and verify the feasibility of the ahead actuator's design. [Conclusions] The research results of this paper can provide valuable references for the integration and miniaturization design of marine ahead actuators.
International Nuclear Information System (INIS)
Caudrelier, Jean-Michel; Vial, Stephane; Gibon, David; Kulik, Carine; Fournier, Charles; Castelain, Bernard; Coche-Dequeant, Bernard; Rousseau, Jean
2003-01-01
Purpose: Three-dimensional (3D) volume determination is one of the most important problems in conformal radiation therapy. Techniques of volume determination from tomographic medical imaging are usually based on two-dimensional (2D) contour definition with the result dependent on the segmentation method used, as well as on the user's manual procedure. The goal of this work is to describe and evaluate a new method that reduces the inaccuracies generally observed in the 2D contour definition and 3D volume reconstruction process. Methods and Materials: This new method has been developed by integrating the fuzziness in the 3D volume definition. It first defines semiautomatically a minimal 2D contour on each slice that definitely contains the volume and a maximal 2D contour that definitely does not contain the volume. The fuzziness region in between is processed using possibility functions in possibility theory. A volume of voxels, including the membership degree to the target volume, is then created on each slice axis, taking into account the slice position and slice profile. A resulting fuzzy volume is obtained after data fusion between multiorientation slices. Different studies have been designed to evaluate and compare this new method of target volume reconstruction and a classical reconstruction method. First, target definition accuracy and robustness were studied on phantom targets. Second, intra- and interobserver variations were studied on radiosurgery clinical cases. Results: The absolute volume errors are less than or equal to 1.5% for phantom volumes calculated by the fuzzy logic method, whereas the values obtained with the classical method are much larger than the actual volumes (absolute volume errors up to 72%). With increasing MRI slice thickness (1 mm to 8 mm), the phantom volumes calculated by the classical method are increasing exponentially with a maximum absolute error up to 300%. In contrast, the absolute volume errors are less than 12% for phantom
International Nuclear Information System (INIS)
McCall, K C; Jeraj, R
2007-01-01
A new approach to the problem of modelling and predicting respiration motion has been implemented. This is a dual-component model, which describes the respiration motion as a non-periodic time series superimposed onto a periodic waveform. A periodic autoregressive moving average algorithm has been used to define a mathematical model of the periodic and non-periodic components of the respiration motion. The periodic components of the motion were found by projecting multiple inhale-exhale cycles onto a common subspace. The component of the respiration signal that is left after removing this periodicity is a partially autocorrelated time series and was modelled as an autoregressive moving average (ARMA) process. The accuracy of the periodic ARMA model with respect to fluctuation in amplitude and variation in length of cycles has been assessed. A respiration phantom was developed to simulate the inter-cycle variations seen in free-breathing and coached respiration patterns. At ±14% variability in cycle length and maximum amplitude of motion, the prediction errors were 4.8% of the total motion extent for a 0.5 s ahead prediction, and 9.4% at 1.0 s lag. The prediction errors increased to 11.6% at 0.5 s and 21.6% at 1.0 s when the respiration pattern had ±34% variations in both these parameters. Our results have shown that the accuracy of the periodic ARMA model is more strongly dependent on the variations in cycle length than the amplitude of the respiration cycles
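The dual-component decomposition described above can be sketched on synthetic data: average many cycles sample-by-sample to estimate the periodic waveform, then model what is left as a low-order autoregressive process. A full periodic ARMA model as in the paper is not reproduced; this shows only the decomposition step plus an AR(1) one-step prediction, on an invented noisy sinusoid.

```python
import numpy as np

# Toy version of the dual-component respiration model: periodic template
# from cycle averaging, residual modelled as AR(1). Synthetic data only.

rng = np.random.default_rng(0)
period, n_cycles = 40, 25
t = np.arange(period * n_cycles)
signal = np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(t.size)

# Periodic component: average all cycles sample-by-sample.
template = signal.reshape(n_cycles, period).mean(axis=0)
residual = signal - np.tile(template, n_cycles)

# AR(1) fit on the residual: r[k+1] ~ phi * r[k] (least-squares slope).
phi = np.dot(residual[:-1], residual[1:]) / np.dot(residual[:-1], residual[:-1])
one_step = phi * residual[-1]        # prediction of the next residual sample
```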
International Nuclear Information System (INIS)
1983-01-01
Under the direction of the Cinematography and Photography Standards Committee, a British Standard method has been prepared for determining ISO speed and average gradient of direct-exposure medical and dental radiographic film/film-process combinations. The method determines the speed and gradient, i.e. contrast, of the X-ray films processed according to their manufacturer's recommendations. (U.K.)
Methods for determining enzymatic activity comprising heating and agitation of closed volumes
Thompson, David Neil; Henriksen, Emily DeCrescenzo; Reed, David William; Jensen, Jill Renee
2016-03-15
Methods for determining thermophilic enzymatic activity include heating a substrate solution in a plurality of closed volumes to a predetermined reaction temperature. Without opening the closed volumes, at least one enzyme is added, substantially simultaneously, to the closed volumes. At the predetermined reaction temperature, the closed volumes are agitated and then the activity of the at least one enzyme is determined. The methods are conducive for characterizing enzymes of high-temperature reactions, with insoluble substrates, with substrates and enzymes that do not readily intermix, and with low volumes of substrate and enzyme. Systems for characterizing the enzymes are also disclosed.
International Nuclear Information System (INIS)
Carreira, M.
1965-01-01
To reduce solubility limitations, the cryoscopic method developed for benzene solutions of polyphenyl mixtures has been extended to diphenyl-ether solutions by introducing some modifications imposed by the physico-chemical properties of this solvent. The Nernst theory of Beckmann's method has been revised, taking into account the heat-transfer characteristics of the system, and the results of that analysis have been used to fix the design parameters of a cryoscopic apparatus for measurements on diphenyl-ether solutions. (Author) 9 refs
Evaluation of methods for MR imaging of human right ventricular heart volumes and mass
International Nuclear Information System (INIS)
Jauhiainen, T.; Jaervinen, V.M.; Hekali, P.E.
2002-01-01
Purpose: To assess the utility of two different imaging directions in the evaluation of human right ventricular (RV) heart volumes and mass with MR imaging; to compare breath-hold vs. non-breath-hold imaging in volume analysis; and to compare turbo inversion recovery imaging (TIR) with gradient echo imaging in RV mass measurement. Material and Methods: We examined 12 healthy volunteers (age 27-59 years). Breath-hold gradient echo MR imaging was performed in two imaging planes: 1) perpendicular to the RV inflow tract (RVIT view), and 2) in the transaxial view (TA view). The imaging was repeated in the TA view while the subjects were breathing freely. To analyze RV mass using TIR images, the RV was again imaged at end-diastole using the two views. The RV end-diastolic cavity (RVEDV) and muscle volume as well as end-systolic cavity volume (RVESV) were determined with the method of discs. All measurements were done blindly twice to assess repeatability of image analysis. To assess reproducibility of the measurements, 6 of the subjects were imaged twice at an interval of 5-9 weeks. Results: RVEDV averaged 133.2 ml, RVESV 61.5 ml and the RV mass 46.2 g in the RVIT view, and 119.9 ml, 56.9 ml and 38.3 g in the TA view, respectively. The volumes obtained with breath-holding were slightly but not significantly smaller than the volumes obtained during normal breathing. There were no marked differences in the RV muscle mass obtained with gradient echo imaging compared to TIR imaging in either view. Repeatability of volume analysis was better in the TA than in the RVIT view: the mean differences were 0.7±4.0 ml and 5.4±14.0 ml in end-diastole and 1.6±3.1 ml and 1.5±13.9 ml in end-systole, respectively. Repeatability of mass analysis was good in both TIR and cine images in the RVIT view but slightly better in TIR images: 0.5±2.4 g compared to 0.8±2.9 g in cine images. Reproducibility of imaging was good; mean differences for RVEDV and RVESV were 1.0±4.8 ml and 0.8±2.8 ml
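The method of discs used above multiplies each contoured slice area by the slice spacing and sums over the slices. A minimal sketch with hypothetical slice areas (not data from the study):

```python
def volume_by_discs(areas_cm2, slice_thickness_cm):
    """Summation-of-discs volume: V = sum(A_i) * t, in cm^3 (= ml)."""
    return sum(areas_cm2) * slice_thickness_cm

# Hypothetical RV cavity areas (cm^2) on 10 contiguous slices, 1 cm apart
areas = [2.0, 6.5, 10.8, 13.9, 15.6, 15.2, 13.0, 9.4, 5.1, 1.7]
edv_ml = volume_by_discs(areas, 1.0)
print(edv_ml)
```

With non-contiguous slices the thickness term would be the slice spacing (thickness plus gap) rather than the slice thickness alone.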
Control for the Three-Phase Four-Wire Four-Leg APF Based on SVPWM and Average Current Method
Directory of Open Access Journals (Sweden)
Xiangshun Li
2015-01-01
A novel control method is proposed for the three-phase four-wire four-leg active power filter (APF) to realize accurate, real-time compensation of power system harmonics; it combines space vector pulse width modulation (SVPWM) with a triangle modulation strategy. Firstly, the basic principle of the APF is briefly described. Then the harmonic and reactive currents are derived using the instantaneous reactive power theory. Finally, simulations and experiments are carried out to verify the validity and effectiveness of the proposed method. The simulation results show that the response time for compensation is about 0.025 s and that the total harmonic distortion (THD) of the phase-A source current is reduced from 33.38% before compensation to 3.05% with the APF.
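The THD figures quoted above can be computed from the spectrum of the source current. A sketch of one common definition (harmonic RMS over fundamental amplitude), evaluated on a synthetic waveform rather than the paper's APF data:

```python
import numpy as np

def thd(signal, fs, f0):
    """Total harmonic distortion: sqrt(sum of harmonic amplitudes^2) / fundamental."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n          # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, 1 / fs)
    fund = spec[np.argmin(np.abs(freqs - f0))]
    harm = [spec[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 10)]
    return np.sqrt(sum(h ** 2 for h in harm)) / fund

fs, f0 = 10_000, 50                                  # 10 kHz sampling, 50 Hz fundamental
t = np.arange(0, 1, 1 / fs)                          # exactly 50 periods -> no leakage
# fundamental plus 5th and 7th harmonics, as in typical rectifier loads
i = np.sin(2*np.pi*f0*t) + 0.2*np.sin(2*np.pi*5*f0*t) + 0.14*np.sin(2*np.pi*7*f0*t)
print(thd(i, fs, f0))  # ≈ sqrt(0.2^2 + 0.14^2) ≈ 0.244, i.e. 24.4%
```

An integer number of fundamental periods in the analysis window keeps the harmonic bins leakage-free; real measurements would window the signal first.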
Institute of Scientific and Technical Information of China (English)
谢佑卿
2011-01-01
In the framework of the systematic science of alloys, the average molar property (volume and potential energy) functions of disordered alloys were established. From these functions, the average molar property functions, partial molar property functions, derivative functions with respect to composition, the general equation relating partial and average molar properties of components, the difference equation and constraining equation between partial and average molar properties, as well as the general Gibbs-Duhem formula were derived. It was proved that the partial molar properties calculated from the various combinative functions of the average molar properties of alloys are equal, but in general the partial molar properties are not equal to the average molar properties of a given component; that is, the partial molar properties cannot represent the corresponding molar properties of the component. All the equations and functions established in this work were verified by calculating the partial and average atomic volumes of the components as well as the average atomic volumes of alloys in the Au-Ni system.
Directory of Open Access Journals (Sweden)
Qian Zhang
2014-01-01
The paper presents a framework for the construction of a Monte Carlo finite volume element method (MCFVEM) for the convection-diffusion equation with a random diffusion coefficient, which is described as a random field. We first approximate the continuous stochastic field by a finite number of random variables via the Karhunen-Loève expansion and transform the initial stochastic problem into a deterministic one with a high-dimensional parameter. Then we generate independent identically distributed approximations of the solution by sampling the coefficient of the equation and employing the finite volume element variational formulation. Finally the Monte Carlo (MC) method is used to compute the corresponding sample averages. The statistical error is estimated analytically and experimentally. A quasi-Monte Carlo (QMC) technique with Sobol sequences is also used to accelerate convergence, and experiments indicate that it can improve the efficiency of the Monte Carlo method.
Method and apparatus for probing relative volume fractions
Jandrasits, Walter G.; Kikta, Thomas J.
1998-01-01
A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction.
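The area arithmetic described in this abstract reduces to a linear interpolation between the two single-phase calibration areas. A sketch with hypothetical area values (the calibration numbers are made up):

```python
def volume_fraction(area_measured, area_phase1, area_phase2):
    """Volume fraction of phase 2 from impedance-curve areas:
    (A_phase1 - A_measured) / (A_phase1 - A_phase2)."""
    return (area_phase1 - area_measured) / (area_phase1 - area_phase2)

# Hypothetical calibration: sample zone all phase 1 -> area 40.0, all phase 2 -> 10.0
print(volume_fraction(25.0, 40.0, 10.0))  # 0.5 -> the zone is half phase 2 by volume
```

The endpoints behave as expected: an area of 40.0 gives a fraction of 0.0, and 10.0 gives 1.0.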
Energy Technology Data Exchange (ETDEWEB)
Venel, Y.; Garhi, H.; Baulieu, J.L.; Prunier-Aesch, C. [CHRU de Tours-Bretonneau, Service de Medecine Nucleaire, 37 - Tours (France); Muret, A. de [CHRU de Tours-Bretonneau, Service de Radiotherapie, 37 - Tours (France); Barillot, I. [CHRU de Tours-Bretonneau, Service d' Anatomopathologie, 37 - Tours (France)
2008-06-15
The {sup 18}F-F.D.G. PET has demonstrated its importance in oncology, for initial extension assessment and for evaluating the efficacy of antitumoral therapeutics. Several studies have attempted to prove its utility in defining tumoral volumes for conformal radiotherapy in non-small cell lung cancers. Some authors have suggested using a threshold of tumor uptake intensity at 40 or 50% of the maximal intensity. Black et al. determined contours with a linear regression formula on the mean semi-quantitative index of tumor uptake (standard uptake value): SUV{sub threshold} = 0.307 SUV{sub average} + 0.588. Nestle et al. took into account the background noise intensity and the mean intensity of the tumor: I{sub threshold} = {beta} I{sub average} + I{sub noise} with {beta} = 0.15. Our study was done in collaboration with the Inserm U618 team and compared volumes defined on PET scans according to different methods based on intensity or S.U.V. with the tumour volume determined on CT scan by a radiophysicist. We compared those volumes with the histological volume, which we took as the reference. Four patients were included. They had an {sup 18}F-F.D.G. PET scan followed by complete surgical removal of the tumor. A specific histological procedure allowed the complete size of the tumor to be defined in the re-expanded lung. Compared to pathology, the volumes obtained using I{sub max} 40 and I{sub max} 50 are all underestimated. The volumes defined by Black et al.'s method are underestimated for the two largest tumours (15.8% to 22%) and overestimated for the two smallest ones (17.9 to 82.9%). Nestle et al.'s method, using {beta} = 0.15, correctly estimates the two tumor volumes over 2 cm, but overestimates the two small tumors (79.6 to 124%). Finally, the corrected Nestle et al. formula (using {beta} = 0.264) overestimates three tumours. Volumes defined on CT scan by the radiophysicist are correct for one lesion, underestimated for one and overestimated for the two other ones (44 and 179.5%). Nestle
Urban Run-off Volumes Dependency on Rainfall Measurement Method
DEFF Research Database (Denmark)
Pedersen, L.; Jensen, N. E.; Rasmussen, Michael R.
2005-01-01
Urban run-off is characterized by fast response since the large surface run-off in the catchments responds immediately to variations in the rainfall. Modeling such catchments is most often done with input from very few rain gauges, but the large variation in rainfall over small areas... resolutions and single-gauge rainfall were fed to a MOUSE run-off model. The flow and total volume over the event are evaluated.
New method of assigning uncertainty in volume calibration
International Nuclear Information System (INIS)
Lechner, J.A.; Reeve, C.P.; Spiegelman, C.H.
1980-12-01
This paper presents a practical statistical overview of the pressure-volume calibration curve for large nuclear materials processing tanks. It explains the appropriateness of applying splines (piecewise polynomials) to this curve, and it presents an overview of the associated statistical uncertainties. In order to implement these procedures, a practical and portable FORTRAN IV program is provided along with its users' manual. Finally, the recommended procedure is demonstrated on actual tank data collected by NBS
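A pressure-volume calibration curve of the kind described can be evaluated with piecewise polynomials; the sketch below uses the simplest case, a degree-1 (piecewise-linear) spline via `np.interp`, with made-up calibration points. The NBS procedure fits higher-order splines and attaches statistical uncertainties, which this sketch omits.

```python
import numpy as np

# Hypothetical calibration points: differential pressure (kPa) vs. tank volume (L)
pressure = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
volume   = np.array([0.0, 410.0, 835.0, 1275.0, 1730.0, 2200.0])

def volume_from_pressure(p):
    """Piecewise-linear calibration curve (a degree-1 spline)."""
    return np.interp(p, pressure, volume)

print(volume_from_pressure(5.0))  # midpoint of the 4-6 kPa segment: 1055.0
```

Using piecewise pieces rather than a single global polynomial is exactly what lets the curve follow local changes in tank geometry without oscillating elsewhere.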
Yu, Yi-Lin; Lee, Meei-Shyuan; Juan, Chun-Jung; Hueng, Dueng-Yuan
2013-08-01
The ABC/2 equation is commonly applied to measure the volume of intracranial hematoma. However, the precision of the ABC/2 equation in estimating the tumor volume of acoustic neuromas is less well addressed. This study evaluates the accuracy of the ABC/2 formula by comparing it with the planimetry method for estimating tumor volumes. Thirty-two patients diagnosed with acoustic neuroma who underwent contrast-enhanced magnetic resonance imaging of the brain were recruited. The volume was calculated by the ABC/2 equation and the planimetry method (defined as the exact volume) at the same time. The 32 patients were divided into three groups by tumor volume to avoid volume-dependent overestimation (6 ml). The tumor volume by the ABC/2 method was highly correlated to that calculated by the planimetry method using linear regression analysis (R2=0.985), with a Pearson correlation coefficient of r=0.993. The ABC/2 formula is an easy method for estimating the tumor volume of acoustic neuromas that is not inferior to the planimetry method. Copyright © 2013 Elsevier B.V. All rights reserved.
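For reference, the ABC/2 rule approximates an ellipsoid's volume from three orthogonal diameters, replacing the exact factor π/6 ≈ 0.524 with 1/2. A sketch with illustrative diameters (not measurements from the study):

```python
import math

def abc_over_2(a_cm, b_cm, c_cm):
    """ABC/2 volume estimate (ml) from three orthogonal diameters (cm)."""
    return a_cm * b_cm * c_cm / 2.0

def ellipsoid_volume(a_cm, b_cm, c_cm):
    """Exact ellipsoid volume with the same diameters: (pi/6) * A * B * C."""
    return math.pi / 6.0 * a_cm * b_cm * c_cm

a, b, c = 3.0, 2.5, 2.0
print(abc_over_2(a, b, c), ellipsoid_volume(a, b, c))  # 7.5 vs ≈ 7.854
```

For a truly ellipsoidal lesion, ABC/2 therefore underestimates by about 4.5%; larger errors arise when the lesion shape departs from an ellipsoid, which is why planimetry serves as the reference.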
Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement
Directory of Open Access Journals (Sweden)
Joko Siswantoro
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high-priced computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have a low accuracy. Another method for measuring volume of objects uses Monte Carlo method. Monte Carlo method performs volume measurements using random points. Monte Carlo method only requires information regarding whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products without 3D reconstruction based on Monte Carlo method with heuristic adjustment. Five images of food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
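The Monte Carlo volume idea in this abstract can be sketched in a few lines: sample uniform points in a bounding box and scale the box volume by the fraction of points that fall inside the object. A unit sphere stands in for a food product here so that the answer is checkable against 4π/3 ≈ 4.1888; the paper's inside/outside test comes from multi-camera binary images instead.

```python
import numpy as np

def monte_carlo_volume(inside, box_min, box_max, n=200_000, seed=1):
    """Estimate volume as (fraction of random points inside) * bounding-box volume."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(box_min, box_max, size=(n, 3))
    box_vol = np.prod(np.asarray(box_max, float) - np.asarray(box_min, float))
    return inside(pts).mean() * box_vol

# Test object with known volume: the unit sphere
in_sphere = lambda p: (p ** 2).sum(axis=1) <= 1.0
est = monte_carlo_volume(in_sphere, [-1, -1, -1], [1, 1, 1])
print(est)  # close to 4*pi/3 ≈ 4.1888
```

The statistical error shrinks like 1/√n, so quadrupling the sample count halves the uncertainty, independently of the object's shape.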
Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.
Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high-priced computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have a low accuracy. Another method for measuring volume of objects uses Monte Carlo method. Monte Carlo method performs volume measurements using random points. Monte Carlo method only requires information regarding whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products without 3D reconstruction based on Monte Carlo method with heuristic adjustment. Five images of food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
Supplier Portfolio Selection and Optimum Volume Allocation: A Knowledge Based Method
Aziz, Romana; Aziz, R.; van Hillegersberg, Jos; Kersten, W.; Blecker, T.; Luthje, C.
2010-01-01
Selection of suppliers and allocation of optimum volumes to suppliers is a strategic business decision. This paper presents a decision support method for supplier selection and the optimal allocation of volumes in a supplier portfolio. The requirements for the method were gathered during a case
Lung lesion doubling times: values and variability based on method of volume determination
International Nuclear Information System (INIS)
Eisenbud Quint, Leslie; Cheng, Joan; Schipper, Matthew; Chang, Andrew C.; Kalemkerian, Gregory
2008-01-01
Purpose: To determine doubling times (DTs) of lung lesions based on volumetric measurements from thin-section CT imaging. Methods: Previously untreated patients with ≥ two thin-section CT scans showing a focal lung lesion were identified. Lesion volumes were derived using direct volume measurements and volume calculations based on lesion area and diameter. Growth rates (GRs) were compared by tissue diagnosis and measurement technique. Results: 54 lesions were evaluated including 8 benign lesions, 10 metastases, 3 lymphomas, 15 adenocarcinomas, 11 squamous carcinomas, and 7 miscellaneous lung cancers. Using direct volume measurements, median DTs were 453, 111, 15, 181, 139 and 137 days, respectively. Lung cancer DTs ranged from 23-2239 days. There were no significant differences in GRs among the different lesion types. There was considerable variability among GRs using different volume determination methods. Conclusions: Lung cancer doubling times showed a substantial range, and different volume determination methods gave considerably different DTs
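The doubling times above follow from assuming exponential growth between two scans: DT = Δt · ln 2 / ln(V₂/V₁). A sketch with illustrative volumes (not data from the study):

```python
import math

def doubling_time(v1, v2, days_between):
    """Volume doubling time in days: DT = dt * ln(2) / ln(V2/V1)."""
    return days_between * math.log(2) / math.log(v2 / v1)

# Illustrative: a lesion grows from 500 to 800 mm^3 over 90 days
print(doubling_time(500.0, 800.0, 90.0))  # ≈ 133 days
```

Note the formula is undefined when the volume is unchanged (V₂ = V₁) and returns a negative value for shrinking lesions, both of which need special handling in practice.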
A volume of fluid method based on multidimensional advection and spline interface reconstruction
International Nuclear Information System (INIS)
Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.
2004-01-01
A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps
Finite element volume methods: applications to the Navier-Stokes equations and convergence results
International Nuclear Information System (INIS)
Emonot, P.
1992-01-01
The first chapter describes the equations modeling incompressible fluid flow and gives a quick presentation of the finite volume method. The second chapter is an introduction to the finite element volume method: the box method is described and a method adapted to Navier-Stokes problems is proposed. The third chapter presents an error analysis of the finite element volume method for the Laplacian problem and some examples of one-, two- and three-dimensional calculations. The fourth chapter extends the error analysis of the method to the Navier-Stokes problem
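The kind of error analysis described can be illustrated on the simplest finite-volume discretization: a cell-centered scheme for the 1D Poisson problem, where halving the mesh size should cut the maximum error roughly fourfold (second-order convergence). This is a generic sketch, not the finite element volume ("box") scheme of the thesis.

```python
import numpy as np

def solve_poisson_fv(n):
    """Cell-centered finite-volume solve of -u'' = pi^2 sin(pi x) on (0,1),
    u(0) = u(1) = 0. Exact solution: u = sin(pi x). Returns max error."""
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h                    # cell centres
    main = np.full(n, 2.0)
    main[0] = main[-1] = 3.0                        # Dirichlet BC via ghost cells
    a = (np.diag(main)
         + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))
    f = np.pi ** 2 * np.sin(np.pi * x) * h ** 2     # scaled source term
    u = np.linalg.solve(a, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e1, e2 = solve_poisson_fv(50), solve_poisson_fv(100)
print(e1, e2, e1 / e2)  # error ratio ≈ 4 indicates second-order convergence
```

The ghost-cell treatment (`u_ghost = -u_0`, giving the boundary coefficient 3) enforces a zero value at the cell face; the observed ratio near 4 is exactly the convergence-rate evidence such analyses aim to prove.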
Energy Technology Data Exchange (ETDEWEB)
Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro, E-mail: m_nkmr@kuhp.kyoto-u.ac.jp; Matsuo, Yukinori; Ueki, Nami; Nakamura, Akira; Iizuka, Yusuke; Mampuya, Wambaka Ange; Mizowaki, Takashi; Hiraoka, Masahiro
2016-01-01
The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D{sub 95}, D{sub 90}, D{sub 50}, and D{sub 2} of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods; the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose
International Nuclear Information System (INIS)
Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro; Matsuo, Yukinori; Ueki, Nami; Nakamura, Akira; Iizuka, Yusuke; Mampuya, Wambaka Ange; Mizowaki, Takashi; Hiraoka, Masahiro
2016-01-01
The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D 95 , D 90 , D 50 , and D 2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods; the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose-calculation algorithm or the
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
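The barycenter approach discussed above averages rotation matrices componentwise, which leaves the rotation group; projecting the average back to SO(3) with an SVD yields the chordal least-squares mean. A numpy sketch on rotations about a common axis, where the correct mean angle is known:

```python
import numpy as np

def rot_z(theta):
    """Rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def project_to_so3(m):
    """Nearest rotation (Frobenius norm) to an arbitrary matrix, via SVD."""
    u, _, vt = np.linalg.svd(m)
    r = u @ vt
    if np.linalg.det(r) < 0:          # guard against reflections
        r = u @ np.diag([1.0, 1.0, -1.0]) @ vt
    return r

angles = [0.1, 0.2, 0.3, 0.4]
barycenter = sum(rot_z(a) for a in angles) / len(angles)   # NOT orthogonal
mean_rot = project_to_so3(barycenter)                      # chordal mean
mean_angle = np.arctan2(mean_rot[1, 0], mean_rot[0, 0])
print(mean_angle)  # 0.25 for this symmetric set of angles
```

For rotations spread about a common axis the projected barycenter coincides with the circular mean of the angles; for widely spread rotations it only approximates the Riemannian (geodesic) mean, which is the article's point.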
International Nuclear Information System (INIS)
Tuomikoski, Laura; Collan, Juhani; Keyrilaeinen, Jani; Visapaeae, Harri; Saarilahti, Kauko; Tenhunen, Mikko
2011-01-01
Background and purpose: To evaluate the benefits of adaptive radiotherapy for bladder cancer in decreasing irradiation of the small bowel. Material and methods: Five patients with muscle-invasive bladder cancer received adaptive radiotherapy to a total dose of 55.8-65 Gy with daily cone-beam computed tomography scanning. The whole bladder was treated to 45-50.4 Gy, followed by a partial bladder boost. The plan of the day was chosen from 3 to 4 pre-planned treatment plans according to the visible extent of the bladder wall in cone-beam computed tomography images. Dose volume histograms for intestinal cavity volumes were constructed and compared with corresponding histograms calculated for conventional non-adaptive radiotherapy with a single treatment plan using 2 cm CTV-PTV margins. CTV dose coverage in the adaptive treatment technique was compared with CTV dose coverage in conventional radiotherapy. Results: The average volume of intestinal cavity receiving ≥45 Gy was reduced from 335 ± 106 cm³ to 180 ± 113 cm³ (1SD). The maximum volume of intestinal cavity spared at 45 Gy for a single patient was 240 cm³, while the minimum volume was 65 cm³. The corresponding reduction in average intestinal cavity volume receiving ≥45 Gy calculated for the whole bladder treatment only was 66 ± 36 cm³. CTV dose coverage was improved in two of the five patients and decreased in three. Conclusions: Adaptive radiotherapy considerably reduces dose to the small bowel, while maintaining the dose coverage of the CTV at a similar level compared with the conventional treatment technique.
International Nuclear Information System (INIS)
Venel, Y.; Garhi, H.; Baulieu, J.L.; Prunier-Aesch, C.; Muret, A. de; Barillot, I.
2008-01-01
The 18 F-F.D.G. PET has demonstrated its importance in oncology, for initial extension assessment and for evaluating the efficacy of antitumoral therapeutics. Several studies have attempted to prove its utility in defining tumoral volumes for conformal radiotherapy in non-small cell lung cancers. Some authors have suggested using a threshold of tumor uptake intensity at 40 or 50% of the maximal intensity. Black et al. determined contours with a linear regression formula on the mean semi-quantitative index of tumor uptake (standard uptake value): SUV threshold = 0.307 SUV average + 0.588. Nestle et al. took into account the background noise intensity and the mean intensity of the tumor: I threshold = β I average + I noise with β = 0.15. Our study was done in collaboration with the Inserm U618 team and compared volumes defined on PET scans according to different methods based on intensity or S.U.V. with the tumour volume determined on CT scan by a radiophysicist. We compared those volumes with the histological volume, which we took as the reference. Four patients were included. They had an 18 F-F.D.G. PET scan followed by complete surgical removal of the tumor. A specific histological procedure allowed the complete size of the tumor to be defined in the re-expanded lung. Compared to pathology, the volumes obtained using I max 40 and I max 50 are all underestimated. The volumes defined by Black et al.'s method are underestimated for the two largest tumours (15.8% to 22%) and overestimated for the two smallest ones (17.9 to 82.9%). Nestle et al.'s method, using β = 0.15, correctly estimates the two tumor volumes over 2 cm, but overestimates the two small tumors (79.6 to 124%). Finally, the corrected Nestle et al. formula (using β = 0.264) overestimates three tumours. Volumes defined on CT scan by the radiophysicist are correct for one lesion, underestimated for one and overestimated for the two other ones (44 and 179.5%). Nestle et al.'s method seems to be the most accurate for tumours over 2 cm of
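The two contouring rules quoted in this abstract are simple linear formulas; expressed as functions, with illustrative input values only:

```python
def black_threshold(suv_mean):
    """Black et al. contour threshold: SUV_thr = 0.307 * SUV_mean + 0.588."""
    return 0.307 * suv_mean + 0.588

def nestle_threshold(i_mean_tumor, i_background, beta=0.15):
    """Nestle et al. contour threshold: I_thr = beta * I_mean + I_background."""
    return beta * i_mean_tumor + i_background

# Illustrative values: tumour mean SUV 6.0, background intensity 1.2
print(black_threshold(6.0))        # 2.43
print(nestle_threshold(6.0, 1.2))  # 2.1
```

Voxels whose uptake exceeds the threshold are assigned to the tumour contour; the study's point is that the appropriate β (0.15 vs. the corrected 0.264) depends on lesion size.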
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Protection Parameters against Cracks in a Dam by the Volume Compensation Method
Directory of Open Access Journals (Sweden)
Bulatov Georgiy
2016-01-01
This article estimates the parameters for protecting a dam against cracking by the volume compensation method. The method allows the safety parameters governing crack formation in the dam to be calculated. Graphs of the horizontal elongation strains of the calculated surface are presented along the length of the structure and over time, and diagrams of the horizontal stress distribution in the ground around the pile are shown in plan and in section. All the formulas necessary for the volume compensation method are given.
Sadleir, R J; Zhang, S U; Tucker, A S; Oh, Sungho
2008-08-01
Electrical impedance tomography (EIT) is particularly well-suited to applications where its portability, rapid acquisition speed and sensitivity give it a practical advantage over other monitoring or imaging systems. An EIT system's patient interface can potentially be adapted to match the target environment, and thereby increase its utility. It may thus be appropriate to use different electrode positions from those conventionally used in EIT in these cases. One application that may require this is the use of EIT on emergency medicine patients; in particular those who have suffered blunt abdominal trauma. In patients who have suffered major trauma, it is desirable to minimize the risk of spinal cord injury by avoiding lifting them. To adapt EIT to this requirement, we devised and evaluated a new electrode topology (the 'hemiarray') which comprises a set of eight electrodes placed only on the subject's anterior surface. Images were obtained using a two-dimensional sensitivity matrix and weighted singular value decomposition reconstruction. The hemiarray method's ability to quantify bleeding was evaluated by comparing its performance with conventional 2D reconstruction methods using data gathered from a saline phantom. We found that without applying corrections to reconstructed images it was possible to estimate blood volume in a two-dimensional hemiarray case with an uncertainty of around 27 ml. In an approximately 3D hemiarray case, volume prediction was possible with a maximum uncertainty of around 38 ml in the centre of the electrode plane. After application of a QI normalizing filter, average uncertainties in a two-dimensional hemiarray case were reduced to about 15 ml. Uncertainties in the approximate 3D case were reduced to about 30 ml.
International Nuclear Information System (INIS)
Tomimoto, Shigehiro; Nakatani, Satoshi; Tanaka, Norio; Uematsu, Masaaki; Beppu, Shintaro; Nagata, Seiki; Hamada, Seiki; Takamiya, Makoto; Miyatake, Kunio
1995-01-01
Acoustic quantification (AQ: the real-time automated boundary detection system) allows instantaneous measurement of cardiac chamber volumes. The feasibility of this method was evaluated by comparing the left ventricular (LV) volumes obtained with AQ to those derived from ultrafast computed tomography (UFCT), which enables accurate measurements of LV volumes even in the presence of LV asynergy, in 23 patients (8 with ischemic heart disease, 5 with cardiomyopathy, 3 with valvular heart disease). Both LV end-diastolic and end-systolic volumes obtained with the AQ method were in good agreement with those obtained with UFCT (y = 1.04x - 16.9, r = 0.95 and y = 0.87x + 15.7, r = 0.91, respectively). AQ was reliable even in the presence of LV asynergy. Interobserver variability for the AQ measurement was 10.2%. AQ provides a new, clinically useful method for real-time accurate estimation of the left ventricular volume. (author)
Energy Technology Data Exchange (ETDEWEB)
Tomimoto, Shigehiro; Nakatani, Satoshi; Tanaka, Norio; Uematsu, Masaaki; Beppu, Shintaro; Nagata, Seiki; Hamada, Seiki; Takamiya, Makoto; Miyatake, Kunio [National Cardiovascular Center, Suita, Osaka (Japan)
1995-01-01
Acoustic quantification (AQ: the real-time automated boundary detection system) allows instantaneous measurement of cardiac chamber volumes. The feasibility of this method was evaluated by comparing the left ventricular (LV) volumes obtained with AQ to those derived from ultrafast computed tomography (UFCT), which enables accurate measurements of LV volumes even in the presence of LV asynergy, in 23 patients (8 with ischemic heart disease, 5 with cardiomyopathy, 3 with valvular heart disease). Both LV end-diastolic and end-systolic volumes obtained with the AQ method were in good agreement with those obtained with UFCT (y = 1.04x - 16.9, r = 0.95 and y = 0.87x + 15.7, r = 0.91, respectively). AQ was reliable even in the presence of LV asynergy. Interobserver variability for the AQ measurement was 10.2%. AQ provides a new, clinically useful method for real-time accurate estimation of the left ventricular volume. (author)
Quantification and variability in colonic volume with a novel magnetic resonance imaging method
DEFF Research Database (Denmark)
Nilsson, M; Sandberg, Thomas Holm; Poulsen, Jakob Lykke
2015-01-01
Background: Segmental distribution of colorectal volume is relevant in a number of diseases, but clinical and experimental use demands robust reliability and validity. Using a novel semi-automatic magnetic resonance imaging-based technique, the aims of this study were to describe: (i) inter-individual and intra-individual variability of segmental colorectal volumes between two observations in healthy subjects and (ii) the change in segmental colorectal volume distribution before and after defecation. Methods: The inter-individual and intra-individual variability of four colorectal volumes (cecum ...) ... (p = 0.02). Conclusions & Inferences: Imaging of segmental colorectal volume, morphology, and fecal accumulation is advantageous over conventional methods in its low variability, high spatial resolution, and its absence of contrast-enhancing agents and irradiation. Hence, the method is suitable ...
Analysis of one-dimensional nonequilibrium two-phase flow using control volume method
International Nuclear Information System (INIS)
Minato, Akihiko; Naitoh, Masanori
1987-01-01
A one-dimensional numerical analysis model was developed for the prediction of rapid flow transient behavior involving boiling. The model was based on six conservation equations for time-averaged parameters of gas and liquid behavior. These equations were solved using a control volume method with explicit time integration. The model did not use a staggered mesh scheme, which had been commonly used in two-phase flow analysis. Because the void fraction and the velocity of each phase were defined at the same location in the present model, the effect of void fraction on the phase velocity calculation was treated directly, without interpolation. Though a non-staggered mesh scheme is liable to numerical instability with a zigzag pressure field, stability was achieved by employing the Godunov method. In order to verify the present analytical model, Edwards' pipe blowdown and Zaloudek's initially subcooled critical two-phase flow experiments were analyzed. Stable solutions were obtained for rarefaction wave propagation with boiling and for transient two-phase flow behavior in a broken pipe. (author)
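The control-volume update with a Godunov-type flux described above can be illustrated on a scalar model problem. This sketch applies an explicit, non-staggered finite-volume step with upwind (Godunov) fluxes to linear advection on a periodic domain; the grid size, CFL number, and initial profile are all illustrative, and the full six-equation two-phase model is far beyond this toy.

```python
import numpy as np

nx, L, a = 100, 1.0, 1.0            # cells, domain length, advection speed
dx = L / nx
dt = 0.5 * dx / a                   # CFL number = 0.5 for stability
x = (np.arange(nx) + 0.5) * dx      # cell-centre coordinates
u = np.exp(-200.0 * (x - 0.3) ** 2) # initial cell-average profile

mass0 = u.sum() * dx                # integral of u, conserved by construction
for _ in range(50):
    # Godunov flux for linear advection with a > 0 is simply the upwind
    # cell value; each cell is updated from the net flux over its faces.
    F = a * u                               # flux through each cell's right face
    u = u - dt / dx * (F - np.roll(F, 1))   # periodic boundary conditions

mass = u.sum() * dx
```

Because every flux leaving one cell enters its neighbour, the scheme conserves the total integral of u to round-off, which is the defining property of a control-volume discretization.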
Variation in Measurements of Transtibial Stump Model Volume: A Comparison of Five Methods
Bolt, A.; de Boer-Wilzing, V. G.; Geertzen, J. H. B.; Emmelot, C. H.; Baars, E. C. T.; Dijkstra, P. U.
Objective: To determine the right moment for fitting the first prosthesis, it is necessary to know when the volume of the stump has stabilized. The aim of this study is to analyze variation in measurements of transtibial stump model volumes using the water immersion method, the Design TT system, the
Variant of a volume-of-fluid method for surface tension-dominant two ...
Indian Academy of Sciences (India)
2013-12-27
Dec 27, 2013 ... surface tension-dominant two-phase flows are explained. ... for one particular fluid inside a cell as its material volume divided by the total ... the reconstructed interface and the velocity field, and the final part ... Welch S W J and Wilson J 2000 A volume of fluid based method for fluid flows with phase change. J.
Critical length sampling: a method to estimate the volume of downed coarse woody debris
Göran Ståhl; Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey
2010-01-01
In this paper, critical length sampling for estimating the volume of downed coarse woody debris is presented. Using this method, the volume of downed wood in a stand can be estimated by summing the critical lengths of down logs included in a sample obtained using a relascope or wedge prism; typically, the instrument should be tilted 90° from its usual...
Hybrid finite-volume/transported PDF method for the simulation of turbulent reactive flows
Raman, Venkatramanan
A novel computational scheme is formulated for simulating turbulent reactive flows in complex geometries with detailed chemical kinetics. A Probability Density Function (PDF) based method that handles the scalar transport equation is coupled with an existing Finite Volume (FV) Reynolds-Averaged Navier-Stokes (RANS) flow solver. The PDF formulation leads to closed chemical source terms and facilitates the use of detailed chemical mechanisms without approximations. The particle-based PDF scheme is modified to handle complex geometries and grid structures. Grid-independent particle evolution schemes that scale linearly with the problem size are implemented in the Monte-Carlo PDF solver. A novel algorithm, in situ adaptive tabulation (ISAT), is employed to ensure tractability of complex chemistry involving a multitude of species. Several non-reacting test cases are performed to ascertain the efficiency and accuracy of the method. Simulation results from a turbulent jet-diffusion flame case are compared against experimental data. The effects of the micromixing model, turbulence model and reaction scheme on flame predictions are discussed extensively. Finally, the method is used to analyze the Dow Chlorination Reactor. Detailed kinetics involving 37 species and 158 reactions as well as a reduced form with 16 species and 21 reactions are used. The effect of inlet configuration on reactor behavior and product distribution is analyzed. Plant-scale reactors exhibit quenching phenomena that cannot be reproduced by conventional simulation methods. The FV-PDF method predicts quenching accurately and provides insight into the dynamics of the reactor near extinction. The accuracy of the fractional time-stepping technique is discussed in the context of apparent multiple steady states observed in a non-premixed feed configuration of the chlorination reactor.
International Nuclear Information System (INIS)
Liang, Hongbo; Fan, Man; You, Shijun; Zheng, Wandong; Zhang, Huan; Ye, Tianzhen; Zheng, Xuejing
2017-01-01
Highlights: •Four optical models for parabolic trough solar collectors were compared in detail. •Characteristics of the Monte Carlo Method and the Finite Volume Method were discussed. •A novel method combining the advantages of different models was presented. •The method is suited to optical analysis of collectors with different geometries. •A new kind of cavity receiver was simulated using the novel method. -- Abstract: The PTC (parabolic trough solar collector) is widely used for space heating, heat-driven refrigeration, solar power, etc. The concentrated solar radiation is the only energy source for a PTC, thus its optical performance significantly affects the collector efficiency. In this study, four different optical models were constructed, validated and compared in detail. On this basis, a novel coupled method was presented by combining the advantages of these models, suited to carrying out a large number of optical simulations of collectors with different geometrical parameters rapidly and accurately. Based on these simulation results, the optimal configuration of a collector with the highest efficiency can be determined; thus, this method is useful for collector optimization and design. In the four models, MCM (Monte Carlo Method) and FVM (Finite Volume Method) were used to initialize the photon distribution, while CPEM (Change Photon Energy Method) and MCM were adopted to describe the processes of reflection, transmission and absorption. For simulating reflection, transmission and absorption, CPEM was more efficient than MCM, so it was utilized in the coupled method. For photon distribution initialization, FVM saved running time and computational effort, whereas it needed a suitable grid configuration. MCM only required a total number of rays for simulation, whereas it needed higher computing cost and its results fluctuated in multiple runs. In the novel coupled method, the grid configuration for FVM was optimized according to the “true values” from MCM of
Incineration method for volume reduction and disposal of transuranic waste
International Nuclear Information System (INIS)
Borham, B.M.
1985-01-01
The Process Experimental Pilot Plant (PREPP) at Idaho National Engineering Laboratory (INEL) is designed to process 7 TPD of transuranic (TRU) waste producing 8.5 TPD of cemented waste and 4100 ACFM of combustion gases with a volume reduction of up to 17:1. The waste and its container are shredded then fed to a rotary kiln heated to 1700 °F, then cooled and classified by a trommel screen. The fine portion is mixed with a cement grout which is placed with the coarse portion in steel drums for disposal at the Waste Isolation Pilot Plant (WIPP). The kiln off-gas is reheated to 2000 °F to destroy any remaining hydrocarbons and toxic volatiles. The gases are cooled and passed in a venturi scrubber to remove particulates and corrosive gases. The venturi off-gas is passed through a mist eliminator and is reheated to 50 °F above the dew point prior to passing through a High Efficiency Particulate Air (HEPA) filter. The scrub solution is concentrated to 25% solids by an inertial filter. The sludge containing the combustion chemical contaminants is encapsulated with the residue of the incinerated waste.
George, D. L.; Iverson, R. M.
2012-12-01
Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g. hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient, etc.). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolutions, so that computationally inexpensive coarse grids can be used where the flow is absent, and
Volume Equalization Method for Land Grading Design: Uniform ...
African Journals Online (AJOL)
Muğla Üniversitesi
2011-05-23
May 23, 2011 ... Land grading has been in practice for a long time, but land-grading ... method was based on least-squares theory and he showed its ... Srinisava (1996) developed a nonlinear optimization.
International Nuclear Information System (INIS)
Töger, Johannes; Carlsson, Marcus; Söderlind, Gustaf; Arheden, Håkan; Heiberg, Einar
2011-01-01
Functional and morphological changes of the heart influence blood flow patterns. Therefore, flow patterns may carry diagnostic and prognostic information. Three-dimensional, time-resolved, three-directional phase contrast cardiovascular magnetic resonance (4D PC-CMR) can image flow patterns with unique detail, and using new flow visualization methods may lead to new insights. The aim of this study is to present and validate a novel visualization method with a quantitative potential for blood flow from 4D PC-CMR, called Volume Tracking, and investigate if Volume Tracking complements particle tracing, the most common visualization method used today. Eight healthy volunteers and one patient with a large apical left ventricular aneurysm underwent 4D PC-CMR flow imaging of the whole heart. Volume Tracking and particle tracing visualizations were compared visually side-by-side in a visualization software package. To validate Volume Tracking, the number of particle traces that agreed with the Volume Tracking visualizations was counted and expressed as a percentage of total released particles in mid-diastole and end-diastole respectively. Two independent observers described blood flow patterns in the left ventricle using Volume Tracking visualizations. Volume Tracking was feasible in all eight healthy volunteers and in the patient. Visually, Volume Tracking and particle tracing are complementary methods, showing different aspects of the flow. When validated against particle tracing, on average 90.5% and 87.8% of the particles agreed with the Volume Tracking surface in mid-diastole and end-diastole respectively. Inflow patterns in the left ventricle varied between the subjects, with excellent agreement between observers. The left ventricular inflow pattern in the patient differed from the healthy subjects. Volume Tracking is a new visualization method for blood flow measured by 4D PC-CMR. Volume Tracking complements and provides incremental information compared to particle
Development of production methods for volume sources based on hardening resinous solutions
Motoki, R
2002-01-01
Volume sources are used as standard sources for radioactivity measurements, with a Ge semiconductor detector, of environmental samples such as water and soil that require a large volume. The commercial volume source used in measurement of water samples is made of agar-agar, and that used in measurement of soil samples is made of alumina powder. When the plastic receptacles of these two kinds of volume sources are damaged, leakage of the contents causes contamination. Moreover, if the hermetic sealing of a volume source made of agar-agar fails, the volume decrease due to evaporation of moisture introduces an error into the radioactivity measurement. Therefore, we developed two methods using unsaturated polyester resin, vinylester resin, their hardening agents and acrylic resin. The first type disperses the hydrochloric acid solution containing the radioisotopes uniformly in each resin and hardens the resin. The second disperses the alumina powder that has absorbed the radioisotopes in each resin an...
A finite volume method for cylindrical heat conduction problems based on local analytical solution
Li, Wang
2012-10-01
A new finite volume method for cylindrical heat conduction problems based on local analytical solution is proposed in this paper with detailed derivation. The calculation results of this new method are compared with the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, even though the discretized expression of this proposed method is slightly more complex than the second-order central finite volume method, making it cost more calculation time on the same grids. Numerical result shows that the total CPU time of the new method is significantly less than conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.
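For contrast with the paper's locally-analytical scheme, the conventional second-order finite volume discretization it is compared against can be sketched for steady heat conduction in a cylindrical annulus and checked against the exact logarithmic temperature profile. All geometry and boundary values below are assumed for illustration.

```python
import numpy as np

n, r1, r2 = 50, 1.0, 2.0        # cells, inner/outer radii (illustrative)
T1, T2 = 100.0, 0.0             # fixed wall temperatures
dr = (r2 - r1) / n
rc = r1 + (np.arange(n) + 0.5) * dr   # cell centres
rf = r1 + np.arange(n + 1) * dr       # cell faces

# Flux balance per cell for (1/r) d/dr(r dT/dr) = 0: the face conductance is
# r_face / (distance between temperature nodes); boundary faces use half cells.
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    aw = rf[i] / dr if i > 0 else rf[0] / (dr / 2.0)
    ae = rf[i + 1] / dr if i < n - 1 else rf[n] / (dr / 2.0)
    A[i, i] = -(aw + ae)
    if i > 0:
        A[i, i - 1] = aw
    else:
        b[i] -= aw * T1
    if i < n - 1:
        A[i, i + 1] = ae
    else:
        b[i] -= ae * T2
T = np.linalg.solve(A, b)

# Exact steady solution between two isothermal cylindrical surfaces.
T_exact = T1 + (T2 - T1) * np.log(rc / r1) / np.log(r2 / r1)
err = np.max(np.abs(T - T_exact))
```

The locally-analytical variant described in the abstract would, in effect, build the logarithmic profile into the face conductances rather than using the linear approximation above.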
A finite volume method for cylindrical heat conduction problems based on local analytical solution
Li, Wang; Yu, Bo; Wang, Xinran; Wang, Peng; Sun, Shuyu
2012-01-01
A new finite volume method for cylindrical heat conduction problems based on local analytical solution is proposed in this paper with detailed derivation. The calculation results of this new method are compared with the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, even though the discretized expression of this proposed method is slightly more complex than the second-order central finite volume method, making it cost more calculation time on the same grids. Numerical result shows that the total CPU time of the new method is significantly less than conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.
Caltrans Average Annual Daily Traffic Volumes (2004)
California Environmental Health Tracking Program — [ from http://www.ehib.org/cma/topic.jsp?topic_key=79 ] Traffic exhaust pollutants include compounds such as carbon monoxide, nitrogen oxides, particulates (fine...
ABC/2 Method Does not Accurately Predict Cerebral Arteriovenous Malformation Volume.
Roark, Christopher; Vadlamudi, Venu; Chaudhary, Neeraj; Gemmete, Joseph J; Seinfeld, Joshua; Thompson, B Gregory; Pandey, Aditya S
2018-02-01
Stereotactic radiosurgery (SRS) is a treatment option for cerebral arteriovenous malformations (AVMs) to prevent intracranial hemorrhage. The decision to proceed with SRS is usually based on calculated nidal volume. Physicians commonly use the ABC/2 formula, based on digital subtraction angiography (DSA), when counseling patients for SRS. To determine whether AVM volume calculated using the ABC/2 method on DSA is accurate when compared to the exact volume calculated from thin-cut axial sections used for SRS planning, a retrospective search of a neurovascular database identified AVMs treated with SRS from 1995 to 2015. Maximum nidal diameters in orthogonal planes on DSA images were recorded to determine volume using the ABC/2 formula. Nidal target volume was extracted from operative reports of SRS. Volumes were then compared using descriptive statistics and paired t-tests. Ninety intracranial AVMs were identified. Median volume was 4.96 cm3 [interquartile range (IQR) 1.79-8.85] with SRS planning methods and 6.07 cm3 (IQR 1.3-13.6) with the ABC/2 methodology. Moderate correlation was seen between SRS and ABC/2 volumes (r = 0.662), but SRS volumes were significantly lower than ABC/2 volumes (t = -3.2; P = .002). When AVMs were dichotomized based on ABC/2 volume, significant differences remained (t = 3.1, P = .003 for ABC/2 volumes ≤ 7 cm3, and likewise for ABC/2 volumes > 7 cm3). The ABC/2 method overestimates cerebral AVM volume when compared to volumetric analysis from SRS planning software. For AVMs > 7 cm3, the overestimation is even greater. SRS planning techniques were also significantly different than values derived from equations for cones and cylinders. Copyright © 2017 by the Congress of Neurological Surgeons
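The ABC/2 formula approximates the volume of an ellipsoid, pi*A*B*C/6, by replacing pi/6 (about 0.524) with 1/2, so it is approximate even for a perfectly ellipsoidal nidus; the larger errors reported above come from irregular, non-ellipsoidal shapes. A minimal sketch with hypothetical diameters:

```python
import math

def abc_over_2(a, b, c):
    """ABC/2 estimate: product of the three orthogonal maximum diameters over 2."""
    return a * b * c / 2.0

def ellipsoid_volume(a, b, c):
    """Exact volume of an ellipsoid with diameters a, b, c: pi*a*b*c/6."""
    return math.pi * a * b * c / 6.0

a, b, c = 3.0, 2.5, 2.0        # hypothetical nidal diameters in cm
v_abc = abc_over_2(a, b, c)
v_ell = ellipsoid_volume(a, b, c)
ratio = v_abc / v_ell          # = 3/pi for any diameters, i.e. ~0.955
```

For a true ellipsoid the ABC/2 value is thus a slight underestimate; the overestimation found in the study reflects how far real nidi depart from the ellipsoid assumption.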
A New Volume-Of-Fluid Method in Openfoam
DEFF Research Database (Denmark)
Pedersen, Johan Rønby; Eltard-Larsen, Bjarke; Bredmose, Henrik
methods have become quite advanced and accurate on structured meshes, there is still room for improvement when it comes to unstructured meshes of the type needed to simulate flows in and around complex geometric structures. We have recently developed a new geometric VOF algorithm called isoAdvector for general ... limited interface compression, with the new isoAdvector method. Our test case is a steady 2D stream function wave propagating in a periodic domain. Based on a series of simulations with different numerical settings, we conclude that the introduction of isoAdvector has a significant effect on wave ...
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
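The barycenter estimate discussed above can be sketched for unit quaternions. Sign alignment is needed because q and -q represent the same rotation, and the renormalized arithmetic mean is only a first-order approximation to the Riemannian mean; this is an illustrative implementation, not the paper's.

```python
import numpy as np

def quaternion_barycenter(quats):
    """Barycenter-style rotation average: align signs (q and -q encode the
    same rotation), take the arithmetic mean, and project back onto the unit
    sphere. This is the simple estimate compared against the Riemannian mean."""
    q = np.asarray(quats, dtype=float)
    ref = q[0]
    aligned = np.where((q @ ref)[:, None] < 0.0, -q, q)  # flip antipodal copies
    m = aligned.mean(axis=0)
    return m / np.linalg.norm(m)

# Rotations of -10, 0, +10 degrees about the z-axis, as quaternions
# (cos(t/2), 0, 0, sin(t/2)); by symmetry their mean is the identity.
angles = np.deg2rad([-10.0, 0.0, 10.0])
quats = [[np.cos(t / 2.0), 0.0, 0.0, np.sin(t / 2.0)] for t in angles]
q_mean = quaternion_barycenter(quats)
```

For widely spread rotations the barycenter drifts off the geodesic mean, which is exactly the deficiency the Riemannian-metric approach corrects.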
International Nuclear Information System (INIS)
Fehlau, P.E.
1993-01-01
The author compared a recursive digital filter, proposed as a detection method for French special nuclear material monitors, with the author's own detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Nine test subjects each repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source was also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities of detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal.
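A moving-average scaler test of the kind mentioned above can be sketched as follows. All rates, the window length, and the 4-sigma threshold are assumed values for illustration, not those of the monitors studied.

```python
import numpy as np

rng = np.random.default_rng(1)
background_rate = 50.0   # mean counts per counting interval (hypothetical)
N = 10                   # moving-average window length (hypothetical)
# Alarm threshold ~4 standard deviations above the mean of an N-sample
# average of Poisson counts with the assumed background rate.
threshold = background_rate + 4.0 * np.sqrt(background_rate / N)

counts = rng.poisson(background_rate, size=200).astype(float)
counts[120:140] += 40.0  # a carried source raises the count rate briefly

# Moving average of the last N intervals, compared against the threshold.
kernel = np.ones(N) / N
avg = np.convolve(counts, kernel, mode="valid")
alarm = avg > threshold
```

Unlike the recursive filter criticized above, the moving average "forgets" an interference transient completely after N intervals instead of decaying exponentially.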
Estimation method for volumes of hot spots created by heavy ions
International Nuclear Information System (INIS)
Kanno, Ikuo; Kanazawa, Satoshi; Kajii, Yuji
1999-01-01
A simple and convenient method for estimating the volumes of hot spots is described in terms of the ratio of the hot-spot volume to that of a cone with the same length and bottom radius. This calculation method is useful for studying the damage production mechanism in hot spots, and is also convenient for estimating the electron-hole densities in plasma columns created by heavy ions in semiconductor detectors. (author)
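The reference cone volume underlying the ratio can be written down directly; the ratio value used below is a placeholder, since the paper's estimated ratios are not reproduced here, and the track dimensions are illustrative.

```python
import math

def cone_volume(radius, length):
    """Volume of a cone with bottom radius `radius` and height `length`,
    the reference shape to which the hot-spot volume is ratioed:
    V = pi * r^2 * L / 3."""
    return math.pi * radius ** 2 * length / 3.0

def hot_spot_volume(radius, length, ratio):
    """Hot-spot volume estimated as `ratio` times the enclosing cone volume;
    the value of `ratio` here is a placeholder, not the paper's estimate."""
    return ratio * cone_volume(radius, length)

v_cone = cone_volume(radius=5e-9, length=10e-6)    # illustrative track size (m)
v_spot = hot_spot_volume(5e-9, 10e-6, ratio=0.6)   # hypothetical ratio
```

Given the ratio, the electron-hole density in the plasma column follows by dividing the deposited charge by the estimated hot-spot volume.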
Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris
2015-07-17
Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.
Finite Volume Method for Pricing European Call Option with Regime-switching Volatility
Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM
2018-03-01
In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then use a fitted finite volume method for the spatial discretization together with an implicit time-stepping technique. We show that the regime-switching scheme reverts to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.
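The non-switching limit mentioned above has the classical closed-form Black-Scholes price, which makes a natural validation reference for a finite volume solver. A minimal sketch, with parameter values that are illustrative only:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes European call price (constant volatility).
    This is the non-switching limit that a regime-switching scheme should
    revert to, and a reference for checking a finite volume discretization."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# At-the-money example: spot 100, strike 100, 5% rate, 20% vol, 1 year.
price = bs_call(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
```

A fitted finite volume solver with all regimes set to the same volatility should converge to these closed-form values as the mesh is refined.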
International Nuclear Information System (INIS)
Odano, Ikuo; Takahashi, Naoya; Ohtaki, Hiroh; Noguchi, Eikichi; Hatano, Masayoshi; Yamasaki, Yoshihiro; Nishihara, Mamiko; Ohkubo, Masaki; Yokoi, Takashi.
1993-01-01
We developed a new graphic method using N-isopropyl-p-[123I]iodoamphetamine (IMP) and SPECT of the brain, on which all three parameters, cerebral blood flow, distribution volume (Vd) and delayed-to-early count ratio (Delayed/Early ratio), can be evaluated simultaneously. The kinetics of 123I-IMP in the brain was analyzed with a 2-compartment model, and a standard input function was prepared by averaging the arterial time-activity curves of 123I-IMP in 6 patients with small cerebral infarction etc., including 2 normal controls. Applying this method to the differential diagnosis between Parkinson's disease and progressive supranuclear palsy, we were able to differentiate the two at a glance, because the distribution volume of the frontal lobe was significantly decreased in Parkinson's disease (mean±SD: 26±6 ml/g). This method was clinically useful. We think that the distribution volume of 123I-IMP may reflect its retention mechanism in the brain, and that the values are related to amines, especially to dopamine receptors and their metabolism. (author)
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers a general procedure for the measurement of the fast-neutron fluence rate produced by neutron generators utilizing the 3H(d,n)4He reaction. Neutrons so produced are usually referred to as 14-MeV neutrons, but range in energy depending on a number of factors. This test method does not adequately cover fusion sources where the velocity of the plasma may be an important consideration. 1.2 This test method uses threshold activation reactions to determine the average energy of the neutrons and the neutron fluence at that energy. At least three activities, chosen from an appropriate set of dosimetry reactions, are required to characterize the average energy and fluence. The required activities are typically measured by gamma ray spectroscopy. 1.3 The measurement of reaction products in their metastable states is not covered. If the metastable state decays to the ground state, the ground state reaction may be used. 1.4 The values stated in SI units are to be regarded as standard. No oth...
Errors of the backextrapolation method in determination of the blood volume
Schröder, T.; Rösler, U.; Frerichs, I.; Hahn, G.; Ennker, J.; Hellige, G.
1999-01-01
Backextrapolation is an empirical method to calculate the central volume of distribution (for example, the blood volume). It is based on the compartment model, which assumes that after an injection the substance is distributed instantaneously in the central volume with no time delay; the occurrence of recirculation is not taken into account. The change of indocyanine green (ICG) concentration with time was observed in an in vitro model, in which the volume recirculated in 60 s and the clearance of the ICG could be varied. It was found that the higher the elimination of ICG, the higher the error of the backextrapolation method, confirming the theoretical consideration of Schröder et al (Biomed. Tech. 42 (1997) 7-11). If the injected substance is eliminated somewhere in the body (i.e. not by radioactive decay), the backextrapolation method produces large errors.
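The backextrapolation procedure itself can be sketched on synthetic data: fit the late, log-linear part of the dilution curve, extrapolate back to the injection time, and divide the dose by the extrapolated concentration. The data below follow the ideal instantaneous-mixing one-compartment model, for which the method is exact; the errors discussed above appear precisely when real curves (slow mixing, recirculation) violate that assumption. All numbers are illustrative.

```python
import numpy as np

dose = 25.0    # mg of indicator (e.g. ICG), hypothetical
V_true = 5.0   # litres, true central volume of distribution
k = 0.15       # 1/min, elimination rate constant (hypothetical)

# Samples taken after mixing is complete: ideal monoexponential decay
# C(t) = (dose / V) * exp(-k * t), which is log-linear in time.
t = np.linspace(2.0, 10.0, 9)
C = (dose / V_true) * np.exp(-k * t)

# Backextrapolation: straight-line fit of ln C vs t, extrapolated to t = 0.
slope, intercept = np.polyfit(t, np.log(C), 1)
C0 = np.exp(intercept)       # extrapolated concentration at injection time
V_est = dose / C0            # central volume estimate, dose / C(0)
```

With elimination acting during a finite mixing phase, the fitted intercept no longer equals dose/V, which is the error mechanism quantified in the study.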
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Method of monitoring volume activity of natural radioactive aerosol
International Nuclear Information System (INIS)
Dvorak, V.
1980-01-01
The method of monitoring the radioactivity of an aerosol trapped, e.g., with a filter is based on counting quasi-coincidences of the RaC-RaC' and ThC-ThC' decays. The first electronic unit counts quasi-coincidences over a time interval proportional to the ThC' half-life, while the second electronic unit counts quasi-coincidences over a time interval proportional to the RaC' half-life, reduced by the time interval of the first electronic unit. The quasi-coincidences of the RaC-RaC' decay are evaluated independently of the ThC-ThC' quasi-coincidences, and the decay products of the trapped radon and thoron gases are thus offset separately. (J.P.)
Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 4, Organic methods
Energy Technology Data Exchange (ETDEWEB)
1993-08-01
This interim notice covers the following: extractable organic halides in solids, total organic halides, analysis by gas chromatography/Fourier transform-infrared spectroscopy, hexadecane extracts for volatile organic compounds, GC/MS analysis of VOCs, GC/MS analysis of methanol extracts of cryogenic vapor samples, screening of semivolatile organic extracts, GPC cleanup for semivolatiles, sample preparation for GC/MS for semi-VOCs, analysis for pesticides/PCBs by GC with electron capture detection, sample preparation for pesticides/PCBs in water and soil sediment, report preparation, Florisil column cleanup for pesticides/PCBs, silica gel and acid-base partition cleanup of samples for semi-VOCs, concentrate acid wash cleanup, carbon determination in solids using Coulometrics' CO2 coulometer, determination of total carbon/total organic carbon/total inorganic carbon in radioactive liquids/soils/sludges by the hot persulfate method, analysis of solids for carbonates using Coulometrics' Model 5011 coulometer, and soxhlet extraction.
Calculating regional tissue volume for hyperthermic isolated limb perfusion: Four methods compared.
Cecchin, D; Negri, A; Frigo, A C; Bui, F; Zucchetta, P; Bodanza, V; Gregianin, M; Campana, L G; Rossi, C R; Rastrelli, M
2016-12-01
Hyperthermic isolated limb perfusion (HILP) can be performed as an alternative to amputation for soft tissue sarcomas and melanomas of the extremities. Melphalan and tumor necrosis factor-alpha are used at a dosage that depends on the volume of the limb. Regional tissue volume is traditionally measured for the purposes of HILP using water displacement volumetry (WDV). Although this technique is considered the gold standard, it is time-consuming and complicated to implement, especially in obese and elderly patients. The aim of the present study was to compare the different methods described in the literature for calculating regional tissue volume in the HILP setting, and to validate an open source software. We reviewed the charts of 22 patients (11 males and 11 females) who had non-disseminated melanoma with in-transit metastases or sarcoma of the lower limb. We calculated the volume of the limb using four different methods: WDV, tape measurements, and segmentation of computed tomography images using the Osirix and Oncentra Masterplan software packages. The overall comparison provided a concordance correlation coefficient (CCC) of 0.92 for the calculations of whole limb volume. In particular, when Osirix was compared with Oncentra (validated for volume measures and used in radiotherapy), the concordance was near-perfect for the calculation of the whole limb volume (CCC = 0.99). With CT-based methods, the user can choose a reliable plane for segmentation purposes. CT-based methods also provide the opportunity to separate the whole limb volume into defined tissue volumes (cortical bone, fat and water). Copyright © 2016 Elsevier Ltd. All rights reserved.
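The concordance correlation coefficient used in the comparison above can be computed with Lin's formula; a minimal sketch (hypothetical helper, not the study's code):

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two series of
    limb-volume measurements (e.g. one per method, same patients)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # CCC penalizes both poor correlation and systematic location/scale shifts
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```

Unlike Pearson's r, the CCC drops below 1 when one method is systematically biased, which is why it is the usual statistic in method-agreement studies.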
Comparative methods for quantifying thyroid volume using planar imaging and SPECT
International Nuclear Information System (INIS)
Zaidi, H.
1996-01-01
Thyroid volume determination using planar imaging is a procedure often performed in routine nuclear medicine, but it is hampered by several physical difficulties, in particular by structures which overlie or underlie the organ of interest. SPECT enables improved accuracy over planar imaging in volume determination, since the volume is derived from 3-D data rather than from a 2-D projection combined with a geometric assumption about the thyroid configuration. Using phantoms of known volume, it was possible to estimate the accuracy of three different methods of determining thyroid volume from planar imaging used in clinical routine. The experimental results demonstrate that, compared with conventional scintigraphy, thyroid phantom volumes were most accurately determined with SPECT when attenuation and scatter corrections were performed; this allows accurate radiation dosimetry in humans without the need for assumptions about organ size or activity concentration. Poster 181. (author)
Doerr, Donald F.; Ratliff, Duane A.; Sithole, Joseph; Convertino, Victor A.
2005-01-01
Background: The real-time, beat-by-beat, non-invasive determination of stroke volume (SV) is an important parameter in many aerospace-related physiologic protocols. In this study, we compared simultaneous estimates of SV calculated from peripheral pulse waveforms with a more conventional non-invasive technique. Methods: Using a prospective, randomized, blinded protocol, ten males and nine females completed 12-min tilt table protocols. The relative change (%Δ) in beat-to-beat SV was estimated non-invasively from changes in pulse waveforms measured by application of infrared finger photoplethysmography (IFP) with a Portapres® blood pressure monitoring device and by thoracic impedance cardiography (TIC). The %ΔSV values were calculated from continuous SV measurements in the supine posture and over the first 10 s (T1), second 10 s (T2), and 3.5 minutes (T3) of 80° head-up tilt (HUT). Results: The average %ΔSV measured by IFP at T1 (-11.7 ± 3.7%) was statistically less (P < 0.05) than that measured by TIC at T1 (-21.7 ± 3.1%), while the average %ΔSV measured by IFP at T2 (-16.2 ± 3.9%) and T3 (-19.1 ± 3.8%) were not statistically distinguishable (P ≥ 0.322) from the average %ΔSV measured by TIC at T2 (-21.8 ± 2.5%) and T3 (-22.6 ± 2.9%). Correlation coefficients (r²) between IFP and TIC were 0.117 (T1), 0.387 (T2), and 0.718 (T3). Conclusion: IFP provides beat-to-beat (real-time) assessment of %ΔSV after 20 s of transition to an orthostatic challenge that is comparable to the commonly accepted TIC. Our data support the notion that IFP technology, which has flown during space missions, can be used to accurately assess physiological status and countermeasure effectiveness for orthostatic problems that may arise in astronauts after space flight. While the peripherally measured IFP response is slightly delayed, the ease of implementing this monitor in the field is advantageous.
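The %ΔSV statistic reported above is a simple relative change of the mean stroke volume against the supine baseline; a minimal sketch (illustrative, not the study's software):

```python
def percent_delta_sv(supine_sv, tilt_sv):
    """Relative change (%ΔSV) of mean beat-to-beat stroke volume during
    head-up tilt versus the supine baseline."""
    supine_mean = sum(supine_sv) / len(supine_sv)
    tilt_mean = sum(tilt_sv) / len(tilt_sv)
    return 100.0 * (tilt_mean - supine_mean) / supine_mean
```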
Scintigraphic method for evaluating reductions in local blood volumes in human extremities
DEFF Research Database (Denmark)
Blønd, L; Madsen, Jan Lysgård
2000-01-01
We introduce a new method for evaluating reductions in local blood volumes in extremities, based on the combined use of autologous injection of 99mTc-radiolabelled erythrocytes and clamping of the limb blood flow by the use of a tourniquet. Twenty-two healthy male volunteers participated in the experiment. Evaluation of one versus two scintigraphic projections, trials for assessment of the reproducibility, a comparison of the scintigraphic method with a water-plethysmographic method and registration of the fractional reduction in blood volume caused by exsanguination as a result of simple elevation... % in the lower limb experiment and 6% in the upper limb experiment. We found a significant relation (r = 0.42, p = 0.018) between the results obtained by the scintigraphic method and the plethysmographic method. In fractions, a mean reduction in blood volume of 0.49 ± 0.14 (2 SD) was found after 1 min of elevation...
Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method
Directory of Open Access Journals (Sweden)
Darae Jeong
2018-01-01
Full Text Available We propose an explicit hybrid numerical method for efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on a 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function, which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.
International Nuclear Information System (INIS)
Jeong, Hyun Jo
1998-01-01
A nondestructive ultrasonic technique is presented for estimating the reinforcement volume fractions of particulate composites. The proposed technique employs a theoretical model which accounts for composite microstructures, together with a measurement of ultrasonic velocity, to determine the reinforcement volume fractions. The approach is used for a wide range of SiC particulate reinforced Al matrix (SiCp/Al) composites. The method is considered to be reliable in determining the reinforcement volume fractions. The technique could be adopted in a production unit for the quality assessment of metal matrix particulate composite extrusions.
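The abstract does not reproduce the micromechanical model, but the basic idea of inverting a measured velocity for a volume fraction can be illustrated with a much simpler time-of-flight mixture rule, 1/v = Vf/v_p + (1 - Vf)/v_m (an assumption for illustration only, not the paper's model):

```python
def volume_fraction_from_velocity(v_meas, v_matrix, v_particle):
    """Invert the inverse-velocity (time-of-flight) mixture rule
    1/v = Vf/v_p + (1 - Vf)/v_m for the particle volume fraction Vf."""
    return (1.0 / v_meas - 1.0 / v_matrix) / (1.0 / v_particle - 1.0 / v_matrix)
```

A model accounting for particle shape and scattering, as in the paper, would replace this closed form with a numerically inverted dispersion relation.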
Impedance ratio method for urine conductivity-invariant estimation of bladder volume
Directory of Open Access Journals (Sweden)
Thomas Schlebusch
2014-09-01
Full Text Available Non-invasive estimation of bladder volume could help patients with impaired bladder volume sensation to determine the right moment for catheterisation. Continuous, non-invasive impedance measurement is a promising technology in this scenario, although influences of body posture and unknown urine conductivity limit wide clinical use today. We studied impedance changes related to bladder volume by simulation, in-vitro measurements and in-vivo measurements with pigs. In this work, we present a method to reduce the influence of urine conductivity on cystovolumetry and bring bioimpedance cystovolumetry closer to clinical application.
Directory of Open Access Journals (Sweden)
Hendry Sakke Tira
2016-05-01
Full Text Available Energy supply has been a crucial issue in the world in the last few years. The increase in energy demand caused by population growth and the depletion of world oil reserves provides motivation to produce and use renewable energies. One of them is biogas. However, until now the use of biogas has not been maximized because of its poor purity. To address this problem, research has been carried out using the method of water absorption. This method is expected to be simple enough for rural communities to apply, so that their economy and productivity can be increased. This study includes variations of the absorbing water volume (V) and the input biogas volume flow rate (Q). Raw biogas flowed into the absorbent is analyzed for each combination of absorbing water volume and input biogas flow rate. Improvement of the biogas composition through this purification method was obtained. The levels of CO2 and H2S were reduced significantly, especially in the early minutes of the purification process, while the level of CH4 was increased, improving the quality of the raw biogas. However, as purification proceeded, the composition of the purified biogas became nearly identical to that of the raw biogas; the main reason for this was an increase in the pH of the absorbent. It was shown that a higher water volume and a slower biogas flow rate gave better results in reducing CO2 and H2S and increasing CH4 than a lower water volume and a higher biogas flow rate, respectively. The purification method shows good promise for improving the quality of raw biogas and has the advantages of being cheap and easy to operate.
Directory of Open Access Journals (Sweden)
Hendry Sakke Tira
2014-10-01
Full Text Available Energy supply has been a crucial issue in the world in the last few years. The increase in energy demand caused by population growth and the depletion of world oil reserves provides motivation to produce and use renewable energies. One of them is biogas. However, until now the use of biogas has not been maximized because of its poor purity. To address this problem, research has been carried out using the method of water absorption. This method is expected to be simple enough for rural communities to apply, so that their economy and productivity can be increased. This study includes variations of the absorbing water volume (V) and the input biogas volume flow rate (Q). Raw biogas flowed into the absorbent is analyzed for each combination of absorbing water volume and input biogas flow rate. Improvement of the biogas composition through this purification method was obtained. The levels of CO2 and H2S were reduced significantly, especially in the early minutes of the purification process, while the level of CH4 was increased, improving the quality of the raw biogas. However, as purification proceeded, the composition of the purified biogas became nearly identical to that of the raw biogas; the main reason for this was an increase in the pH of the absorbent. It was shown that a higher water volume and a slower biogas flow rate gave better results in reducing CO2 and H2S and increasing CH4 than a lower water volume and a higher biogas flow rate, respectively. The purification method shows good promise for improving the quality of raw biogas and has the advantages of being cheap and easy to operate.
Fast multiview three-dimensional reconstruction method using cost volume filtering
Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.
2014-03-01
As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes more and more important to develop a method which quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented, which is suitable for the mobile environment by constructing a cost volume of the 3-D height field. This method consists of two steps: the construction of a reliable base surface and the recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then it is filtered according to the multiscale. The multiscale-based cost volume filtering allows the 3-D reconstruction to maintain the overall shape and to preserve the shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.
Method of phase-Doppler anemometry free from the measurement-volume effect.
Qiu, H; Hsu, C T
1999-05-01
A novel method is developed to improve the accuracy of particle sizing in laser phase-Doppler anemometry (PDA). In this method the vector sum of refractive and reflective rays is taken into consideration in describing a dual-mechanism-scattering model caused by a nonuniformly illuminated PDA measurement volume. The constraint of the single-mechanism-scattering model in the conventional PDA is removed. As a result the error caused by the measurement-volume effect, which consists of a Gaussian-beam defect and a slit effect, can be eliminated. This new method can be easily implemented with minimal modification of the conventional PDA system. The results of simulation based on the generalized Lorenz-Mie theory show that the new method can provide a PDA system free from the measurement-volume effect.
Adaptive Finite Volume Method for the Shallow Water Equations on Triangular Grids
Directory of Open Access Journals (Sweden)
Sudi Mungkasi
2016-01-01
Full Text Available This paper presents a numerical entropy production (NEP) scheme for two-dimensional shallow water equations on unstructured triangular grids. We implement NEP as the error indicator for adaptive mesh refinement or coarsening in solving the shallow water equations using a finite volume method. Numerical simulations show that NEP is successful as a refinement/coarsening indicator in the adaptive mesh finite volume method, as the method refines the mesh around nonsmooth regions and coarsens it around smooth regions.
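The NEP indicator can be sketched in one dimension as a cell-wise entropy residual; cells where the residual is large relative to the peak are flagged for refinement (a schematic under assumed notation, not the paper's 2D triangular-grid implementation):

```python
def nep_flags(eta_old, eta_new, psi_face, dt, dx, tol_frac=0.5):
    """Numerical entropy production per cell,
    NEP_i = (eta_new_i - eta_old_i)/dt + (psi_{i+1/2} - psi_{i-1/2})/dx,
    with cells flagged when |NEP_i| exceeds tol_frac * max|NEP|."""
    nep = [(en - eo) / dt + (psi_face[i + 1] - psi_face[i]) / dx
           for i, (eo, en) in enumerate(zip(eta_old, eta_new))]
    peak = max(abs(v) for v in nep) or 1.0  # guard against an all-zero field
    return [abs(v) > tol_frac * peak for v in nep]
```

On smooth solutions the residual shrinks with the truncation error, so the flags concentrate around shocks and other nonsmooth regions, which matches the refinement behaviour reported above.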
A novel OPC method to reduce mask volume with yield-aware dissection
International Nuclear Information System (INIS)
Xie Chunlei; Chen Ye; Shi Zheng
2013-01-01
The growing data volume of masks tremendously increases manufacturing cost. The cost increase is partially due to the complicated optical proximity corrections applied to mask designs. In this paper, a yield-aware dissection method is presented. Based on the recognition of yield-related mask context, the dissection result provides sufficient degrees of freedom to keep fidelity at critical sites while still retaining the frugality of modified designs. Experiments show that the final mask volume using the new method is reduced to about 50% of that of the conventional method. (semiconductor technology)
Using gas blow methods to realize accurate volume measurement of radioactivity liquid
International Nuclear Information System (INIS)
Zhang Caiyun
2010-01-01
For radioactive liquids, accurate volume measurement with an uncertainty of less than 0.2% (k=2) was achieved by means of the gas blow methods presented in the American National Standard 'Nuclear Material Control - Volume Calibration Methods' (ANSI N15.19-1989) and the ISO Committee Draft (ISO/TC/85/SC 5N 282), and a set of data processing methods was explored. The major problems addressed in the article are data acquisition, establishment of the calibration function, and estimation of the measurement uncertainty. (authors)
A lattice Boltzmann coupled to finite volumes method for solving phase change problems
Directory of Open Access Journals (Sweden)
El Ganaoui Mohammed
2009-01-01
Full Text Available A numerical scheme coupling lattice Boltzmann and finite volume approaches has been developed and qualified on test cases of phase change problems. In this work, the coupled partial differential equations of momentum conservation are solved with a non-uniform lattice Boltzmann method, while the energy equation is discretized using a finite volume method. Simulations show the ability of this hybrid method to model the effects of convection and to predict heat transfer. Benchmarking is performed for both conduction-dominated and convection-dominated solid/liquid transitions. Comparisons are made with available analytical solutions and experimental results.
International Nuclear Information System (INIS)
Xi Li-Ying; Chen Huan-Ming; Zheng Fu; Gao Hua; Tong Yang; Ma Zhi
2015-01-01
Three-dimensional simulations of ferroelectric hysteresis and butterfly loops are carried out based on solving the time-dependent Ginzburg–Landau equations using a finite volume method. The influence of external mechanical loading, with a tensile strain and a compressive strain, on the hysteresis and butterfly loops is studied numerically. Different from the traditional finite element and finite difference methods, the finite volume method is applicable to simulating the ferroelectric phase transitions and properties of ferroelectric materials even for more realistic and physical problems. (paper)
Energy Technology Data Exchange (ETDEWEB)
Båth, Magnus, E-mail: magnus.bath@vgregion.se; Svalkvist, Angelica [Department of Radiation Physics, Institute of Clinical Sciences, The Sahlgrenska Academy at University of Gothenburg, Gothenburg SE-413 45, Sweden and Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg SE-413 45 (Sweden); Söderman, Christina [Department of Radiation Physics, Institute of Clinical Sciences, The Sahlgrenska Academy at University of Gothenburg, Gothenburg SE-413 45 (Sweden)
2014-10-15
Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.
International Nuclear Information System (INIS)
Båth, Magnus; Svalkvist, Angelica; Söderman, Christina
2014-01-01
Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis
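Given the reported field size correction factor of 0.935, the retrospective estimate amounts to scaling scout-derived dose data; a schematic sketch (the exact DICOM fields and scout-to-projection scaling used in the paper are not reproduced here):

```python
def estimate_tomo_dap(scout_dap_per_projection, n_projections,
                      field_size_factor=0.935):
    """Estimate total tomosynthesis DAP from scout-derived per-projection
    dose data, corrected for the field-size change with projection angle.
    The 0.935 factor is the value reported in the abstract."""
    return scout_dap_per_projection * n_projections * field_size_factor
```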
Energy Technology Data Exchange (ETDEWEB)
Le Dez, V; Lallemand, M [Ecole Nationale Superieure de Mecanique et d'Aerotechnique (ENSMA), 86 - Poitiers (France); Sakami, M; Charette, A [Quebec Univ., Chicoutimi, PQ (Canada). Dept. des Sciences Appliquees
1997-12-31
An efficient method is described for determining the radiant heat transfer field in a grey semi-transparent medium enclosed in a 2-D polygonal cavity whose surface boundaries reflect radiation in a purely diffuse manner, both at radiative equilibrium and in coupled radiation-conduction situations. The technique simultaneously uses the finite-volume method on an unstructured triangular mesh, the discrete ordinate method and the ray shooting method. The main mathematical developments and comparative results with the discrete ordinate method in orthogonal curvilinear coordinates are included. (J.S.) 10 refs.
Energy Technology Data Exchange (ETDEWEB)
Le Dez, V.; Lallemand, M. [Ecole Nationale Superieure de Mecanique et d'Aerotechnique (ENSMA), 86 - Poitiers (France); Sakami, M.; Charette, A. [Quebec Univ., Chicoutimi, PQ (Canada). Dept. des Sciences Appliquees
1996-12-31
An efficient method is described for determining the radiant heat transfer field in a grey semi-transparent medium enclosed in a 2-D polygonal cavity whose surface boundaries reflect radiation in a purely diffuse manner, both at radiative equilibrium and in coupled radiation-conduction situations. The technique simultaneously uses the finite-volume method on an unstructured triangular mesh, the discrete ordinate method and the ray shooting method. The main mathematical developments and comparative results with the discrete ordinate method in orthogonal curvilinear coordinates are included. (J.S.) 10 refs.
An innovative method of planning and displaying flap volume in DIEP flap breast reconstructions.
Hummelink, S; Verhulst, Arico C; Maal, Thomas J J; Hoogeveen, Yvonne L; Schultze Kool, Leo J; Ulrich, Dietmar J O
2017-07-01
Determining the ideal volume of the harvested flap to achieve symmetry in deep inferior epigastric artery perforator (DIEP) flap breast reconstructions is complex. With preoperative imaging techniques such as 3D stereophotogrammetry and computed tomography angiography (CTA) available nowadays, we can combine information to preoperatively plan the optimal flap volume to be harvested. In this proof-of-concept, we investigated whether projection of a virtual flap planning onto the patient's abdomen using a projection method could result in harvesting the correct flap volume. In six patients (n = 9 breasts), 3D stereophotogrammetry and CTA data were combined from which a virtual flap planning was created comprising perforator locations, blood vessel trajectory and flap size. All projected perforators were verified with Doppler ultrasound. Intraoperative flap measurements were collected to validate the determined flap delineation volume. The measured breast volume using 3D stereophotogrammetry was 578 ± 127 cc; on CTA images, 527 ± 106 cc flap volumes were planned. The nine harvested flaps weighed 533 ± 109 g resulting in a planned versus harvested flap mean difference of 5 ± 27 g (flap density 1.0 g/ml). In 41 out of 42 projected perforator locations, a Doppler signal was audible. This proof-of-concept shows in small numbers that flap volumes can be included into a virtual DIEP flap planning, and transferring the virtual planning to the patient through a projection method results in harvesting approximately the same volume during surgery. In our opinion, this innovative approach is the first step in consequently achieving symmetric breast volumes in DIEP flap breast reconstructions. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
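With the reported flap density of about 1.0 g/ml, the planned-versus-harvested comparison above reduces to a unit conversion; a minimal sketch (illustrative only, not the study's workflow):

```python
def planned_vs_harvested(planned_cc, harvested_g, flap_density_g_per_ml=1.0):
    """Convert harvested flap weight to volume via the flap density and
    return the planned-minus-harvested volume difference (in cc)."""
    harvested_cc = harvested_g / flap_density_g_per_ml
    return planned_cc - harvested_cc
```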
Chen, Guang-Hong; Li, Yinsheng
2015-08-01
images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction quantitatively in simulations and qualitatively in a human subject exam. In numerical simulations, the 240° short scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16% and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, respectively, to benchmark reconstruction accuracies. For in vivo contrast enhanced cone beam CT data acquired from a short scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast difference in three SMART-RECON reconstructed image volumes without limited-view artifacts. In contrast, for the same angular sectors, PICCS cannot reconstruct images without limited-view artifacts and with clear contrast difference in the three reconstructed image volumes. In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, which corresponds to approximately 60° angular subsectors.
DEFF Research Database (Denmark)
Hattel, Jesper; Hansen, Preben
1995-01-01
This paper presents a novel control volume based FD method for solving the equilibrium equations in terms of displacements, i.e. the generalized Navier equations. The method is based on the widely used cv-FDM solution of heat conduction and fluid flow problems involving a staggered grid formulation... The resulting linear algebraic equations are solved by line-Gauss-Seidel.
New Internet search volume-based weighting method for integrating various environmental impacts
Energy Technology Data Exchange (ETDEWEB)
Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr
2016-01-15
Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
New Internet search volume-based weighting method for integrating various environmental impacts
International Nuclear Information System (INIS)
Ji, Changyoon; Hong, Taehoon
2016-01-01
Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
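The validation step above compares two vectors of weighting factors with Pearson's correlation coefficient; a minimal sketch (hypothetical helper, not the study's code):

```python
def pearson_r(x, y):
    """Pearson's correlation coefficient between new and existing
    weighting-factor sets for the same impact categories."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5
```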
A simple, quantitative method using alginate gel to determine rat colonic tumor volume in vivo.
Irving, Amy A; Young, Lindsay B; Pleiman, Jennifer K; Konrath, Michael J; Marzella, Blake; Nonte, Michael; Cacciatore, Justin; Ford, Madeline R; Clipson, Linda; Amos-Landgraf, James M; Dove, William F
2014-04-01
Many studies of the response of colonic tumors to therapeutics use tumor multiplicity as the endpoint to determine the effectiveness of the agent. These studies can be greatly enhanced by accurate measurements of tumor volume. Here we present a quantitative method to easily and accurately determine colonic tumor volume. This approach uses a biocompatible alginate to create a negative mold of a tumor-bearing colon; this mold is then used to make positive casts of dental stone that replicate the shape of each original tumor. The weight of the dental stone cast correlates highly with the weight of the dissected tumors. After refinement of the technique, overall error in tumor volume was 16.9% ± 7.9% and includes error from both the alginate and dental stone procedures. Because this technique is limited to molding of tumors in the colon, we utilized the Apc(Pirc/+) rat, which has a propensity for developing colonic tumors that reflect the location of the majority of human intestinal tumors. We have successfully used the described method to determine tumor volumes ranging from 4 to 196 mm³. Alginate molding combined with dental stone casting is a facile method for determining tumor volume in vivo without costly equipment or knowledge of analytic software. This broadly accessible method creates the opportunity to objectively study colonic tumors over time in living animals in conjunction with other experiments and without transferring animals from the facility where they are maintained.
Gastrointestinal tract volume measurement method using a compound eye type endoscope
Yoshimoto, Kayo; Yamada, Kenji; Watabe, Kenji; Kido, Michiko; Nagakura, Toshiaki; Takahashi, Hideya; Nishida, Tsutomu; Iijima, Hideki; Tsujii, Masahiko; Takehara, Tetsuo; Ohno, Yuko
2015-03-01
We propose a gastrointestinal tract volume measurement method using a compound eye type endoscope, aimed at assessment of gastrointestinal function. Gastrointestinal diseases are mainly diagnosed from morphological abnormalities. However, gastrointestinal symptoms sometimes appear without visible abnormalities. Such conditions are called functional gastrointestinal disorders, for example functional dyspepsia and irritable bowel syndrome, and one of their major factors is abnormal gastrointestinal motility. Diagnosis of the gastrointestinal tract therefore requires both organic and functional assessment. While endoscopy is essential for assessing organic abnormalities, three-dimensional information is required for assessing functional abnormalities. We therefore proposed a three-dimensional endoscope system using a compound eye. In this study, we focus on the volume of the gastrointestinal tract, which is thought to be related to its function. Our system uses a compound eye type endoscope to obtain three-dimensional information on the tract, and the volume is calculated by integrating slice data of the tract shape derived from this information. We first evaluate the proposed method with a tube of known shape, and then confirm that the method can measure tract volume using a simulated model of the tract. Our system can assess the wall of the gastrointestinal tract directly in a three-dimensional manner and can be used for examination of gastric morphological and functional abnormalities.
Båth, Magnus; Söderman, Christina; Svalkvist, Angelica
2014-10-01
The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.
Visualizing Volume to Help Students Understand the Disk Method on Calculus Integral Course
Tasman, F.; Ahmad, D.
2018-04-01
Much research has shown that students have difficulty understanding the concepts of integral calculus. This research therefore designs a classroom activity, following the design research method, to assist students in understanding the integral concept, especially in calculating the volume of solids of revolution using the disk method. To support student development in understanding integral concepts, the activity uses a realistic mathematics education approach supported by GeoGebra software. First-year university students taking a calculus course (approximately 30 students) were chosen to implement the classroom activity that had been designed. The results of the retrospective analysis show that visualizing the volume of solids of revolution with GeoGebra can assist students in understanding the disk method as one way of calculating the volume of a solid of revolution.
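The disk method that the activity targets computes the volume of a solid of revolution as V = π ∫ab f(x)² dx. A minimal numerical sketch (a midpoint Riemann sum; this is an illustration, not part of the paper's GeoGebra activity):

```python
import math

def disk_volume(f, a, b, n=100000):
    # Disk method: V = pi * integral of f(x)^2 dx, via a midpoint Riemann sum
    h = (b - a) / n
    return math.pi * sum(f(a + (i + 0.5) * h) ** 2 for i in range(n)) * h

# Rotating f(x) = sqrt(r^2 - x^2) about the x-axis gives a sphere of radius r
r = 1.0
v = disk_volume(lambda x: math.sqrt(max(r * r - x * x, 0.0)), -r, r)
print(abs(v - 4 / 3 * math.pi * r ** 3) < 1e-6)  # True
```

Checking the numeric result against the closed-form sphere volume 4πr³/3 mirrors how students can verify the disk method on a known solid.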
Directory of Open Access Journals (Sweden)
Yankui Sun
2016-03-01
Full Text Available With the introduction of spectral-domain optical coherence tomography (SD-OCT, much larger image datasets are routinely acquired compared to what was possible using the previous generation of time-domain OCT. Thus, there is a critical need for the development of three-dimensional (3D segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volume datasets are obtained by using a 3D smoothing filter and a 3D differential filter. Their linear combination is then calculated to generate new volume data with an enhanced boundary surface, where pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-Scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of error points. Our method can extract retinal layer boundary surfaces sequentially with a decreasing search region of volume data. We performed automatic segmentation on eight human OCT volume datasets acquired from a commercial Spectralis OCT system, where each volume of datasets contains 97 OCT B-Scan images with a resolution of 496×512 (each B-Scan comprising 512 A-Scans containing 496 pixels; experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
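The boundary-enhancement step described above (a linear combination of a smoothed volume and a differential volume, followed by per-A-scan detection of preliminary boundary points) can be sketched as follows. The 3-tap filters, the weight `alpha`, and the toy volume are assumptions for illustration, not the paper's actual kernels:

```python
import numpy as np

def enhance_boundary(vol, alpha=0.5):
    # Combine a smoothed volume with a derivative volume so that a boundary
    # where intensity steps upward stands out (sketch of the paper's idea).
    # 3-tap mean filter and central-difference filter along the A-scan axis.
    smooth = (np.roll(vol, 1, axis=-1) + vol + np.roll(vol, -1, axis=-1)) / 3.0
    diff = (np.roll(vol, -1, axis=-1) - np.roll(vol, 1, axis=-1)) / 2.0
    return (1 - alpha) * smooth + alpha * diff

# Toy A-scan volume: intensity steps up at index 30 (a "layer boundary")
vol = np.zeros((4, 4, 64))
vol[..., 30:] = 1.0
enhanced = enhance_boundary(vol)
# Preliminary boundary points: argmax along each A-scan
boundary = enhanced.argmax(axis=-1)
print(int(boundary[0, 0]))  # → 30
```

In the paper this is followed by surface smoothness constraints and a dynamic threshold to correct the small number of erroneous points.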
Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J
2017-10-01
A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p > 0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Volume of Structures in the Fetal Brain Measured with a New Semiautomated Method.
Ber, R; Hoffman, D; Hoffman, C; Polat, A; Derazne, E; Mayer, A; Katorza, E
2017-11-01
Measuring the volume of fetal brain structures is challenging due to fetal motion, low resolution, and artifacts caused by maternal tissue. Our aim was to introduce a new, simple, Matlab-based semiautomated method to measure the volume of structures in the fetal brain and to present normal volumetric curves of the structures measured. The volume of the supratentorial brain, left and right hemispheres, cerebellum, and left and right eyeballs was measured retrospectively by the new semiautomated method in MR imaging examinations of 94 healthy fetuses. Four volume ratios were calculated. Interobserver agreement was calculated with the intraclass correlation coefficient, and a Bland-Altman plot was drawn to compare manual and semiautomated measurements of the supratentorial brain. We present normal volumetric curves and normal percentile values of the structures measured according to gestational age, as well as the ratios of the cerebellar volume and of the total eyeball volume to the supratentorial brain volume. Interobserver agreement was good or excellent for all structures measured. The Bland-Altman plot between manual and semiautomated measurements showed a maximal relative difference of 7.84%. We present a technologically simple, reproducible method that can be applied prospectively and retrospectively with any MR imaging protocol, together with the normal volumetric curves measured. The method gives results comparable to manual measurements while being less time-consuming and less user-dependent. By applying this method to different cranial and extracranial structures, anatomic and pathologic, we believe that fetal volumetry can turn from a research tool into a practical clinical one. © 2017 by American Journal of Neuroradiology.
Endoscopic clipping for gastrointestinal tumors. A method to define the target volume more precisely
International Nuclear Information System (INIS)
Riepl, M.; Klautke, G.; Fehr, R.; Fietkau, R.; Pietsch, A.
2000-01-01
Background: In many cases it is not possible to exactly define the extension of carcinoma of the gastrointestinal tract with the help of computed tomography scans made for 3-D radiation treatment planning. Consequently, the planning of external beam radiotherapy is made more difficult for the gross tumor volume as well as, in some cases, for the clinical target volume. Patients and Methods: Eleven patients with macroscopic tumors (rectal cancer n = 5, cardiac cancer n = 6) were included. Just before 3-D planning, the oral and aboral borders of the tumor were marked endoscopically with hemoclips. Subsequently, CT scans for radiotherapy planning were made and the clinical target volume was defined. Five to 6 weeks thereafter, new CT scans were done to define the gross tumor volume for boost planning. Two investigators independently assessed the influence of the hemoclips on the different planning volumes and whether the number of clips was sufficient to define the gross tumor volume. Results: In all patients, the implantation of the clips was done without complications. The start of radiotherapy was not delayed. With the help of the clips it was possible to exactly define the position and extension of the primary tumor. The clinical target volume was modified according to the position of the clips in 5/11 patients; the gross tumor volume was modified in 7/11 patients. The use of the clips made the documentation and verification of the treatment portals with the simulator easier. Moreover, the clips helped the surgeon to define the primary tumor region following marked regression after neoadjuvant therapy in 3 patients. Conclusions: Endoscopic clipping of gastrointestinal tumors helps to define the tumor volumes more precisely in radiation therapy. The clips are easily recognized on the portal films and thus contribute to quality control. (orig.) [de]
Directory of Open Access Journals (Sweden)
Agus Supriatna
2017-11-01
Full Text Available The tourism sector is one of the contributors of foreign exchange and is quite influential in improving the economy of Indonesia. The development of this sector has a positive impact, including employment opportunities and opportunities for entrepreneurship in various industries such as adventure tourism, crafts, and hospitality. The beauty and natural resources of Indonesia are a tourist attraction for domestic and foreign tourists. One of the many tourist destinations is the island of Bali, which is famous not only for its nature but also for its cultural diversity and arts, which add to its tourism value. In 2015 the number of tourist arrivals increased by 6.24% from the previous year. To improve the quality of services, cope with surges of visitors, or prepare strategies for attracting tourists, a prediction of arrivals is needed so that planning can be more efficient and effective. This research used the Holt-Winters method and the Seasonal Autoregressive Integrated Moving Average (SARIMA) method to predict tourist arrivals. Based on data of foreign tourist arrivals who visited the island of Bali from January 2007 until June 2016, the Holt-Winters method with parameter values α=0.1, β=0.1, γ=0.3 has a MAPE of 6.171873, while the SARIMA(0,1,1)(1,0,0)₁₂ model has a MAPE of 5.788615, so it can be concluded that the SARIMA method is better. Keywords: Foreign Tourist, Prediction, Bali Island, Holt-Winters, SARIMA.
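Model selection in the study rests on comparing MAPE values between the two forecasting methods. A minimal sketch of the MAPE computation (the arrivals and forecasts below are hypothetical, not the Bali data):

```python
def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly arrivals and two competing forecasts (not the paper's data)
actual = [100, 120, 140, 110]
holt   = [ 95, 130, 135, 100]
sarima = [ 98, 124, 138, 106]
print(mape(actual, holt) > mape(actual, sarima))  # the lower-MAPE model wins
```

As in the paper, the model with the lower MAPE (here the hypothetical "sarima" forecast) would be preferred.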
Weibull statistics effective area and volume in the ball-on-ring testing method
DEFF Research Database (Denmark)
Frandsen, Henrik Lund
2014-01-01
The ball-on-ring method is, together with other biaxial bending methods, often used for measuring the strength of plates of brittle materials, because machining defects are remote from the high stresses causing the failure of the specimens. In order to scale the measured Weibull strength...... to geometries relevant for the application of the material, the effective area or volume for the test specimen must be evaluated. In this work analytical expressions for the effective area and volume of the ball-on-ring test specimen are derived. In the derivation the multiaxial stress field has been accounted......
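Once the effective volume is known, the standard Weibull weakest-link relation rescales a measured strength to another geometry at equal failure probability. A minimal sketch (hypothetical numbers; the paper's analytical effective-volume expressions are not reproduced here):

```python
def scaled_strength(sigma_ref, v_eff_ref, v_eff_new, m):
    # Weibull weakest-link size effect: at equal failure probability,
    # sigma_new = sigma_ref * (V_eff_ref / V_eff_new)**(1/m)
    return sigma_ref * (v_eff_ref / v_eff_new) ** (1.0 / m)

# Hypothetical example: 300 MPa measured on a 2 mm^3 effective volume,
# rescaled to a component with 20 mm^3 effective volume, Weibull modulus m = 10
print(round(scaled_strength(300.0, 2.0, 20.0, 10), 1))  # → 238.3
```

The larger effective volume samples more flaws, so the characteristic strength drops, which is exactly why the effective area/volume of the test geometry must be evaluated before transferring test data to components.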
Hybrid Finite Element and Volume Integral Methods for Scattering Using Parametric Geometry
DEFF Research Database (Denmark)
Volakis, John L.; Sertel, Kubilay; Jørgensen, Erik
2004-01-01
In this paper we address several topics relating to the development and implementation of volume integral and hybrid finite element methods for electromagnetic modeling. Comparisons of volume integral equation formulations with the finite element-boundary integral method are given in terms of accu...... of vanishing divergence within the element but non-zero curl. In addition, a new domain decomposition is introduced for solving array problems involving several million degrees of freedom. Three orders of magnitude CPU reduction is demonstrated for such applications....
A novel method for the evaluation of uncertainty in dose-volume histogram computation.
Henríquez, Francisco Cutanda; Castrillón, Silvia Vargas
2008-03-15
Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take these uncertainties into account: a probabilistic approach using a new kind of histogram, the dose-expected volume histogram. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with point dose is presented for practical computations. The method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Results show a greater effect on planning target volume coverage than on organs at risk. In cases of steep DVH gradients, such as planning target volumes, the new method shows the largest differences from the corresponding DVH; thus, the effect of the uncertainty is larger.
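The dose-expected volume histogram idea can be sketched directly: with a rectangular (uniform) distribution of half-width w around each computed point dose, the expected volume fraction receiving at least a threshold dose is the average of the per-point probabilities. A minimal sketch with toy dose values (an illustration of the principle, not the paper's full formulation):

```python
def expected_volume_fraction(doses, threshold, half_width):
    # One point of a dose-expected volume histogram: each voxel's true dose is
    # modeled as uniform on [d - w, d + w]; P(dose >= threshold) is accumulated.
    total = 0.0
    for d in doses:
        lo, hi = d - half_width, d + half_width
        if threshold <= lo:
            p = 1.0
        elif threshold >= hi:
            p = 0.0
        else:
            p = (hi - threshold) / (hi - lo)
        total += p
    return total / len(doses)

# Toy dose grid (Gy) with a +/- 1 Gy rectangular uncertainty
doses = [58.0, 59.5, 60.0, 60.5, 62.0]
print(expected_volume_fraction(doses, 60.0, 1.0))  # → 0.5
```

Sweeping the threshold over the dose range traces out the whole expected-volume curve, which can then be compared against the conventional DVH.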
A method for bubble volume calculating in vertical two-phase flow
International Nuclear Information System (INIS)
Wang, H Y; Dong, F
2009-01-01
The movement of bubbles is a basic subject in gas-liquid two-phase flow research. A method is proposed for calculating bubble volume, one of the most important quantities in bubble motion research. A visualized experimental apparatus was designed and built, and single bubbles rising in stagnant liquid in a rectangular tank were studied using a high-speed video system. Bubbles generated by four orifices of different diameters (1 mm, 2 mm, 3 mm, 4 mm) were recorded. Sequences of recorded high-speed images were processed by digital image processing steps such as noise removal, binarization, and bubble filling. Several parameters could then be obtained from the processed images: bubble area, equivalent diameter, bubble velocity, and bubble acceleration are all indispensable for the volume calculation. To obtain the force balance equation, the forces acting on a bubble in the vertical direction, including drag force, virtual mass force, buoyancy, gravity, and liquid thrust, were analyzed. Finally, the bubble volume formula was derived from the force balance equation and the bubble parameters. Examples are given to show the computing process and results, and a comparison of bubble volumes calculated by the geometric method and by the present method shows the superiority of the method proposed in this paper.
International Nuclear Information System (INIS)
Endo, T.; Sato, S.; Yamamoto, A.
2012-01-01
The average burnup of damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the ¹³⁴Cs/¹³⁷Cs ratio method applied to measured radioactivities of ¹³⁴Cs and ¹³⁷Cs in contaminated soils within 100 km of the Fukushima Dai-ichi nuclear power plants. The measured ¹³⁴Cs/¹³⁷Cs ratio from the contaminated soil is 0.996±0.07 as of March 11, 2011. Based on the ¹³⁴Cs/¹³⁷Cs ratio method, the estimated burnup of the damaged fuels is approximately 17.2±1.5 [GWd/tHM]. It is noted that various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluated ¹³⁴Cs/¹³⁷Cs ratios when the same evaluated nuclear data library (ENDF/B-VII.0) is used. The void fraction effect in the depletion calculation has a major impact on the ¹³⁴Cs/¹³⁷Cs ratio compared with the differences between JENDL-4.0 and ENDF/B-VII.0. (authors)
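A practical detail of the ¹³⁴Cs/¹³⁷Cs ratio method is decay-correcting a measured activity ratio back to a reference date, since ¹³⁴Cs (half-life about 2.065 y) decays much faster than ¹³⁷Cs (about 30.08 y). A minimal sketch (the measured ratio and elapsed time below are illustrative, not the paper's data):

```python
import math

T12_CS134 = 2.0652  # years, 134Cs half-life (published nuclear data)
T12_CS137 = 30.08   # years, 137Cs half-life

def ratio_at_reference(measured_ratio, years_since_reference):
    # Decay-correct a measured 134Cs/137Cs activity ratio back to a
    # reference date (e.g., reactor shutdown on March 11, 2011).
    lam134 = math.log(2) / T12_CS134
    lam137 = math.log(2) / T12_CS137
    return measured_ratio * math.exp((lam134 - lam137) * years_since_reference)

# Illustrative: a ratio of 0.893 measured 0.35 years after shutdown
# corresponds to ~0.996 at the shutdown date
print(round(ratio_at_reference(0.893, 0.35), 3))  # → 0.996
```

The corrected ratio is what gets compared against depletion-code predictions (SRAC2006/PIJ, SCALE6.0/TRITON, MVP-BURN) to infer burnup.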
³H-dextran method for measurements of the blood volume in the rat choroid
Energy Technology Data Exchange (ETDEWEB)
Matsusaka, T [Osaka Prefectural Center for Adult Diseases (Japan); Morimoto, K; Kikkawa, Y
1980-01-01
A new method was developed using ³H-dextran for measuring the blood volume in the choroid. Under pentobarbital anesthesia, albino rats weighing 200 grams were perfused through the left ventricle with a 2.5 percent glutaraldehyde solution containing the radioactive dextran. The procedure allowed exchange of the choroidal blood with the ³H-dextran solution with simultaneous fixation of the choroid. The blood volume in the choroid was calculated from the radioactivity count, which is estimated to be 1.690 x 10⁻⁴ ml per mg wet weight and 5.070 x 10⁻⁴ ml per mg dry weight. Epinephrine injected subconjunctivally diminished the blood volume in the choroid by 68 percent. Pretreatment with lidocaine almost nullified the effect of epinephrine. The applicability of this method to the analytical study of the choroidal circulation is discussed.
3H-dextran method for measurements of the blood volume in the rat choroid
International Nuclear Information System (INIS)
Matsusaka, Toshihiko; Morimoto, Kazuhiro; Kikkawa, Yoshizo.
1980-01-01
A new method was developed using ³H-dextran for measuring the blood volume in the choroid. Under pentobarbital anesthesia, albino rats weighing 200 grams were perfused through the left ventricle with a 2.5 percent glutaraldehyde solution containing the radioactive dextran. The procedure allowed exchange of the choroidal blood with the ³H-dextran solution with simultaneous fixation of the choroid. The blood volume in the choroid was calculated from the radioactivity count, which is estimated to be 1.690 x 10⁻⁴ ml per mg wet weight and 5.070 x 10⁻⁴ ml per mg dry weight. Epinephrine injected subconjunctivally diminished the blood volume in the choroid by 68 percent. Pretreatment with lidocaine almost nullified the effect of epinephrine. The applicability of this method to the analytical study of the choroidal circulation is discussed. (author)
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several applications of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
Ragagnin, Marilia Nagata; Gorman, Daniel; McCarthy, Ian Donald; Sant'Anna, Bruno Sampaio; de Castro, Cláudio Campi; Turra, Alexander
2018-01-11
Obtaining accurate and reproducible estimates of internal shell volume is a vital requirement for studies into the ecology of a range of shell-occupying organisms, including hermit crabs. Shell internal volume is usually estimated by filling the shell cavity with water or sand; however, there has been no systematic assessment of the reliability of these methods, and moreover no comparison with modern alternatives, e.g., computed tomography (CT). This study undertakes the first assessment of the measurement reproducibility of three contrasting approaches across a spectrum of shell architectures and sizes. While our results suggested a certain level of variability inherent in all methods, we conclude that a single measure using sand/water is likely to be sufficient for the majority of studies. However, care must be taken, as precision may decline with increasing shell size and structural complexity. CT provided less variation between repeat measures, but volume estimates were consistently lower compared with sand/water, and the approach will need methodological improvements before it can be used as an alternative. CT also indicated that volume may be underestimated using sand/water, owing to air spaces visible in filled shells scanned by CT. Lastly, we encourage authors to clearly describe how volume estimates were obtained.
The Development of a Finite Volume Method for Modeling Sound in Coastal Ocean Environment
Energy Technology Data Exchange (ETDEWEB)
Long, Wen; Yang, Zhaoqing; Copping, Andrea E.; Jung, Ki Won; Deng, Zhiqun
2015-10-28
With the rapid growth of marine renewable energy and offshore wind energy, there have been concerns that the noise generated from construction and operation of the devices may interfere with marine animals' communication. In this research, an underwater sound model is developed to simulate sound propagation generated by marine hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite volume and finite difference methods are developed to solve the 3D Helmholtz equation of sound propagation in the coastal environment. For the finite volume method, the grid system consists of triangular grids in the horizontal plane and sigma layers in the vertical dimension. A 3D sparse matrix solver with complex coefficients is formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method is applied to efficiently solve the matrix system iteratively with MPI parallelization on a high-performance cluster. The sound model is then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities in a range-dependent setting, such as offshore wind energy platform construction and tidal stream turbines. As a proof of concept, initial validation of the finite difference solver is presented for two coastal wedge problems. Validation of the finite volume method will be reported separately.
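The core of such a solver is a discrete Helmholtz operator with complex coefficients. A minimal 1D finite-difference sketch (a toy analogue of the paper's 3D finite-volume solver; the boundary conditions and all parameters are assumptions): solving p'' + k²p = 0 with a unit source on the left and a first-order absorbing condition on the right should reproduce a right-going wave with |p| ≈ 1.

```python
import numpy as np

def helmholtz_1d(n, length, k, p_left=1.0 + 0.0j):
    # Second-order finite differences for p'' + k^2 p = 0 on (0, L):
    # p(0) = p_left (source); the right end uses the one-sided radiation
    # condition p'(L) = i*k*p(L), giving a complex-valued linear system.
    h = length / n
    main = np.full(n, -2.0 + (k * h) ** 2, dtype=complex)
    A = np.diag(main) + np.diag(np.ones(n - 1, complex), 1) + np.diag(np.ones(n - 1, complex), -1)
    b = np.zeros(n, dtype=complex)
    b[0] = -p_left                    # fold the Dirichlet value into the RHS
    A[-1, -1] = -1.0 + 1j * k * h     # first-order absorbing boundary row
    A[-1, -2] = 1.0
    return np.linalg.solve(A, b)

p = helmholtz_1d(n=400, length=1.0, k=2 * np.pi * 5)  # ~5 wavelengths, 80 pts/wavelength
print(np.allclose(np.abs(p), 1.0, atol=0.05))         # outgoing plane wave: |p| ~ 1
```

The paper's solver replaces this dense direct solve with a sparse iterative solve preconditioned by the complex shifted Laplacian, which is what makes large 3D Helmholtz problems tractable.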
Directory of Open Access Journals (Sweden)
Joko Siswantoro
2014-11-01
Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle, but this method is inaccurate and destructive. Computer vision offers an accurate and nondestructive way of measuring the volume of a food product. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was performed to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
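The silhouette-based Monte Carlo step can be sketched as follows: sample random points in a bounding box and count those whose projections fall inside every silhouette. With finitely many views this estimates the visual hull, an upper bound on the true volume. For a unit sphere seen from three orthogonal views (a simplification of the paper's five views), the hull is the Steinmetz tricylinder solid of volume 8(2−√2) ≈ 4.686, versus the true 4π/3 ≈ 4.189, which is one reason more views help.

```python
import random

def mc_volume(inside_silhouettes, bounds, n=200000, seed=1):
    # Monte Carlo volume estimate: fraction of random points in the bounding
    # box whose projections lie inside every object silhouette.
    random.seed(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        inside_silhouettes(
            random.uniform(x0, x1), random.uniform(y0, y1), random.uniform(z0, z1)
        )
        for _ in range(n)
    )
    return box * hits / n

# Toy object: unit sphere; each orthogonal silhouette is a unit disk.
def sphere_silhouettes(x, y, z):
    return x * x + y * y <= 1.0 and x * x + z * z <= 1.0 and y * y + z * z <= 1.0

v = mc_volume(sphere_silhouettes, ((-1, 1), (-1, 1), (-1, 1)))
print(round(v, 2))  # close to the tricylinder volume 4.686, above the sphere's 4.189
```

The silhouette-containment test here is an analytic stand-in for the paper's image-based silhouettes.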
ACARP Project C10059. ACARP manual of modern coal testing methods. Volume 1: The manual
Energy Technology Data Exchange (ETDEWEB)
Sakurovs, R.; Creelman, R.; Pohl, J.; Juniper, L. [CSIRO Energy Technology, Sydney, NSW (Australia)
2002-07-01
The Manual summarises the purpose, applicability, and limitations of a range of standard and modern coal testing methods that have potential to assist the coal company technologist to better evaluate coal performance. The first volume sets out the Modern Coal Testing Methods in summarised form that can be used as a quick guide to practitioners to assist in selecting the best technique to solve their problems.
Precision of a new bedside method for estimation of the circulating blood volume
DEFF Research Database (Denmark)
Christensen, P; Eriksen, B; Henneberg, S W
1993-01-01
The present study is a theoretical and experimental evaluation of a modification of the carbon monoxide method for estimation of the circulating blood volume (CBV) with respect to the precision of the method. The CBV was determined from measurements of the CO-saturation of hemoglobin before and a......, determination of CBV can be performed with an amount of CO that gives rise to a harmless increase in the carboxyhemoglobin concentration.(ABSTRACT TRUNCATED AT 250 WORDS)...
New finite volume methods for approximating partial differential equations on arbitrary meshes
International Nuclear Information System (INIS)
Hermeline, F.
2008-12-01
This dissertation presents some new methods of finite volume type for approximating partial differential equations on arbitrary meshes. The main idea lies in solving the problem to be dealt with twice. One addresses elliptic equations with variable (anisotropic, antisymmetric, discontinuous) coefficients, parabolic linear or nonlinear equations (heat equation, radiative diffusion, magnetic diffusion with Hall effect), wave-type equations (Maxwell, acoustics), and the elasticity and Stokes equations. Numerous numerical experiments show the good behaviour of this type of method. (author)
DEFF Research Database (Denmark)
Thorborg, Jesper
, however, is constituted by the implementation of the $J_2$ flow theory in the control volume method. To apply the control volume formulation to the process of hardening concrete, viscoelastic stress-strain models have been examined in terms of various rheological models. The generalized 3D models are based...... on two different suggestions in the literature, that is, compressible or incompressible behaviour of the viscous response in the dashpot element. Numerical implementation of the models has shown very good agreement with corresponding analytical solutions. The viscoelastic solid mechanical model is used....
Simulation of pore pressure accumulation under cyclic loading using Finite Volume Method
DEFF Research Database (Denmark)
Tang, Tian; Hededal, Ole
2014-01-01
This paper presents a finite volume implementation of a porous, nonlinear soil model capable of simulating pore pressure accumulation under cyclic loading. The mathematical formulations are based on modified Biot’s coupled theory by substituting the original elastic constitutive model...... with an advanced elastoplastic model suitable for describing monotonic as well as cyclic loading conditions. The finite volume method is applied to discretize these formulations. The resulting set of coupled nonlinear algebraic equations are then solved by a ’segregated’ solution procedure. An efficient return...
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.
Directory of Open Access Journals (Sweden)
Arheden Håkan
2011-04-01
Full Text Available Abstract Background Functional and morphological changes of the heart influence blood flow patterns. Therefore, flow patterns may carry diagnostic and prognostic information. Three-dimensional, time-resolved, three-directional phase contrast cardiovascular magnetic resonance (4D PC-CMR can image flow patterns with unique detail, and using new flow visualization methods may lead to new insights. The aim of this study is to present and validate a novel visualization method with a quantitative potential for blood flow from 4D PC-CMR, called Volume Tracking, and investigate if Volume Tracking complements particle tracing, the most common visualization method used today. Methods Eight healthy volunteers and one patient with a large apical left ventricular aneurysm underwent 4D PC-CMR flow imaging of the whole heart. Volume Tracking and particle tracing visualizations were compared visually side-by-side in a visualization software package. To validate Volume Tracking, the number of particle traces that agreed with the Volume Tracking visualizations was counted and expressed as a percentage of total released particles in mid-diastole and end-diastole respectively. Two independent observers described blood flow patterns in the left ventricle using Volume Tracking visualizations. Results Volume Tracking was feasible in all eight healthy volunteers and in the patient. Visually, Volume Tracking and particle tracing are complementary methods, showing different aspects of the flow. When validated against particle tracing, on average 90.5% and 87.8% of the particles agreed with the Volume Tracking surface in mid-diastole and end-diastole respectively. Inflow patterns in the left ventricle varied between the subjects, with excellent agreement between observers. The left ventricular inflow pattern in the patient differed from the healthy subjects. Conclusion Volume Tracking is a new visualization method for blood flow measured by 4D PC-CMR. Volume Tracking
Curvature computation in volume-of-fluid method based on point-cloud sampling
Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.
2018-01-01
This work proposes a novel approach to computing interface curvature in multiphase flow simulation based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the interface tension force estimates, often resulting in inaccurate results for interface tension dominated flows. Many techniques have been presented over recent years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® by extending its standard VOF implementation, the interFoam solver.
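The point-cloud curvature idea above can be illustrated with a minimal 2D sketch: fit a circle to a local cloud of interface points by a least-squares (Kåsa) fit and take curvature = 1/R. This is an illustrative stand-in, not the interFoam extension described in the record; `circle_curvature` and `solve3` are hypothetical names.

```python
import math

def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def circle_curvature(points):
    """Least-squares (Kasa) circle fit to 2D points; returns curvature 1/R.
    Model: x^2 + y^2 = 2a*x + 2b*y + c, with c = R^2 - a^2 - b^2."""
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            t[i] += row[i] * z
            for j in range(3):
                S[i][j] += row[i] * row[j]
    a2, b2, c = solve3(S, t)
    a, b = 0.5 * a2, 0.5 * b2
    return 1.0 / math.sqrt(c + a * a + b * b)
```

For points sampled exactly on a circle the fit is exact; with noisy interface points the normal-equation solve returns the least-squares circle.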
A simple method to estimate restoration volume as a possible predictor for tooth fracture.
Sturdevant, J R; Bader, J D; Shugars, D A; Steet, T C
2003-08-01
Many dentists cite the fracture risk posed by a large existing restoration as a primary reason for their decision to place a full-coverage restoration. However, there is poor agreement among dentists as to when restoration placement is necessary because of the inability to make objective measurements of restoration size. The purpose of this study was to compare a new method of estimating restoration volumes in posterior teeth with analytically determined volumes. True restoration volume proportion (RVP) was determined for 96 melamine typodont teeth: 24 each of maxillary second premolar, mandibular second premolar, maxillary first molar, and mandibular first molar. Each group of 24 was subdivided into 3 groups to receive an O, MO, or MOD amalgam preparation design. Each preparation design was further subdivided into 4 groups of increasingly larger size. The density of the amalgam used was calculated according to ANSI/ADA Specification 1. The teeth were weighed before and after restoration with amalgam. Restoration weight was calculated, and the density of amalgam was used to calculate restoration volume. A liquid pycnometer was used to calculate coronal volume after sectioning the anatomic crown from the root horizontally at the cementoenamel junction. True RVP was calculated by dividing restoration volume by coronal volume. An occlusal photograph and a bitewing radiograph were made of each restored tooth to provide 2 perpendicular views. Each image was digitized, and software was used to measure the percentage of the anatomic crown restored with amalgam. Estimated RVP was calculated by multiplying the percentages of the anatomic crown restored from the 2 views together. Pearson correlation coefficients were used to compare estimated RVP with true RVP. The Pearson correlation coefficient of true RVP with estimated RVP was 0.97 overall, indicating that the estimate closely tracks the true volume of restorative material in coronal tooth structure. The fact that it can be done in a nondestructive manner makes it attractive for
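The volume arithmetic in the record above reduces to a few formulas: restoration volume from weight gain and amalgam density, true RVP as a ratio of volumes, and estimated RVP as the product of the restored-crown fractions seen in the two perpendicular views. A minimal sketch (hypothetical function names, illustrative numbers):

```python
def true_rvp(mass_before_g, mass_after_g, amalgam_density_g_per_ml, coronal_volume_ml):
    """True restoration volume proportion: restoration volume (from weight
    gain and amalgam density) divided by coronal volume."""
    restoration_volume_ml = (mass_after_g - mass_before_g) / amalgam_density_g_per_ml
    return restoration_volume_ml / coronal_volume_ml

def estimated_rvp(occlusal_fraction, proximal_fraction):
    """Estimated RVP: product of the restored-crown fractions measured in
    two perpendicular views (occlusal photograph and bitewing radiograph)."""
    return occlusal_fraction * proximal_fraction
```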
Pan, Bing; Wang, Bo
2017-10-01
Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
A New Class of Non-Linear, Finite-Volume Methods for Vlasov Simulation
International Nuclear Information System (INIS)
Banks, J.W.; Hittinger, J.A.
2010-01-01
Methods for the numerical discretization of the Vlasov equation should efficiently use the phase space discretization and should introduce only enough numerical dissipation to promote stability and control oscillations. A new high-order, non-linear, finite-volume algorithm for the Vlasov equation that discretely conserves particle number and controls oscillations is presented. The method is fourth-order in space and time in well-resolved regions, but smoothly reduces to a third-order upwind scheme as features become poorly resolved. The new scheme is applied to several standard problems for the Vlasov-Poisson system, and the results are compared with those from other finite-volume approaches, including an artificial viscosity scheme and the Piecewise Parabolic Method. It is shown that the new scheme is able to control oscillations while preserving a higher degree of fidelity of the solution than the other approaches.
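The high-order/upwind hybridisation described above can be caricatured in 1D: a finite-volume advection step using a limited (minmod) reconstruction, higher-order in smooth regions and falling back to first-order upwind at extrema. This toy is at best second-order and is not the authors' fourth-order scheme; it only illustrates the discrete conservation property of flux-difference updates.

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema, the smaller slope otherwise."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def fv_advect_step(u, cfl):
    """One forward-Euler finite-volume step for u_t + u_x = 0 on a periodic
    grid (unit advection speed), with MUSCL/minmod reconstruction.  The
    update is conservative because it is a difference of face fluxes."""
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    face = [u[i] + 0.5 * slope[i] for i in range(n)]  # upwind value at right face of cell i
    return [u[i] - cfl * (face[i] - face[i - 1]) for i in range(n)]
```

With periodic indexing the flux differences telescope, so the cell average sum (the particle number analogue) is conserved to round-off.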
International Nuclear Information System (INIS)
Nagai, Katsuaki; Ushijima, Satoru
2010-01-01
A numerical prediction method has been proposed for Bingham plastic fluids with a free surface in a two-dimensional container. Since linear relationships between stress tensors and strain rate tensors cannot be assumed for non-Newtonian fluids, the liquid motions are described with the Cauchy momentum equations rather than the Navier-Stokes equations. The profile of the liquid surface is represented with two-dimensional curvilinear coordinates, which are recomputed in each computational step on the basis of the arbitrary Lagrangian-Eulerian (ALE) method. Since the volumes of the fluid cells change transiently in the physical space, the geometric conservation law is applied to the finite volume discretizations. As a result, it has been shown that the present method enables reasonable prediction of Bingham plastic fluids with a free surface in a container.
Nagai, Katsuaki; Ushijima, Satoru
2010-06-01
A numerical prediction method has been proposed for Bingham plastic fluids with a free surface in a two-dimensional container. Since linear relationships between stress tensors and strain rate tensors cannot be assumed for non-Newtonian fluids, the liquid motions are described with the Cauchy momentum equations rather than the Navier-Stokes equations. The profile of the liquid surface is represented with two-dimensional curvilinear coordinates, which are recomputed in each computational step on the basis of the arbitrary Lagrangian-Eulerian (ALE) method. Since the volumes of the fluid cells change transiently in the physical space, the geometric conservation law is applied to the finite volume discretizations. As a result, it has been shown that the present method enables reasonable prediction of Bingham plastic fluids with a free surface in a container.
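The constitutive behaviour behind the two records above is the Bingham plastic law: no flow below a yield stress, linear viscous response beyond it. A common regularized form (an assumption for illustration, not necessarily the authors' formulation) keeps the effective viscosity finite as the shear rate vanishes:

```python
def bingham_effective_viscosity(shear_rate, yield_stress, plastic_viscosity, eps=1e-6):
    """Regularized Bingham viscosity: mu_p + tau_y / (|gamma_dot| + eps).
    eps keeps the effective viscosity finite at vanishing shear rate."""
    return plastic_viscosity + yield_stress / (abs(shear_rate) + eps)

def bingham_stress(shear_rate, yield_stress, plastic_viscosity, eps=1e-6):
    """Shear stress of the regularized Bingham model: mu_eff * gamma_dot."""
    return bingham_effective_viscosity(shear_rate, yield_stress, plastic_viscosity, eps) * shear_rate
```

At high shear rates the stress approaches the ideal Bingham value tau_y + mu_p * gamma_dot, while the regularization avoids the singular viscosity at rest.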
A spatial discretization of the MHD equations based on the finite volume - spectral method
International Nuclear Information System (INIS)
Miyoshi, Takahiro
2000-05-01
Based on the finite volume - spectral method, we present new discretization formulae for the spatial differential operators in the full system of the compressible MHD equations. In this approach, the cell-centered finite volume method is adopted in a bounded plane (poloidal plane), while the spectral method is applied to the derivative with respect to the periodic direction perpendicular to the poloidal plane (toroidal direction). Here, an unstructured grid system composed of arbitrary triangular elements is utilized for constructing the cell-centered finite volume method. In order to maintain the divergence-free constraint of the magnetic field numerically, only the poloidal component of the rotation is defined at the three edges of each triangular element. This poloidal component is evaluated under the assumption that the toroidal component of the operated vector times the radius, RA_φ, is linearly distributed in the element. The present method will be applied to nonlinear MHD dynamics in a realistic torus geometry without numerical singularities. (author)
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must take the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution, as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
Energy Technology Data Exchange (ETDEWEB)
Yassi, Nawaf; Campbell, Bruce C.V.; Davis, Stephen M.; Bivard, Andrew [The University of Melbourne, Departments of Medicine and Neurology, Melbourne Brain Centre at The Royal Melbourne Hospital, Parkville, Victoria (Australia); Moffat, Bradford A.; Steward, Christopher; Desmond, Patricia M. [The University of Melbourne, Department of Radiology, The Royal Melbourne Hospital, Parkville (Australia); Churilov, Leonid [The University of Melbourne, The Florey Institute of Neurosciences and Mental Health, Parkville (Australia); Parsons, Mark W. [University of Newcastle and Hunter Medical Research Institute, Priority Research Centre for Translational Neuroscience and Mental Health, Newcastle (Australia)
2015-07-15
Longitudinal brain volume changes have been investigated in a number of cerebral disorders as a surrogate marker of clinical outcome. In stroke, unique methodological challenges are posed by dynamic structural changes occurring after onset, particularly those relating to the infarct lesion. We aimed to evaluate agreement between different analysis methods for the measurement of post-stroke brain volume change, and to explore technical challenges inherent to these methods. Fifteen patients with anterior circulation stroke underwent magnetic resonance imaging within 1 week of onset and at 1 and 3 months. Whole-brain as well as grey- and white-matter volume were estimated separately using both an intensity-based and a surface watershed-based algorithm. In the case of the intensity-based algorithm, the analysis was also performed with and without exclusion of the infarct lesion. Due to the effects of peri-infarct edema at the baseline scan, longitudinal volume change was measured as percentage change between the 1 and 3-month scans. Intra-class and concordance correlation coefficients were used to assess agreement between the different analysis methods. Reduced major axis regression was used to inspect the nature of bias between measurements. Overall agreement between methods was modest with strong disagreement between some techniques. Measurements were variably impacted by procedures performed to account for infarct lesions. Improvements in volumetric methods and consensus between methodologies employed in different studies are necessary in order to increase the validity of conclusions derived from post-stroke cerebral volumetric studies. Readers should be aware of the potential impact of different methods on study conclusions. (orig.)
International Nuclear Information System (INIS)
Yassi, Nawaf; Campbell, Bruce C.V.; Davis, Stephen M.; Bivard, Andrew; Moffat, Bradford A.; Steward, Christopher; Desmond, Patricia M.; Churilov, Leonid; Parsons, Mark W.
2015-01-01
Longitudinal brain volume changes have been investigated in a number of cerebral disorders as a surrogate marker of clinical outcome. In stroke, unique methodological challenges are posed by dynamic structural changes occurring after onset, particularly those relating to the infarct lesion. We aimed to evaluate agreement between different analysis methods for the measurement of post-stroke brain volume change, and to explore technical challenges inherent to these methods. Fifteen patients with anterior circulation stroke underwent magnetic resonance imaging within 1 week of onset and at 1 and 3 months. Whole-brain as well as grey- and white-matter volume were estimated separately using both an intensity-based and a surface watershed-based algorithm. In the case of the intensity-based algorithm, the analysis was also performed with and without exclusion of the infarct lesion. Due to the effects of peri-infarct edema at the baseline scan, longitudinal volume change was measured as percentage change between the 1 and 3-month scans. Intra-class and concordance correlation coefficients were used to assess agreement between the different analysis methods. Reduced major axis regression was used to inspect the nature of bias between measurements. Overall agreement between methods was modest with strong disagreement between some techniques. Measurements were variably impacted by procedures performed to account for infarct lesions. Improvements in volumetric methods and consensus between methodologies employed in different studies are necessary in order to increase the validity of conclusions derived from post-stroke cerebral volumetric studies. Readers should be aware of the potential impact of different methods on study conclusions. (orig.)
Czech Academy of Sciences Publication Activity Database
Berezovski, A.; Kolman, Radek; Blažek, Jiří; Kopačka, Ján; Gabriel, Dušan; Plešek, Jiří
2014-01-01
Roč. 19, č. 12 (2014) ISSN 1435-4934. [European Conference on Non-Destructive Testing (ECNDT 2014) /11./. Praha, 06.10.2014-10.10.2014] R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315 Institutional support: RVO:61388998 Keywords : elastic wave propagation * finite element method * isogeometric analysis * finite volume method * stress discontinuities * spurious oscillations Subject RIV: JR - Other Machinery http://www.ndt.net/events/ECNDT2014/app/content/Paper/25_Berezovski_Rev1.pdf
Flow simulation of a Pelton bucket using finite volume particle method
International Nuclear Information System (INIS)
Vessaz, C; Jahanbakhsh, E; Avellan, F
2014-01-01
The objective of the present paper is to perform an accurate numerical simulation of the high-speed water jet impinging on a Pelton bucket. To reach this goal, the Finite Volume Particle Method (FVPM) is used to discretize the governing equations. FVPM is an arbitrary Lagrangian-Eulerian method, which combines attractive features of Smoothed Particle Hydrodynamics and the conventional mesh-based Finite Volume Method. This method is able to satisfy free surface and no-slip wall boundary conditions precisely. The fluid flow is assumed weakly compressible and the wall boundary is represented by one layer of particles located on the bucket surface. In the present study, the flow in a stationary bucket is investigated for three different impinging angles: 72°, 90° and 108°. The particle resolution is first validated by a convergence study. Then, the FVPM results are validated against available experimental data and conventional grid-based Volume Of Fluid simulations. It is shown that the wall pressure field is in good agreement with the experimental and numerical data. Finally, the torque evolution and water sheet location are presented for a simulation of five rotating Pelton buckets.
Noninvasive measurement of cardiopulmonary blood volume: evaluation of the centroid method
International Nuclear Information System (INIS)
Fouad, F.M.; MacIntyre, W.J.; Tarazi, R.C.
1981-01-01
Cardiopulmonary blood volume (CPV) and mean pulmonary transit time (MTT) determined by radionuclide measurements (Tc-99m HSA) were compared with values obtained from simultaneous dye-dilution (DD) studies (indocyanine green). The mean transit time was obtained from radionuclide curves by two methods: the peak-to-peak time and the interval between the two centroids determined from the right and left-ventricular time-concentration curves. Correlation of dye-dilution MTT and peak-to-peak time was significant (r = 0.79, p < 0.001), but its correlation with centroid-derived values was better (r = 0.86, p < 0.001). CPV values (using the centroid method for radionuclide technique) correlated significantly with values derived from dye-dilution curves (r = 0.74, p < 0.001). Discrepancies between the two were greater the more rapid the circulation (r = 0.61, p < 0.01), suggesting that minor inaccuracies of dye-dilution methods, due to positioning or delay of the system, can become magnified in hyperkinetic conditions. The radionuclide method is simple, repeatable, and noninvasive, and it provides simultaneous evaluation of pulmonary and systemic hemodynamics. Further, calculation of the ratio of cardiopulmonary to total blood volume can be used as an index of overall venous distensibility and relocation of intravascular blood volume
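The centroid method above amounts to taking the first moment of each ventricular time-concentration curve; the mean transit time is the difference between the two centroids, and CPV follows from the indicator-dilution relation CPV = flow × MTT. A minimal sketch with hypothetical names:

```python
def centroid_time(times, counts):
    """First moment (centroid) of a time-activity curve."""
    return sum(t * c for t, c in zip(times, counts)) / sum(counts)

def cardiopulmonary_blood_volume(times, rv_counts, lv_counts, flow_ml_per_s):
    """Indicator-dilution relation CPV = flow x MTT, with MTT taken as the
    difference between the left- and right-ventricular curve centroids."""
    mtt = centroid_time(times, lv_counts) - centroid_time(times, rv_counts)
    return flow_ml_per_s * mtt
```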
International Nuclear Information System (INIS)
Ford, P.J.; Turina, P.J.; Seely, D.E.
1984-12-01
Investigations at hazardous waste sites and sites of chemical spills often require on-site measurements and sampling activities to assess the type and extent of contamination. This document is a compilation of sampling methods and materials suitable to address most needs that arise during routine waste site and hazardous spill investigations. The sampling methods presented in this document are compiled by media, and were selected on the basis of practicality, economics, representativeness, compatibility with analytical considerations, and safety, as well as other criteria. In addition to sampling procedures, sample handling and shipping, chain-of-custody procedures, instrument certification, equipment fabrication, and equipment decontamination procedures are described. Sampling methods for soil, sludges, sediments, and bulk materials cover the solids medium. Ten methods are detailed for surface waters, groundwater and containerized liquids; twelve are presented for ambient air, soil gases and vapors, and headspace gases. A brief discussion of ionizing radiation survey instruments is also provided.
An enhanced finite volume method to model 2D linear elastic structures
CSIR Research Space (South Africa)
Suliman, Ridhwaan
2014-04-01
Full Text Available (preprint submitted to Applied Mathematical Modelling, July 22, 2013). Keywords: finite volume, finite element, locking, error analysis. From the introduction: Since the 1960s, the finite element method has mainly been used for modelling the mechanics... formulation provides higher accuracy for displacement solutions. It is well known that the linear finite element formulation suffers from sensitivity to element aspect ratio or shear locking when subjected to bending [16]. Fallah [8] and Wheel [6] present...
Application of the finite volume method in the simulation of saturated flows of binary mixtures
International Nuclear Information System (INIS)
Murad, M.A.; Gama, R.M.S. da; Sampaio, R.
1989-12-01
This work presents the simulation of saturated flows of an incompressible Newtonian fluid through a rigid, homogeneous and isotropic porous medium. The employed mathematical model is derived from the Continuum Theory of Mixtures and generalizes the classical one which is based on Darcy's Law form of the momentum equation. In this approach fluid and porous matrix are regarded as continuous constituents of a binary mixture. The finite volume method is employed in the simulation. (author) [pt
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, the biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found as to which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps involved in averaging mixing ratios obtained from logarithmic retrievals.
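The core bias can be reproduced in two lines: averaging retrieved logarithms and exponentiating yields the geometric mean, which by the AM-GM inequality never exceeds the linear mean, and the gap grows with local variability. A sketch:

```python
import math

def linear_mean(xs):
    """Arithmetic mean of the abundances themselves."""
    return sum(xs) / len(xs)

def log_mean(xs):
    """Exponentiated mean of the logarithms (the geometric mean) -- what
    naive averaging of log-retrieved profiles effectively reports."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))
```

For abundances [1, 9] the linear mean is 5 while the log mean is 3, a 40% low bias; for nearly constant abundances the two coincide.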
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
International Nuclear Information System (INIS)
Lyman, J.T.; Wolbarst, A.B.
1987-01-01
To predict the likelihood of success of a therapeutic strategy, one must be able to assess the effects of the treatment upon both diseased and healthy tissues. This paper proposes a method for determining the probability that a healthy organ that receives a non-uniform distribution of X-irradiation, heat, chemotherapy, or other agent will escape complications. Starting with any given dose distribution, a dose-cumulative-volume histogram for the organ is generated. This is then reduced by an interpolation scheme (involving the volume-weighting of complication probabilities) to a slightly different histogram that corresponds to the same overall likelihood of complications, but which contains one less step. The procedure is repeated, one step at a time, until there remains a final, single-step histogram, for which the complication probability can be determined. The formalism makes use of a complication response function C(D, V) which, for the given treatment schedule, represents the probability of complications arising when the fraction V of the organ receives dose D and the rest of the organ gets none. Although the data required to generate this function are sparse at present, it should be possible to obtain the necessary information from in vivo and clinical studies. Volume effects are taken explicitly into account in two ways: the precise shape of the patient's histogram is employed in the calculation, and the complication response function is a function of the volume
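One plausible reading of the step-by-step histogram reduction above is: merge two steps of the dose-cumulative-volume histogram into a single step whose dose is chosen so that the complication response function C(D, V) reproduces the volume-weighted mean of the per-step complication probabilities. The sketch below is a simplified interpretation, not the authors' exact interpolation scheme; `merge_steps` assumes C is monotone increasing in dose.

```python
def merge_steps(C, v1, d1, v2, d2, d_lo=0.0, d_hi=200.0, tol=1e-9):
    """Merge two histogram steps (volume v1 at dose d1, v2 at dose d2) into
    one step of volume v1+v2 at an equivalent dose, found by bisection so
    that C(d, v1+v2) matches the volume-weighted mean complication
    probability of the two original steps."""
    v = v1 + v2
    target = (v1 * C(d1, v) + v2 * C(d2, v)) / v
    while d_hi - d_lo > tol:
        mid = 0.5 * (d_lo + d_hi)
        if C(mid, v) < target:
            d_lo = mid
        else:
            d_hi = mid
    return 0.5 * (d_lo + d_hi)
```

Repeating the merge step-by-step collapses the full histogram to a single step whose complication probability can then be read off C directly.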
Report of a CSNI workshop on uncertainty analysis methods. Volume 1 + 2
International Nuclear Information System (INIS)
Wickett, A.J.; Yadigaroglu, G.
1994-08-01
The OECD NEA CSNI Principal Working Group 2 (PWG2) Task Group on Thermal Hydraulic System Behaviour (TGTHSB) has, in recent years, received presentations of a variety of different methods to analyze the uncertainty in the calculations of advanced unbiased (best estimate) codes. Proposals were also made for an International Standard Problem (ISP) to compare the uncertainty analysis methods. The objectives for the Workshop were to discuss and fully understand the principles of uncertainty analysis relevant to LOCA modelling and like problems, to examine the underlying issues from first principles, in preference to comparing and contrasting the currently proposed methods, to reach consensus on the issues identified as far as possible while not avoiding the controversial aspects, to identify as clearly as possible unreconciled differences, and to issue a Status Report. Eight uncertainty analysis methods were presented. A structured discussion of various aspects of uncertainty analysis followed - the need for uncertainty analysis, identification and ranking of uncertainties, characterisation, quantification and combination of uncertainties and applications, resources and future developments. As a result, the objectives set out above were, to a very large extent, achieved. Plans for the ISP were also discussed. Volume 1 contains a record of the discussions on uncertainty methods. Volume 2 is a compilation of descriptions of the eight uncertainty analysis methods presented at the workshop
Yu, Tsung-Hsien; Tung, Yu-Chi; Chung, Kuo-Piao
2015-08-01
Volume-infection relation studies have been published for high-risk surgical procedures, although the conclusions remain controversial. Inconsistent results may be caused by inconsistent categorization methods, the definitions of service volume, and different statistical approaches. The purpose of this study was to examine whether a relation exists between provider volume and coronary artery bypass graft (CABG) surgical site infection (SSI) using different categorization methods. A population-based cross-sectional multi-level study was conducted. A total of 10,405 patients who received CABG surgery between 2006 and 2008 in Taiwan were recruited. The outcome of interest was surgical site infection after CABG surgery. The associations among several patient, surgeon, and hospital characteristics were examined. Surgeons' and hospitals' service volume was defined as the cumulative CABG service volume in the previous year for each CABG operation and categorized by three types of approaches: continuous, quartile, and k-means clustering. The results of multi-level mixed effects modeling showed that hospital volume had no association with SSI. Although the relation between surgeon volume and surgical site infection was negative, it was inconsistent among the different categorization methods. Categorization of service volume is an important issue in volume-infection studies. The findings of the current study suggest that different categorization methods might influence the relation between volume and SSI. The selection of an optimal cutoff point should be taken into account in future research.
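The three categorization approaches can be sketched for the service-volume variable: continuous (use the raw value), quartile cut-points, and 1-D k-means clustering. Minimal stand-ins for the latter two (hypothetical names; not the study's code):

```python
def quartile_labels(volumes):
    """Label each provider volume with its sample quartile (0 = lowest)."""
    s = sorted(volumes)
    n = len(s)
    cuts = [s[n // 4], s[n // 2], s[(3 * n) // 4]]
    return [sum(v >= c for c in cuts) for v in volumes]

def kmeans_1d(volumes, k=3, iters=100):
    """Plain 1-D Lloyd k-means on provider volumes; returns cluster centers.
    Centers start evenly spread over the data range."""
    lo, hi = min(volumes), max(volumes)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in volumes:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        new = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return centers
```

As the record notes, the choice among such schemes (and the resulting cut-points) can itself change the estimated volume-outcome relation.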
Semiautomatic volume of interest drawing for 18F-FDG image analysis - method and preliminary results
International Nuclear Information System (INIS)
Green, A.J.; Baig, S.; Begent, R.H.J.; Francis, R.J.
2008-01-01
Functional imaging of cancer adds important information to conventional measurements in monitoring response. Serial 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET), which indicates changes in glucose metabolism in tumours, shows great promise for this. However, there is a need for a method to quantitate alterations in uptake of FDG which accounts for changes in tumour volume and intensity of FDG uptake. Selection of regions or volumes [ROI or volumes of interest (VOI)] by hand drawing, or simple thresholding, suffers from operator-dependent drawbacks. We present a simple, robust VOI growing method for this application. The method requires a single seed point within the visualised tumour and another in relevant normal tissue. The drawn tumour VOI is insensitive to operator inconsistency and is thus a suitable basis for comparative measurements. The method is validated using a software phantom. We demonstrate the use of the method in the assessment of tumour response in 31 patients receiving chemotherapy for various carcinomas. Valid assessment of tumour response could be made 2-4 weeks after starting chemotherapy, giving information for clinical decision making which would otherwise have taken 9-12 weeks. Survival was predicted from FDG-PET 2-4 weeks after starting chemotherapy (p = 0.04), and after 9-12 weeks FDG-PET gave a better prediction of survival (p = 0.002) than CT or MRI (p = 0.015). FDG-PET using this method of analysis has potential as a routine tool for optimising the use of chemotherapy and improving its cost effectiveness. It also has potential for increasing the accuracy of response assessment in clinical trials of novel therapies. (orig.)
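The two-seed VOI growing rule can be sketched in 2D (the record concerns 3D PET volumes): grow from the tumour seed, keeping connected pixels whose uptake exceeds a threshold set midway between the tumour-seed and normal-tissue-seed values. The midpoint threshold is an assumption for illustration, not necessarily the paper's criterion.

```python
from collections import deque

def grow_voi(image, tumour_seed, normal_seed):
    """Grow a VOI from tumour_seed on a 2D uptake image: breadth-first
    search over 4-connected pixels whose value is at least the midpoint
    between the tumour-seed and normal-tissue-seed values."""
    def value(p):
        x, y = p
        return image[y][x]
    thresh = 0.5 * (value(tumour_seed) + value(normal_seed))
    h, w = len(image), len(image[0])
    voi = {tumour_seed}
    queue = deque([tumour_seed])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in voi and image[ny][nx] >= thresh:
                voi.add((nx, ny))
                queue.append((nx, ny))
    return voi
```

Because the threshold is derived from the two seed values rather than drawn by hand, the resulting VOI is reproducible across operators, which is the property the record emphasises.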
Energy Technology Data Exchange (ETDEWEB)
Ryu, Seung Yeob [Korea Atomic Energy Research Institute (KAERI), Daejeon (Korea, Republic of); Ko, Sung Ho [Dept. of Mechanical Design Engineering, Chungnam National University, Daejeon (Korea, Republic of)
2012-08-15
The volume of fluid (VOF) model of FLUENT and the lattice Boltzmann method (LBM) are used to simulate two-phase flows. Both methods are validated for static and dynamic bubble test cases and then compared to experimental results. The VOF method does not reduce the spurious currents of the static droplet test and does not satisfy the Laplace law for small droplets at the acceptable level, as compared with the LBM. For single bubble flows, simulations are executed for various Eotvos numbers, Morton numbers and Reynolds numbers, and the results of both methods agree well with the experiments in the case of low Eotvos numbers. For high Eotvos numbers, the VOF results deviated from the experiments. For multiple bubbles, the bubble flow characteristics are related by the wake of the leading bubble. The coaxial and oblique coalescence of the bubbles are simulated successfully and the subsequent results are presented. In conclusion, the LBM performs better than the VOF method.
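The static droplet benchmark mentioned above checks the Young-Laplace law: the pressure jump across the interface is σ/R for a 2D circle and 2σ/R for a 3D sphere. A sketch of the analytic value and the relative-error metric used in such comparisons (hypothetical function names):

```python
def laplace_pressure_jump(sigma, radius, dim=3):
    """Analytic Young-Laplace pressure jump: sigma/R for a 2D circle,
    2*sigma/R for a 3D sphere."""
    return (dim - 1) * sigma / radius

def relative_laplace_error(dp_measured, sigma, radius, dim=3):
    """Relative deviation of a simulated pressure jump from the exact value,
    a common static-droplet benchmark metric."""
    dp_exact = laplace_pressure_jump(sigma, radius, dim)
    return abs(dp_measured - dp_exact) / dp_exact
```

For a water-like surface tension of 0.07 N/m and a 1 mm droplet the exact 3D jump is 140 Pa, so a simulated jump of 138.6 Pa corresponds to a 1% error.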
Extrusion Process by Finite Volume Method Using OpenFoam Software
International Nuclear Information System (INIS)
Matos Martins, Marcelo; Tonini Button, Sergio; Divo Bressan, Jose; Ivankovic, Alojz
2011-01-01
Computational codes are important tools for solving engineering problems. The analysis of metal forming processes such as extrusion is no exception, since computational codes allow the process to be analysed at reduced cost. Traditionally, the Finite Element Method is used to solve solid mechanics problems; however, the Finite Volume Method (FVM) has been gaining ground in this field of application. This paper presents velocity field and friction coefficient variation results obtained by numerical simulation of an aluminium direct cold extrusion process, using the OpenFoam software and the FVM.
Hanford environmental analytical methods: Methods as of March 1990. Volume 3, Appendix A2-I
Energy Technology Data Exchange (ETDEWEB)
Goheen, S.C.; McCulloch, M.; Daniel, J.L.
1993-05-01
This paper from the analytical laboratories at Hanford describes the method used to measure pH of single-shell tank core samples. Sludge or solid samples are mixed with deionized water. The pH electrode used combines both a sensor and reference electrode in one unit. The meter amplifies the input signal from the electrode and displays the pH visually.
A large volume striped bass egg incubation chamber: design and comparison with a traditional method
Harper, C.J.
2009-01-01
I conducted a comparative study of a new jar design (experimental chamber) with a standard egg incubation vessel (McDonald jar). Experimental chambers measured 0.4 m in diameter by 1.3 m in height and had a volume of 200 L. McDonald hatching jars measured 16 cm in diameter by 45 cm in height and had a volume of 6 L. Post-hatch survival was estimated at 48, 96 and 144 h. Stocking rates resulted in an average egg density of 21.9 eggs/ml (range = 21.6 - 22.1) for McDonald jars and 10.9 eggs/ml (range = 7.0 - 16.8) for experimental chambers. I was unable to detect an effect of container type on survival to 48, 96 or 144 h. At 144 h, striped bass fry survival averaged 37.3% for McDonald jars and 34.2% for experimental chambers. Survival among replicates was significantly different. Survival of striped bass significantly decreased between 96 and 144 h. Mean survival among replicates ranged from 12.4 to 57.3%. I was unable to detect an effect of initial stocking density on survival. Experimental chambers allow for incubation of a larger number of eggs in a much smaller space. As hatchery production is often limited by space or water supply, experimental chambers offer an alternative to extending spawning activities, thereby reducing manpower and cost. However, the increase in the number of eggs per rearing container does increase the risk associated with catastrophic loss of a production unit. I conclude the experimental chamber is suitable for striped bass egg incubation.
International Nuclear Information System (INIS)
Toyoshima, Hideo; Ishibashi, Masayoshi; Senju, Syoji; Tanaka, Hideki; Aritomi, Takamichi; Watanabe, Kentaro; Yoshida, Minoru
1997-01-01
We examined the relationship between CT visual score and pulmonary function studies in patients with pulmonary emphysema. Lung volume was measured using both the helium dilution method and the body plethysmographic method. Although airflow obstruction and overinflation measured by the helium dilution method did not correlate with the CT visual score, CO diffusing capacity per alveolar volume (DLCO/VA) showed a significant negative correlation with the CT visual score (r=-0.49). These findings suggest that the CT visual score and DLCO/VA reflect pathologic change in pulmonary emphysema. Further, both the helium dilution method and the body plethysmographic method are required to evaluate lung volume in pulmonary emphysema because of its ventilatory unevenness. (author)
International Nuclear Information System (INIS)
Magdeleine, S.
2009-11-01
This work is part of a long-term project that aims at using two-phase Direct Numerical Simulation (DNS) to inform averaged models. For now, it is limited to isothermal bubbly flows with no phase change. It can be subdivided into two parts. Firstly, theoretical developments are made in order to build an equivalent of Large Eddy Simulation (LES) for two-phase flows, called Interfaces and Sub-grid Scales (ISS). After implementation of the ISS model in our code Trio U, a set of various cases is used to validate the model. Then, specific tests are made in order to optimize the model for our particular bubbly flows. We thus showed the capacity of the ISS model to produce a pertinent solution at low cost. Secondly, we use the ISS model to perform simulations of bubbly flows in a column. Results of these simulations are averaged to obtain the quantities that appear in the mass, momentum and interfacial area density balances. We then performed an a priori test of a complete one-dimensional averaged model. We showed that this model predicts the simplest flows (laminar and monodisperse) well. Moreover, the single-pressure hypothesis, which is often made in averaged models such as CATHARE, NEPTUNE and RELAP5, is satisfied in such flows. By contrast, without a polydisperse model the drag is over-predicted, and the uncorrelated interfacial area (A_i) flux needs a closure law. Finally, we showed that in turbulent flows, fluctuations of velocity and pressure in the liquid phase are not represented by the tested averaged model. (author)
Directory of Open Access Journals (Sweden)
Kiswanto Gandjar
2017-01-01
The increase in the volume removed by rough machining in the CBV area is one indicator of improved machining efficiency. Normally, this area is not subject to rough machining, so the volume of remaining material is large. With the addition of CC points and tool orientations for the CBV area on a complex surface, finishing will be faster because the volume of excess material left for that process is reduced. This paper presents a method for calculating the volume of the regions where machining cannot take place, particularly for rough machining of a complex object. By comparing the total volume of raw material with the volume of the machined area, the volume of residual material, on which machining cannot be done, can be determined. The total machined volume accounts for both the CBV and non-CBV areas. Delaunay triangulation is applied to the triangles covering the machined and CBV areas, and the volume is calculated using the divergence (Gauss) theorem, based on the direction of the normal vector of each triangle. This method can be used as an alternative for selecting the rough machining strategy with minimum non-machinable volume, so that effectiveness can be achieved in the machining process.
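The divergence-theorem volume computation described in this abstract can be sketched in a few lines: for a closed, consistently outward-oriented triangulated surface, the enclosed volume is one sixth of the sum of the scalar triple products over the faces. The tetrahedron mesh below is illustrative test data, not geometry from the paper.

```python
# Volume of a closed, outward-oriented triangulated surface via the
# divergence (Gauss) theorem: V = (1/6) * sum over faces of v0 . (v1 x v2).

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def mesh_volume(vertices, faces):
    """Signed volume enclosed by triangles given as vertex-index triples."""
    total = 0.0
    for i, j, k in faces:
        v0, v1, v2 = vertices[i], vertices[j], vertices[k]
        total += dot(v0, cross(v1, v2))
    return total / 6.0

# Unit right tetrahedron with outward-oriented faces; exact volume is 1/6.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(mesh_volume(verts, faces))  # 0.1666...
```

The sign of each face's contribution depends on its orientation, which is why the method only needs the normal direction of each triangle, as the abstract notes.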
Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods
Davis, A. D.
2015-12-01
The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI): here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity analysis to identify these parameters.
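The Bayesian workflow summarized above (posterior over parameters, then propagation through a predictive model to a QoI) can be illustrated on a toy scalar problem. Everything below is an assumption for illustration: the synthetic data, the Gaussian prior and likelihood, the random-walk Metropolis sampler, and the linear "predictive model" standing in for an ice sheet simulation.

```python
# Toy Bayesian inference + prediction: Metropolis-Hastings over one parameter,
# then pushing posterior samples through a hypothetical predictive model.
import math
import random

random.seed(0)
data = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2, 2.05, 1.95, 2.15, 1.85]  # synthetic obs
sigma, prior_sd = 1.0, 10.0

def log_post(theta):
    lp = -0.5 * (theta / prior_sd) ** 2                         # Gaussian prior
    lp += sum(-0.5 * ((y - theta) / sigma) ** 2 for y in data)  # likelihood
    return lp

def predictive(theta):
    # Hypothetical QoI model (stand-in for "future grounded ice volume").
    return 100.0 - 5.0 * theta

theta, samples = 0.0, []
for it in range(6000):
    prop = theta + random.gauss(0.0, 0.5)        # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                             # accept
    if it >= 1000:                               # discard burn-in
        samples.append(theta)

qoi = [predictive(t) for t in samples]
print(sum(qoi) / len(qoi))                       # posterior-predictive mean
```

For this conjugate setup the posterior mean of theta is close to the data mean (about 2.03), so the predictive mean lands near 100 - 5 x 2.03; real applications replace `predictive` with an expensive simulation, which motivates the surrogate and online-approximation machinery in the abstract.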
A novel finite volume discretization method for advection-diffusion systems on stretched meshes
Merrick, D. G.; Malan, A. G.; van Rooyen, J. A.
2018-06-01
This work is concerned with spatial advection and diffusion discretization technology within the field of Computational Fluid Dynamics (CFD). In this context, a novel method is proposed, which is dubbed the Enhanced Taylor Advection-Diffusion (ETAD) scheme. The model equation employed for the design of the scheme is the scalar advection-diffusion equation, the industrial application being incompressible laminar and turbulent flow. Developed to be implementable into finite volume codes, ETAD places specific emphasis on improving accuracy on stretched structured and unstructured meshes while considering both advection and diffusion aspects in a holistic manner. A vertex-centered structured and unstructured finite volume scheme is used, and only data available on either side of the volume face is employed. This includes the addition of a so-called mesh stretching metric. Additionally, non-linear blending with the existing NVSF scheme was performed in the interest of robustness and stability, particularly on equispaced meshes. The developed scheme is assessed in terms of accuracy, both analytically and numerically, via comparison to upwind methods including the popular QUICK and CUI techniques. Numerical tests involved the 1D scalar advection-diffusion equation, a 2D lid-driven cavity, and a turbulent flow case. Significant improvements in accuracy were achieved, with L2 error reductions of up to 75%.
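The flux-based finite volume context that schemes like ETAD improve upon can be sketched with a generic first-order scheme for the same model equation. This is not the ETAD scheme itself: it is a plain upwind-advection plus central-diffusion update on a periodic 1D mesh, with illustrative parameters, shown only to make the flux-form discretisation concrete.

```python
# One explicit finite-volume step for 1D advection-diffusion, flux form,
# periodic boundaries: first-order upwind advection + central diffusion.

def fv_step(u, c, d):
    """c = a*dt/dx (Courant number, a > 0), d = nu*dt/dx^2."""
    n = len(u)
    new = []
    for i in range(n):
        up, dn = u[(i - 1) % n], u[(i + 1) % n]
        adv = c * (u[i] - up)                # upwind flux difference
        dif = d * (dn - 2.0 * u[i] + up)     # central diffusion
        new.append(u[i] - adv + dif)
    return new

u = [0.0] * 50
u[10] = 1.0                                  # initial pulse
total0 = sum(u)
for _ in range(200):
    u = fv_step(u, c=0.4, d=0.1)
print(abs(sum(u) - total0))                  # flux form conserves total mass
```

Because every interior flux appears once with each sign, the total is conserved to round-off; higher-order schemes such as QUICK, CUI, or ETAD change how the face values entering those fluxes are reconstructed, not the conservative flux balance itself.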
An efficient data structure for three-dimensional vertex-based finite volume method
Akkurt, Semih; Sahin, Mehmet
2017-11-01
A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the given algorithm is similar to ones that exist in the literature. However, the data structures are redesigned and simplified in order to fit the requirements of the vertex-based finite volume method. In order to increase cache efficiency, the data access patterns of the vertex-based finite volume method are investigated, and the data are packed/allocated so that they are close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the present data structure also supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested on classical benchmark problems by comparing CPU time with open source algorithms.
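The core of an edge-based data structure of the kind described can be sketched briefly: unique edges are extracted from the cell/face connectivity, and the residual loop visits each edge once, scattering a per-edge flux to both endpoint vertices. The mesh (a single tetrahedron) and the "flux" below are illustrative placeholders, not the paper's implementation.

```python
# Minimal edge-based mesh structure: unique edges from polygonal face loops,
# plus a vertex-centred gather/scatter loop over edges.

def build_edges(faces):
    """Map each unique sorted vertex pair to an edge id."""
    edge_ids = {}
    for face in faces:
        n = len(face)
        for a in range(n):
            key = tuple(sorted((face[a], face[(a + 1) % n])))
            edge_ids.setdefault(key, len(edge_ids))
    return edge_ids

def edge_loop_residual(edge_ids, vertex_values):
    """Scatter a per-edge placeholder 'flux' to both endpoints, as in a
    vertex-centred FVM residual loop (each edge is visited exactly once)."""
    residual = [0.0] * len(vertex_values)
    for (i, j) in edge_ids:
        flux = vertex_values[j] - vertex_values[i]   # placeholder flux
        residual[i] += flux
        residual[j] -= flux
    return residual

# Faces of a single tetrahedron over vertices 0..3 (illustrative mesh).
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
edges = build_edges(faces)
res = edge_loop_residual(edges, [0.0, 1.0, 2.0, 3.0])
print(len(edges))   # 6 unique edges
```

Because edges are keyed by vertex pairs rather than by cell type, the same loop handles arbitrary polyhedra, which mirrors the generality claimed in the abstract; packing the per-edge arrays contiguously is what drives the cache-efficiency gains.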
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physical quantities. Various methods to evaluate average level spacings are reviewed. Because of finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacing by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution of reduced neutron widths. A method that tests both the distribution of level widths and that of level positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
International Nuclear Information System (INIS)
Malkov, Serghei; Wang, Jeff; Kerlikowske, Karla; Cummings, Steven R.; Shepherd, John A.
2009-01-01
Purpose: This study describes the design and characteristics of a highly accurate, precise, and automated single-energy method to quantify percent fibroglandular tissue volume (%FGV) and fibroglandular tissue volume (FGV) using digital screening mammography. Methods: The method uses a breast tissue-equivalent phantom in the unused portion of the mammogram as a reference to estimate breast composition. The phantom is used to calculate breast thickness and composition for each image regardless of x-ray technique or the presence of paddle tilt. The phantom adheres to the top of the mammographic compression paddle and stays in place for both craniocaudal and mediolateral oblique screening views. We describe the automated method to identify the phantom and paddle orientation with a three-dimensional reconstruction least-squares technique. A series of test phantoms, with a breast thickness range of 0.5-8 cm and a %FGV of 0%-100%, were made to test the accuracy and precision of the technique. Results: Using test phantoms, the estimated repeatability standard deviation equaled 2%, with a ±2% accuracy over the entire thickness and density ranges. Without correction, paddle tilt was found to create large errors in the measured density values, of up to 7% per millimetre of deviation from the actual breast thickness. The new density measurement is stable over time, with no significant drifts in calibration noted during a four-month period. Comparisons of %FGV to mammographic percent density and of left to right breast %FGV were highly correlated (r=0.83 and 0.94, respectively). Conclusions: An automated method for quantifying fibroglandular tissue volume has been developed. It exhibited good accuracy and precision for a broad range of breast thicknesses, paddle tilt angles, and %FGV values. Clinical testing showed high correlation to mammographic density and between left and right breasts.
Monte Carlo method for critical systems in infinite volume: The planar Ising model.
Herdeiro, Victor; Doyon, Benjamin
2016-10-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It exploits scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane predictions. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
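The finite-lattice baseline that the "holographic" boundary construction improves upon is the standard Metropolis simulation of the planar Ising model. A minimal sketch, with an illustrative lattice size and temperature (both assumptions, not the paper's settings):

```python
# Metropolis single-spin-flip sampling of the 2D Ising model on a periodic
# N x N lattice, H = -sum over nearest-neighbour bonds of s_i * s_j.
import math
import random

random.seed(1)
N, beta = 16, 0.3                       # lattice size, inverse temperature
spins = [[1] * N for _ in range(N)]     # start from the all-up configuration

def local_field(s, i, j):
    return (s[(i + 1) % N][j] + s[(i - 1) % N][j]
            + s[i][(j + 1) % N] + s[i][(j - 1) % N])

def energy(s):
    # Count each bond once via right and down neighbours.
    e = 0
    for i in range(N):
        for j in range(N):
            e -= s[i][j] * (s[(i + 1) % N][j] + s[i][(j + 1) % N])
    return e

assert energy(spins) == -2 * N * N      # ground-state check: 2N^2 bonds

for _ in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    dE = 2 * spins[i][j] * local_field(spins, i, j)   # cost of flipping
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        spins[i][j] = -spins[i][j]
```

On such a finite periodic lattice, correlators deviate from their infinite-plane values at distances comparable to N; that boundary contamination is precisely what the proposed method is designed to remove.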
Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru
2016-10-11
An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of volumes measured using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions of interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image - mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated as the percentage change over the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that the repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
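The repeatability measure defined above, the percentage change of each repeated measurement relative to the subject's own mean, reduces to a one-line formula. The volumes below are hypothetical, not data from the study:

```python
# Percentage change of repeated measurements relative to the subject mean:
# 100 * (measured volume - mean volume) / (mean volume).

def percentage_changes(volumes):
    """volumes: repeated measurements (e.g. ml) of one ROI in one subject."""
    mean = sum(volumes) / len(volumes)
    return [100.0 * (v - mean) / mean for v in volumes]

# Two same-day scans of a hypothetical gray-matter volume:
pc = percentage_changes([650.0, 655.0])
print(pc)   # symmetric about zero by construction for two measurements
```

With exactly two measurements per subject the two percentage changes are equal and opposite, so the magnitude of either one directly quantifies the scan-rescan repeatability reported per ROI.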
Mösch, D; Mösch, H P; Kaiser, G
1983-05-01
Two useful methods for the exact volumetric measurement of the neurocranium are introduced. Using these methods in a transversal study, the mean ratio and range for normal boys and girls from birth to six months are defined. Longitudinal studies and comparison with the literature confirm the accuracy of these standard values. Application to pathological cases shows that hypogenesis of the neurocranium can be recognized earlier and more accurately by measuring the volume than by measuring the circumference of the head. Both methods are easy to apply and feasible in every child. However, method B (dipping the neurocranium in water and measuring the displaced water on a scale) has proved to be more accurate, simpler and faster.
Precision of a new bedside method for estimation of the circulating blood volume
DEFF Research Database (Denmark)
Christensen, P; Eriksen, B; Henneberg, S W
1993-01-01
The present study is a theoretical and experimental evaluation of a modification of the carbon monoxide method for estimation of the circulating blood volume (CBV) with respect to the precision of the method. The CBV was determined from measurements of the CO saturation of hemoglobin before … ventilation with the CO gas mixture. The amount of CO administered during each determination of CBV resulted in an increase in the CO saturation of hemoglobin of 2.1%-3.9%. A theoretical noise propagation analysis was performed by means of the Monte Carlo method. The analysis showed that a CO dose … patients. The coefficients of variation were 6.2% and 4.7% in healthy and diseased subjects, respectively. Furthermore, the day-to-day variation of the method with respect to the total amount of circulating hemoglobin (nHb) and CBV was determined from duplicate estimates separated by 24-48 h. In conclusion …
Numerical simulation of bubble deformation in magnetic fluids by finite volume method
International Nuclear Information System (INIS)
Yamasaki, Haruhiko; Yamaguchi, Hiroshi
2017-01-01
Bubble deformation in magnetic fluids under a magnetic field is investigated numerically by an interface capturing method. The numerical method consists of a coupled level-set and VOF (Volume of Fluid) method, combined with a conservative CIP (Constrained Interpolation Profile) method with a self-correcting procedure. In the present study, considering the actual physical properties of the magnetic fluid, bubble deformation under a given uniform magnetic field is analyzed for the internal magnetic field passing through the gas-liquid interface of the magnetic fluid. The numerical results explain the mechanism of bubble deformation in the presence of the applied magnetic field. - Highlights: • A magnetic field analysis is developed to simulate bubble dynamics in a magnetic fluid with a two-phase interface. • Bubble elongation increases with increasing magnetic flux intensity due to the strong magnetic normal force. • The proposed technique explains the bubble dynamics, taking into account the continuity of the magnetic flux density.
Arima, Nobuyuki; Nishimura, Reiki; Osako, Tomofumi; Nishiyama, Yasuyuki; Fujisue, Mamiko; Okumura, Yasuhiro; Nakano, Masahiro; Tashima, Rumiko; Toyozumi, Yasuo
2016-01-01
In this case-control study, we investigated the most suitable cell counting area and the optimal cutoff point of the Ki-67 index. Thirty recurrent cases were selected among hormone receptor (HR)-positive/HER2-negative breast cancer patients. As controls, 90 nonrecurrent cases were randomly selected by allotting 3 controls to each recurrent case based on the following criteria: age, nodal status, tumor size, and adjuvant endocrine therapy alone. Both the hot spot and the average area of the tumor were evaluated on a Ki-67 immunostaining slide. The median Ki-67 index values at the hot spot and in the average area were 25.0 and 14.5%, respectively. Irrespective of the area counted, the Ki-67 index value was significantly higher in the recurrent cases. A cutoff of 20% at the hot spot was the most suitable for predicting recurrence. Moreover, a higher ΔKi-67 index value (the difference between the hot spot and the average area, ≥10%) and lower progesterone receptor expression at the hot spot strongly correlated with recurrence. In conclusion, the Ki-67 index at the hot spot strongly correlated with recurrence, and the optimal cutoff point was found to be 20%. © 2015 S. Karger AG, Basel.
Evaluation of bias-correction methods for ensemble streamflow volume forecasts
Directory of Open Access Journals (Sweden)
T. Hashino
2007-01-01
Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived from simulated and observed flows in a historical simulation. The quality of probabilistic forecasts issued with the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.
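The general shape of such a correction, a transformation fitted on a historical simulation and then applied to each ensemble trace, can be sketched with linear scaling, the simplest member of this family. This is a generic stand-in chosen for brevity; it is not necessarily one of the three methods the study evaluates, and all flow values are hypothetical.

```python
# Linear-scaling bias correction: a multiplicative factor derived from the
# historical record is applied to every member of the forecast ensemble.

def linear_scaling_factor(observed_hist, simulated_hist):
    """Ratio of observed to simulated mean flow over the historical period."""
    return (sum(observed_hist) / len(observed_hist)) / (
        sum(simulated_hist) / len(simulated_hist))

def correct_ensemble(ensemble, factor):
    """Apply the correction to each trace (list of monthly volumes)."""
    return [[factor * v for v in trace] for trace in ensemble]

# Hypothetical monthly volumes: the model runs ~25% too wet on average.
obs_hist = [80.0, 120.0, 100.0, 60.0]
sim_hist = [100.0, 150.0, 125.0, 75.0]
f = linear_scaling_factor(obs_hist, sim_hist)    # ~0.8 here
corrected = correct_ensemble([[110.0, 90.0], [130.0, 70.0]], f)
print(corrected)
```

Quantile-mapping variants replace the single factor with a mapping between the simulated and observed flow distributions, which also corrects conditional biases rather than just the mean.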
Simulation of Jetting in Injection Molding Using a Finite Volume Method
Directory of Open Access Journals (Sweden)
Shaozhen Hua
2016-05-01
In order to predict jetting and the subsequent buckling flow more accurately, a three-dimensional melt flow model was established for a viscous, incompressible, and non-isothermal fluid, and a control-volume-based finite volume method was employed to discretize the governing equations. A two-fold iterative method was proposed to decouple the dependence among pressure, velocity, and temperature, so as to reduce computation and improve numerical stability. Based on the proposed theoretical model and numerical method, a program code was developed to simulate melt front progress and flow fields. Numerical simulations for different injection speeds, melt temperatures, and gate locations were carried out to explore the jetting mechanism. The results indicate that the filling pattern depends on the competition between inertial and viscous forces. When the inertial force exceeds the viscous force, jetting occurs; the flow then changes to a buckling flow as the viscous force overcomes the inertial force. Once the melt contacts the mold wall, filling switches to the conventional sequential filling mode. Numerical results also indicate that jetting length increases with injection speed but changes little with melt temperature. The reasonable agreement between simulated and experimental jetting length and buckling frequency implies that the proposed method is valid for jetting simulation.
Brachytherapy dose-volume histogram computations using optimized stratified sampling methods
International Nuclear Information System (INIS)
Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.
2002-01-01
A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used in anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. From the DVHs, quantities such as the conformity index COIN and COIN integrals are derived. This is achieved by using partially uniform distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. The application of this method requires only a single preprocessing step taking a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points.
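The key idea, different sampling densities per region with weights chosen so the histogram stays unbiased, can be sketched briefly. The regions, densities, and the toy dose field below are illustrative assumptions, not the clinical implementation:

```python
# Stratified sampling for a cumulative DVH: each region gets its own point
# density; each point carries a weight equal to the volume it represents.
import random

random.seed(2)

# (region volume in cm^3, number of sample points) -- denser near the implant,
# where dose gradients are steep.
regions = [(50.0, 500), (450.0, 200)]

def sample_dvh(regions, dose_at):
    points = []   # (dose, volume-weight) pairs
    for ridx, (vol, n) in enumerate(regions):
        w = vol / n                      # volume represented by each point
        for _ in range(n):
            points.append((dose_at(ridx, random.random()), w))
    return points

def cumulative_dvh(points, threshold):
    """Volume receiving at least `threshold` dose."""
    return sum(w for d, w in points if d >= threshold)

dose = lambda ridx, u: (10.0 if ridx == 0 else 2.0) * u   # toy dose field
pts = sample_dvh(regions, dose)
print(cumulative_dvh(pts, 0.0))   # ~500.0, the total sampled volume
```

Because the weights sum exactly to the total volume, the cumulative DVH is unbiased regardless of how unevenly the points are distributed; the saving comes from concentrating points where the dose varies most.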
Evaluation of two-phase flow solvers using Level Set and Volume of Fluid methods
Bilger, C.; Aboukhedr, M.; Vogiatzaki, K.; Cant, R. S.
2017-09-01
Two principal methods have been used to simulate the evolution of two-phase immiscible flows of liquid and gas separated by an interface. These are the Level-Set (LS) method and the Volume of Fluid (VoF) method. Both methods attempt to represent the very sharp interface between the phases and to deal with the large jumps in physical properties associated with it. Both methods have their own strengths and weaknesses. For example, the VoF method is known to be prone to excessive numerical diffusion, while the basic LS method has some difficulty in conserving mass. Major progress has been made in remedying these deficiencies, and both methods have now reached a high level of physical accuracy. Nevertheless, there remains an issue, in that each of these methods has been developed by different research groups, using different codes and most importantly the implementations have been fine tuned to tackle different applications. Thus, it remains unclear what are the remaining advantages and drawbacks of each method relative to the other, and what might be the optimal way to unify them. In this paper, we address this gap by performing a direct comparison of two current state-of-the-art variations of these methods (LS: RCLSFoam and VoF: interPore) and implemented in the same code (OpenFoam). We subject both methods to a pair of benchmark test cases while using the same numerical meshes to examine a) the accuracy of curvature representation, b) the effect of tuning parameters, c) the ability to minimise spurious velocities and d) the ability to tackle fluids with very different densities. For each method, one of the test cases is chosen to be fairly benign while the other test case is expected to present a greater challenge. The results indicate that both methods can be made to work well on both test cases, while displaying different sensitivity to the relevant parameters.
D.M.K.S. Kaulesar Sukul (D. M K S); P.Th. den Hoed (Pieter); T. Johannes (Tanja); R. van Dolder (R.); E. Benda (Eric)
1993-01-01
Volume changes can be measured either directly by water-displacement volumetry or by various indirect methods in which the volume is calculated from circumference measurements. The aim of the present study was to determine the most appropriate indirect method for lower leg volume measurement.
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Role of positive definite matrices: • Diffusion tensor imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine learning: n × n pd matrices occur as kernel matrices.
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity energy storage. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. Using averaging theory, we derive the averaged Hamiltonian equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.
International Nuclear Information System (INIS)
Odenthal, H.J.
1982-01-01
Quantitative videodensitometry was studied with a view to its possible application as a new, non-invasive method of measuring cardiac stroke volume. To begin with, the accuracy of roentgen volumetric measurements was determined. After this, blood volume variations were measured by densitometry in five animal experiments. The findings were compared with the volumes measured by a flowmeter in the pulmonary artery. The total stroke volume was found to be proportional to the difference between the maximum and mean densitometric volume. A comparison between videodensitometry and other non-invasive methods showed that, in a stable circulatory system, the results of videodensitometry are equally reliable as, or even more reliable than, those of the conventional methods. (orig./MG)
Davis, A. D.; Heimbach, P.; Marzouk, Y.
2017-12-01
We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice
Analysis of the neutron flux in an annular pulsed reactor by using finite volume method
Energy Technology Data Exchange (ETDEWEB)
Silva, Mário A.B. da; Narain, Rajendra; Bezerra, Jair de L., E-mail: mabs500@gmail.com, E-mail: narain@ufpe.br, E-mail: jairbezerra@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Centro de Tecnologia e Geociências. Departamento de Energia Nuclear
2017-07-01
Production of very intense neutron sources is important for basic nuclear physics and for material testing and isotope production. Nuclear reactors have been used as sources of intense neutron fluxes, although the achievement of such levels is limited by the inability to remove fission heat. Periodic pulsed reactors provide very intense fluxes by a rotating modulator near a subcritical core. A concept for the production of very intense neutron fluxes that combines features of periodic pulsed reactors and steady state reactors was proposed by Narain (1997). Such a concept is known as the Very Intense Continuous High Flux Pulsed Reactor (VICHFPR) and was analyzed by using the diffusion equation with moving boundary conditions and the Finite Difference Method with Crank-Nicolson formalism. This research aims to analyze the flux distribution in the Very Intense Continuous High Flux Pulsed Reactor (VICHFPR) by using the Finite Volume Method and to compare its results with those obtained by the previous computational method. (author)
Beutler, Gerhard
2005-01-01
G. Beutler's Methods of Celestial Mechanics is a coherent textbook for students as well as an excellent reference for practitioners. Volume II is devoted to the applications and to the presentation of the program system CelestialMechanics. Three major areas of applications are covered: (1) Orbital and rotational motion of extended celestial bodies. The properties of the Earth-Moon system are developed from the simplest case (rigid bodies) to more general cases, including the rotation of an elastic Earth, the rotation of an Earth partly covered by oceans and surrounded by an atmosphere, and the rotation of an Earth composed of a liquid core and a rigid shell (Poincaré model). (2) Artificial Earth Satellites. The oblateness perturbation acting on a satellite and the exploitation of its properties in practice is discussed using simulation methods (CelestialMechanics) and (simplified) first order perturbation methods. The perturbations due to the higher-order terms of the Earth's gravitational potential and reso...
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine-scale) flow and transport models with lower-resolution (coarse) models to locally refine both spatial resolution and transport models. The fine-scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), thanks to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques and to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Analysis of the neutron flux in an annular pulsed reactor by using finite volume method
International Nuclear Information System (INIS)
Silva, Mário A.B. da; Narain, Rajendra; Bezerra, Jair de L.
2017-01-01
Production of very intense neutron sources is important for basic nuclear physics and for material testing and isotope production. Nuclear reactors have been used as sources of intense neutron fluxes, although the achievement of such levels is limited by the inability to remove fission heat. Periodic pulsed reactors provide very intense fluxes by a rotating modulator near a subcritical core. A concept for the production of very intense neutron fluxes that combines features of periodic pulsed reactors and steady state reactors was proposed by Narain (1997). Such a concept is known as the Very Intense Continuous High Flux Pulsed Reactor (VICHFPR) and was analyzed by using the diffusion equation with moving boundary conditions and the Finite Difference Method with Crank-Nicolson formalism. This research aims to analyze the flux distribution in the Very Intense Continuous High Flux Pulsed Reactor (VICHFPR) by using the Finite Volume Method and to compare its results with those obtained by the previous computational method. (author)
Scintigraphic method for evaluating reductions in local blood volumes in human extremities
DEFF Research Database (Denmark)
Blønd, L; Madsen, Jan Lysgård
2000-01-01
were carried out. No significant differences between results obtained by the use of one or two scintigraphic projections were found. The between-subject coefficient of variation was 14% in the lower limb experiment and 11% in the upper limb experiment. The within-subject coefficient of variation was 6% in the lower limb experiment and 6% in the upper limb experiment. We found a significant relation (r = 0.42, p = 0.018) between the results obtained by the scintigraphic method and the plethysmographic method. In fractions, a mean reduction in blood volume of 0.49 ± 0.14 (2 SD) was found after 1 min of elevation of the lower limb and a mean reduction of 0.45 ± 0.10 (2 SD) after half a minute of elevation of the upper limb. We conclude that the method is precise and can be used in investigating physiologic and pathophysiologic mechanisms in relation to blood volumes of limbs not subject to research previously.
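The between- and within-subject figures quoted above are coefficients of variation (sample standard deviation as a percentage of the mean). A minimal sketch of that statistic, using hypothetical blood-volume fractions rather than the study's data:

```python
import statistics as st

def cv_percent(values):
    """Coefficient of variation: sample standard deviation expressed
    as a percentage of the mean."""
    return 100.0 * st.stdev(values) / st.mean(values)

# hypothetical blood-volume reduction fractions from five subjects
between_subject = [0.49, 0.41, 0.56, 0.45, 0.60]
print(round(cv_percent(between_subject), 1))
```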
Method for volume reduction and encapsulation of water-bearing, low-level radioactive wastes
International Nuclear Information System (INIS)
1982-01-01
The invention relates to the processing of water-bearing wastes, especially those containing radioactive materials from nuclear power plants such as light-water moderated and cooled reactors. The invention provides a method to reduce the volume of wastes such as contaminated coolants and to dispose of them safely. According to the invention, azeotropic drying is applied to remove the water. Distillation temperatures are chosen to be lower than the lowest boiling point of the mixture components. In the preferred version, a polymerizing monomer is used to obtain the azeotropic mixture. In doing so, encapsulation is possible through combination with a co-reactive polymer that envelopes the waste material. (G.J.P.)
DEFF Research Database (Denmark)
Kim, Oleksiy S.; Jørgensen, Erik; Meincke, Peter
2004-01-01
An efficient higher-order method of moments (MoM) solution of volume integral equations is presented. The higher-order MoM solution is based on higher-order hierarchical Legendre basis functions and higher-order geometry modeling. An unstructured mesh composed of 8-node trilinear and/or curved 27… of magnitude in comparison to existing higher-order hierarchical basis functions. Consequently, an iterative solver can be applied even for high expansion orders. Numerical results demonstrate excellent agreement with the analytical Mie series solution for a dielectric sphere as well as with results obtained…
A finite volume method for density driven flows in porous media
Directory of Open Access Journals (Sweden)
Hilhorst Danielle
2013-01-01
Full Text Available In this paper, we apply a semi-implicit finite volume method for the numerical simulation of density driven flows in porous media; this amounts to solving a nonlinear convection-diffusion parabolic equation for the concentration coupled with an elliptic equation for the pressure. We compute the solutions for two specific problems: a problem involving a rotating interface between salt and fresh water and the classical but difficult Henry’s problem. All solutions are compared to results obtained by running FEflow, a commercial software package for the simulation of groundwater flow, mass and heat transfer in porous media.
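A semi-implicit finite volume scheme of the kind described, explicit upwind convection combined with implicit diffusion, can be sketched in one dimension. The grid, time step and coefficients below are illustrative, not taken from the paper; periodic boundaries are used for brevity, and the check at the end confirms that the scheme conserves total mass.

```python
import numpy as np

# 1-D model problem c_t + (u c)_x = D c_xx with a salt/fresh interface;
# all parameters are illustrative.
n, dx, dt, u, D = 50, 1.0 / 50, 1e-3, 1.0, 0.01
c = np.where(np.linspace(0, 1, n) < 0.3, 1.0, 0.0)

# implicit (backward Euler) diffusion operator, periodic Laplacian
a = D * dt / dx**2
A = np.eye(n) * (1 + 2 * a)
for i in range(n):
    A[i, (i - 1) % n] -= a
    A[i, (i + 1) % n] -= a

for _ in range(100):
    # explicit upwind convection: face flux takes the upstream cell (u > 0)
    c_star = c - dt / dx * (u * c - u * np.roll(c, 1))
    # implicit diffusion step
    c = np.linalg.solve(A, c_star)

print(round(c.sum() * dx, 6))   # total mass, conserved by the scheme
```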
Directory of Open Access Journals (Sweden)
Fan Yuxin
2014-12-01
Full Text Available A fluid–structure interaction method combining a nonlinear finite element algorithm with a preconditioning finite volume method is proposed in this paper to simulate parachute transient dynamics. This method uses a three-dimensional membrane–cable fabric model to represent a parachute system in a highly folded configuration. The large shape change during parachute inflation is computed by the nonlinear Newton–Raphson iteration, and the linear system equation is solved by the generalized minimal residual (GMRES) method. A membrane wrinkling algorithm is also utilized to evaluate the special uniaxial tension state of membrane elements on the parachute canopy. To avoid large time expense during the structural nonlinear iteration, the implicit Hilber–Hughes–Taylor (HHT) time integration method is employed. For the fluid dynamic simulations, the Roe and HLLC (Harten–Lax–van Leer contact) schemes have been modified and extended to compute flow problems at all speeds. The lower–upper symmetric Gauss–Seidel (LU-SGS) approximate factorization is applied to accelerate the numerical convergence speed. Finally, the test model of a highly folded C-9 parachute is simulated at a prescribed speed, and the results show similar characteristics compared with experimental results and previous literature.
International Nuclear Information System (INIS)
Śpiewak, Mateusz; Małek, Łukasz A.; Petryka, Joanna; Mazurkiewicz, Łukasz; Miłosz, Barbara; Biernacka, Elżbieta K.; Kowalski, Mirosław; Hoffman, Piotr; Demkow, Marcin; Miśko, Jolanta; Rużyłło, Witold
2012-01-01
Background: Previous studies have advocated quantifying pulmonary regurgitation (PR) by PR volume (PRV) instead of the commonly used PR fraction (PRF). However, physicians are not familiar with the use of PRV in clinical practice. The ratio of right ventricle (RV) volume to left ventricle volume (RV/LV) may better reflect the impact of PR on the heart than RV end-diastolic volume (RVEDV) alone. We aimed to compare the impact of PRV and PRF on RV size expressed as either the RV/LV ratio or RVEDV (mL/m²). Methods: Consecutive patients with repaired tetralogy of Fallot were included (n = 53). PRV, PRF and ventricular volumes were measured with the use of cardiac magnetic resonance. Results: RVEDV was more closely correlated with PRV than with PRF (r = 0.686, p < 0.001). PRV and PRF performed equally well against the RV/LV ratio-based criterion (>2.0) [area under the curve (AUC) PRV = 0.770 vs AUC PRF = 0.777, p = 0.86]. Conversely, with the use of the RVEDV-based criterion (>170 mL/m²), PRV proved to be superior to PRF [AUC PRV = 0.770 vs AUC PRF = 0.656, p = 0.0028]. Conclusions: PRV and PRF have similar significance as measures of PR when the RV/LV ratio is used instead of RVEDV. The RV/LV ratio is a universal marker of RV dilatation, independent of the method of PR quantification applied (PRF vs PRV)
Energy Technology Data Exchange (ETDEWEB)
Mueller, Kathryn S. [The Ohio State University College of Medicine, Columbus, OH (United States); Long, Frederick R. [Nationwide Children's Hospital, The Children's Radiological Institute, Columbus, OH (United States); Flucke, Robert L. [Nationwide Children's Hospital, Department of Pulmonary Medicine, Columbus, OH (United States); Castile, Robert G. [The Research Institute at Nationwide Children's Hospital, Center for Perinatal Research, Columbus, OH (United States)
2010-10-15
Lung inflation and respiratory motion during chest CT affect diagnostic accuracy and reproducibility. To describe a simple volume-monitored (VM) method for performing reproducible, motion-free full inspiratory and end expiratory chest CT examinations in children. Fifty-two children with cystic fibrosis (mean age 8.8 ± 2.2 years) underwent pulmonary function tests and inspiratory and expiratory VM-CT scans (1.25-mm slices, 80-120 kVp, 16-40 mAs) according to an IRB-approved protocol. The VM-CT technique utilizes instruction from a respiratory therapist, a portable spirometer and real-time documentation of lung volume on a computer. CT image quality was evaluated for achievement of targeted lung-volume levels and for respiratory motion. Children achieved 95% of vital capacity during full inspiratory imaging. For end expiratory scans, 92% were at or below the child's end expiratory level. Two expiratory exams were judged to be at suboptimal volumes. Two inspiratory (4%) and three expiratory (6%) exams showed respiratory motion. Overall, 94% of scans were performed at optimal volumes without respiratory motion. The VM-CT technique is a simple, feasible method in children as young as 4 years to achieve reproducible high-quality full inspiratory and end expiratory lung CT images. (orig.)
Evaluation of left ventricular volume by MRI using modified Simpson's rule method
International Nuclear Information System (INIS)
Okamura, Masahiro; Kondo, Takeshi; Anno, Naoko
1990-01-01
Conventional contrast left ventriculography (LVG) has been the gold standard for estimating left ventricular volume (LVV), but it is an invasive technique, volume overload is caused by the contrast medium, and the true left ventricular (LV) long axis may not be obtained by LVG in the routine right anterior oblique (RAO) projection. MRI, on the other hand, is noninvasive, does not require contrast medium, and permits the true LV long-axis sections to be obtained. Thus, MRI seems the ideal technique for estimating LVV. To estimate LVV, we have developed on-line programs for calculating LVV by the single-plane (SMS) or biplane modified Simpson's rule method (BMS), and have applied these programs to water in a bottle with an elliptic short-axis plane, to normal volunteers, and to patients with various heart diseases. In the water phantom, the volume calculated by the BMS was more accurate than that by the SMS. In nine normal volunteers, multiple LV short-axis sections at end-systole and end-diastole were obtained by ECG-gated spin-echo MRI, and LVV as the standard was calculated by the true Simpson's rule method (TS) on these images. Both vertical and horizontal LV long-axis sections were also obtained by ECG-gated field-echo (FE) rephasing cine MRI, and LVV was calculated by the BMS or SMS on these images. The BMS and SMS correlated significantly (r = 0.974, r = 0.927, 0.947) with TS for estimating LVV. In 20 patients with various heart diseases, both vertical and horizontal LV long-axis sections were obtained by FE cine MRI. LVV (r = 0.907 and r = 0.901) and EF (r = 0.822 and r = 0.938) calculated by the SMS on the vertical or horizontal LV long-axis sections correlated significantly with conventional RAO-LVG. In conclusion, MRI using our on-line programs would be clinically useful for estimating LVV and EF. (author)
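The biplane modified Simpson's rule sums the volumes of elliptic disks whose orthogonal diameters are taken from the vertical and horizontal long-axis images. A minimal sketch of that calculation; the diameters below are hypothetical, since the paper's on-line programs are not reproduced here:

```python
import math

def biplane_simpson_volume(d_vertical, d_horizontal, length):
    """Biplane modified Simpson's rule: the ventricle is sliced into n
    disks along the long axis; each disk is treated as an ellipse whose
    two diameters come from the vertical and horizontal long-axis images.
    Diameters and length in cm, volume in ml."""
    n = len(d_vertical)
    h = length / n                                # disk thickness
    return sum(math.pi / 4.0 * a * b * h
               for a, b in zip(d_vertical, d_horizontal))

# hypothetical end-diastolic diameters (cm) over 4 disks of an 8-cm ventricle
v = biplane_simpson_volume([4.0, 4.2, 3.6, 2.0], [4.1, 4.0, 3.5, 2.2], 8.0)
print(round(v, 1))
```

In practice many more disks (typically 20) are used; four are shown only to keep the example readable.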
Energy Technology Data Exchange (ETDEWEB)
Majander, E.O.J.; Manninen, M.T. [VTT Energy, Espoo (Finland)
1996-12-31
The flow induced by a pitched blade turbine was simulated using the sliding mesh technique. The detailed geometry of the turbine was modelled in a computational mesh rotating with the turbine and the geometry of the reactor including baffles was modelled in a stationary co-ordinate system. Effects of grid density were investigated. Turbulence was modelled by using the standard k-{epsilon} model. Results were compared to experimental observations. Velocity components were found to be in good agreement with the measured values throughout the tank. Averaged source terms were calculated from the sliding mesh simulations in order to investigate the reliability of the source term approach. The flow field in the tank was then simulated in a simple grid using these source terms. Agreement with the results of the sliding mesh simulations was good. Commercial CFD-code FLUENT was used in all simulations. (author)
Energy Technology Data Exchange (ETDEWEB)
Majander, E.O.J.; Manninen, M.T. [VTT Energy, Espoo (Finland)
1997-12-31
The flow induced by a pitched blade turbine was simulated using the sliding mesh technique. The detailed geometry of the turbine was modelled in a computational mesh rotating with the turbine and the geometry of the reactor including baffles was modelled in a stationary co-ordinate system. Effects of grid density were investigated. Turbulence was modelled by using the standard k-{epsilon} model. Results were compared to experimental observations. Velocity components were found to be in good agreement with the measured values throughout the tank. Averaged source terms were calculated from the sliding mesh simulations in order to investigate the reliability of the source term approach. The flow field in the tank was then simulated in a simple grid using these source terms. Agreement with the results of the sliding mesh simulations was good. Commercial CFD-code FLUENT was used in all simulations. (author)
Energy Technology Data Exchange (ETDEWEB)
Huang, Renfang; Luo, Xianwu [Tsinghua University, Beijing (China); Ji, Bin [Wuhan University, Hubei (China)
2017-06-15
This paper presents the implementation and assessment of a modified Partially-Averaged Navier-Stokes (PANS) turbulence model which can successfully predict transient cavitating turbulent flows. The proposed model treats the standard k-{epsilon} model as the parent model, and its main distinctive features are to (1) formulate the unresolved-to-total kinetic energy ratio (f{sub k}) based on the local grid size as well as the turbulence length scale, and (2) vary the f{sub k}-field both in space and time. Numerical simulation used the modified PANS model for the sheet/cloud cavitating flows around a three-dimensional Clark-Y hydrofoil. The available experimental data and calculations of the standard k-{epsilon} model, the f{sub k} = 0.8 PANS model and the f{sub k} = 0.5 PANS model are also provided for comparison. The results show that the modified PANS model accurately captures the transient cavitation features observed in experiments, namely, the attached sheet cavity grows in the flow direction until it reaches a maximum length and then breaks into a highly turbulent cloud cavity with inherently three-dimensional structures. Time-averaged drag/lift coefficients together with the streamwise velocity profiles predicted by the proposed model are in good agreement with the experimental data, and improvements are shown when compared with the results of the standard k-{epsilon} model, the f{sub k} = 0.8 PANS model and the f{sub k} = 0.5 PANS model. Overall, the modified PANS model shows an encouraging capability of predicting transient cavitating turbulent flows.
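A local unresolved-to-total kinetic energy ratio can be formed from the grid size and the turbulence length scale. The sketch below uses a grid-based estimate commonly cited in the PANS literature; the exact expression of the modified model above may differ, so treat the formula and constants as assumptions:

```python
import numpy as np

C_MU = 0.09  # standard k-epsilon model constant

def f_k(delta, k, eps):
    """Local unresolved-to-total kinetic energy ratio, using the common
    grid-based estimate f_k = min(1, (1/sqrt(C_mu)) * (Delta/Lambda)^(2/3)),
    with Lambda = k^1.5 / eps the turbulence length scale. This is a
    generic PANS closure from the literature, not necessarily the exact
    formulation of the paper above."""
    lam = k**1.5 / eps
    return np.minimum(1.0, C_MU**-0.5 * (delta / lam) ** (2.0 / 3.0))

# coarse grid relative to the turbulence scale -> f_k saturates at 1 (RANS);
# fine grid -> small f_k, i.e. more turbulent scales are resolved
print(float(f_k(0.5, 1.0, 1.0)), round(float(f_k(0.001, 1.0, 1.0)), 3))
```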
High order spectral volume and spectral difference methods on unstructured grids
Kannan, Ravishekar
The spectral volume (SV) and the spectral difference (SD) methods were developed by Wang, Liu and their collaborators for conservation laws on unstructured grids. They were introduced to achieve high-order accuracy in an efficient manner. Recently, these methods were extended to three-dimensional systems and to the Navier-Stokes equations. The simplicity and robustness of these methods have made them competitive against other high-order methods such as the discontinuous Galerkin and residual distribution methods. Although explicit TVD Runge-Kutta schemes for temporal advancement are easy to implement, they suffer from the small time step imposed by the Courant-Friedrichs-Lewy (CFL) condition. When the polynomial order is high, or when the grid is stretched due to complex geometries or boundary layers, the convergence rate of explicit schemes slows down rapidly. Solution strategies to remedy this problem include implicit methods and multigrid methods. A novel implicit lower-upper symmetric Gauss-Seidel (LU-SGS) relaxation method is employed as an iterative smoother and is compared to the explicit TVD Runge-Kutta smoothers. For some p-multigrid calculations, combining implicit and explicit smoothers for different p-levels is also studied. The multigrid method considered is nonlinear and uses the Full Approximation Scheme (FAS). An overall speed-up factor of up to 150 is obtained using a three-level p-multigrid LU-SGS approach in comparison with the single-level explicit method for the Euler equations for the 3rd-order SD method. A study of viscous flux formulations was carried out for the SV method. Three formulations were used to discretize the viscous fluxes: local discontinuous Galerkin (LDG), a penalty method and the 2nd method of Bassi and Rebay. Fourier analysis revealed some interesting advantages for the penalty method. These were implemented in the Navier-Stokes solver. An implicit and p-multigrid method was also implemented for the above. An overall speed
Teaching Thermal Hydraulics and Numerical Methods: An Introductory Control Volume Primer
International Nuclear Information System (INIS)
D. S. Lucas
2004-01-01
A graduate level course for Thermal Hydraulics (T/H) was taught through Idaho State University in the spring of 2004. A numerical approach was taken for the content of this course, since the students were employed at the Idaho National Laboratory and had been users of T/H codes. The majority of the students had expressed an interest in learning about the Courant limit, mass error, and semi-implicit and implicit numerical integration schemes in the context of a computer code. Since no introductory text was found, the author developed notes from his own research and from courses taught for Westinghouse on the subject. The course started with a primer on control volume methods and the construction of a Homogeneous Equilibrium Model (HEM) T/H code. The primer was valuable for giving the students the basics behind such codes and their evolution to more complex codes for Thermal Hydraulics and Computational Fluid Dynamics (CFD). The course covered additional material, including the Finite Element Method and non-equilibrium T/H. The control volume primer and the construction of a three-equation (mass, momentum and energy) HEM code are the subject of this paper. The Fortran version of the code covered in this paper is elementary compared to its descendants. The steam tables used are less accurate than the available commercial version written in C coupled to a Graphical User Interface (GUI). The Fortran version and input files can be downloaded at www.microfusionlab.com
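The control-volume approach such a primer builds on can be illustrated with the mass equation alone: densities live in volumes, velocities live on the junctions between them, and the donor (upwind) cell supplies the flux density. The sketch below is a toy single-phase example written for illustration, not the three-equation HEM code from the course:

```python
import numpy as np

# 1-D duct split into control volumes with closed ends; explicit
# donor-cell (upwind) mass update. All numbers are illustrative.
n = 5
rho = np.array([1.0, 1.0, 2.0, 1.0, 1.0])  # density in each volume (kg/m^3)
vol = np.full(n, 0.1)                      # cell volumes (m^3)
area, dt = 0.01, 0.01                      # junction area (m^2), step (s)
v = np.full(n + 1, 2.0)                    # junction velocities (m/s), v > 0
v[0] = v[-1] = 0.0                         # closed ends: no flux

for _ in range(50):
    # mass flow through each interior junction; donor cell is upstream
    mdot = v[1:-1] * area * rho[:-1]
    inflow = np.concatenate(([0.0], mdot))   # per-cell inflow
    outflow = np.concatenate((mdot, [0.0]))  # per-cell outflow
    rho = rho + dt / vol * (inflow - outflow)

print(round((rho * vol).sum(), 6))           # total mass is conserved
```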
Modelling of Evaporator in Waste Heat Recovery System using Finite Volume Method and Fuzzy Technique
Directory of Open Access Journals (Sweden)
Jahedul Islam Chowdhury
2015-12-01
Full Text Available The evaporator is an important component in the Organic Rankine Cycle (ORC)-based Waste Heat Recovery (WHR) system, since the effectiveness of heat transfer in this device is reflected in the efficiency of the system. When the WHR system operates under supercritical conditions, the heat transfer mechanism in the evaporator is difficult to predict because the thermo-physical properties of the fluid change with temperature. Although the conventional finite volume model can successfully capture those changes in the evaporator of the WHR process, the computation time for this method is high. To reduce the computation time, this paper develops a new fuzzy-based evaporator model and compares its performance with the finite volume method. The results show that the fuzzy technique can be applied to predict the output of the supercritical evaporator in the waste heat recovery system and can significantly reduce the required computation time. The proposed model, therefore, has the potential to be used in real-time control applications.
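A fuzzy model of the general kind described can be sketched as a zeroth-order Sugeno system: fuzzify the input with triangular memberships, then defuzzify by a weighted average of the rule outputs. All membership ranges and rule outputs below are invented for illustration and are not calibrated to the paper's WHR system:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def evaporator_outlet_temp(heat_source_temp):
    """Zeroth-order Sugeno sketch: fuzzy rules map the heat-source
    temperature (deg C) to a predicted working-fluid outlet temperature.
    Ranges and rule outputs are hypothetical."""
    rules = [
        (tri(heat_source_temp, 100, 150, 200), 90.0),   # low  -> cool outlet
        (tri(heat_source_temp, 150, 200, 250), 110.0),  # medium
        (tri(heat_source_temp, 200, 250, 300), 130.0),  # high -> hot outlet
    ]
    num = sum(w * out for w, out in rules)   # weighted-average defuzzification
    den = sum(w for w, _ in rules)
    return num / den if den else float("nan")

# 175 deg C sits halfway between the "low" and "medium" sets
print(evaporator_outlet_temp(175.0))
```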
Dynamics study of free volume properties of SMA/SMMA blends by PAL method
International Nuclear Information System (INIS)
Jiang, Z.Y.; Jiang, X.Q.; Huang, Y.J.; Lin, J.; Li, S.M.; Li, S.Z.; Hsia, Y.F.
2006-01-01
The miscibility of poly(styrene-co-maleic anhydride) (containing 7 wt% maleic anhydride)/poly(styrene-co-methyl methacrylate) (containing 40 wt% styrene) blends was previously studied. It was found that SMA70 (containing 70 wt% of SMA in the SMA/SMMA blend) is miscible at the molecular level but SMA20 is not. In this paper, the two blends were used to investigate the temperature dependence of free volume parameters. Different deviations of the free volume parameters were observed in SMA20 and SMA70, and, interestingly, the temperature dependence of the ortho-positronium lifetime τ3 of the SMA20 mixture exhibits two breaks in the temperature range from 90 deg. C to 120 deg. C, which reveals that the mixture has two glass transition ranges. Also, the ortho-positronium lifetime τ3 of the SMA20 mixture is nearly constant in the temperature range from 130 deg. C to 160 deg. C. These results indicate that the SMA20 blend is phase-separated at room temperature and becomes miscible above 130 deg. C, which may be due to the steric hindrance effect of the phenyl rings of SMMA and SMA. From the deviation of the o-Ps lifetimes of SMA70, a single glass transition temperature of the SMA70 blend was shown. Combined with the previous study, it is further concluded that the PAL method is a powerful method to detect in situ the phase behavior of immiscible polymer blends and the glass transition of miscible polymer blends
International Nuclear Information System (INIS)
Achten, E.; Deblaere, K.; Damme, F. van; Kunnen, M.; Wagter, C. de; Boon, P.; Reuck, J. de
1998-01-01
We studied the intra- and interobserver variability of volume measurements of the hippocampus (HC) and the amygdala as applied to the detection of HC atrophy in patients with complex partial seizures (CPE), measuring the volumes of the HC and amygdala of 11 normal volunteers and 12 patients with presumed CPE, using the manual ray-tracing method. Two independent observers performed these measurements twice each using home-made software. The intra- and interobserver variability of the absolute volumes and of the normalised left-to-right volume differences (δV) between the HC (δV_HC), the amygdala (δV_A) and the sum of both (δV_HCA) were assessed. In our mainly right-handed normal subjects, the right HC and amygdala were on average 0.05 and 0.03 ml larger, respectively, than on the left. The interobserver variability for volume measurements in normal subjects was 1.80 ml for the HC and 0.82 ml for the amygdala; the intraobserver variability was roughly one third of these values. The interobserver variability coefficient in normal subjects was 3.6% for δV_HCA, 4.7% for δV_HC and 7.3% for δV_A. The intraobserver variability coefficient was 3.4% for δV_HCA, 4.2% for δV_HC and 5.6% for δV_A. The variability in patients was the same for volume differences less than 5% either side of the interval for normality, but was higher when large volume differences were encountered, which is probably due to the lack of thresholding and/or normalisation. Cutoff values for lateralisation with the δV were defined. No intra- or interobserver lateralisation differences were encountered with δV_HCA and δV_HC. From these observations we conclude that the manual ray-tracing method is a robust method for lateralisation in patients with TLE. Due to its higher variability, this method is less suited to measuring absolute volumes. (orig.)
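A normalised left-to-right volume difference of the kind used above can be computed as below; the abstract does not spell out the normalisation, so the difference-over-mean convention and the example volumes here are assumptions for illustration:

```python
def delta_v(v_left, v_right):
    """Normalised left-to-right volume difference in percent, taken here
    as the difference divided by the mean of the two sides (an assumed
    convention; the paper may normalise differently)."""
    return 100.0 * (v_right - v_left) / ((v_right + v_left) / 2.0)

# hypothetical hippocampal volumes (ml): right slightly larger than left
print(round(delta_v(2.95, 3.00), 2))
```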
Energy Technology Data Exchange (ETDEWEB)
Achten, E.; Deblaere, K.; Damme, F. van; Kunnen, M. [MR Department 1K12, University Hospital Gent (Belgium); Wagter, C. de [Department of Radiotherapy and Nuclear Medicine, University Hospital Gent (Belgium); Boon, P.; Reuck, J. de [Department of Neurology, University Hospital Gent (Belgium)
1998-09-01
We studied the intra- and interobserver variability of volume measurements of the hippocampus (HC) and the amygdala as applied to the detection of HC atrophy in patients with complex partial seizures (CPE), measuring the volumes of the HC and amygdala of 11 normal volunteers and 12 patients with presumed CPE, using the manual ray-tracing method. Two independent observers performed these measurements twice each using home-made software. The intra- and interobserver variability of the absolute volumes and of the normalised left-to-right volume differences ({delta}V) between the HC ({delta}V{sub HC}), the amygdala ({delta}V{sub A}) and the sum of both ({delta}V{sub HCA}) were assessed. In our mainly right-handed normal subjects, the right HC and amygdala were on average 0.05 and 0.03 ml larger, respectively, than on the left. The interobserver variability for volume measurements in normal subjects was 1.80 ml for the HC and 0.82 ml for the amygdala; the intraobserver variability was roughly one third of these values. The interobserver variability coefficient in normal subjects was 3.6 % for {delta}V{sub HCA}, 4.7 % for {delta}V{sub HC} and 7.3 % for {delta}V{sub A}. The intraobserver variability coefficient was 3.4 % for {delta}V{sub HCA}, 4.2 % for {delta}V{sub HC} and 5.6 % for {delta}V{sub A}. The variability in patients was the same for volume differences less than 5 % either side of the interval for normality, but was higher when large volume differences were encountered, which is probably due to the lack of thresholding and/or normalisation. Cutoff values for lateralisation with the {delta}V were defined. No intra- or interobserver lateralisation differences were encountered with {delta}V{sub HCA} and {delta}V{sub HC}. From these observations we conclude that the manual ray-tracing method is a robust method for lateralisation in patients with TLE. Due to its higher variability, this method is less suited to measuring absolute volumes. (orig.) With 2 figs., 7 tabs., 23 refs.
Measurement of disintegration rates of 60Co volume sources by the sum-peak method
International Nuclear Information System (INIS)
Kawano, Takao; Ebihara, Hiroshi
1991-01-01
The sum-peak method has been applied to the determination of the disintegration rates of 60 Co volume sources (1.05 x 10 4 Bq, 1.05 x 10 3 Bq and 1.05 x 10 2 Bq, in 100-ml polyethylene bottles) using a NaI(Tl) detector 50 mm in diameter and 50 mm in height. The experimental results showed that the underestimation relative to the true disintegration rate grew as the disintegration rate decreased. It is presumed that these underestimations of the disintegration rates determined by the sum-peak method result from overestimation of the areas under the sum peaks, caused by the Compton continuum of the γ-ray (2614 keV) emitted by the naturally occurring radionuclide 208 Tl overlapping the sum peaks. (author)
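As a numerical illustration of the sum-peak relation discussed above, the sketch below uses the standard Brinkman-type formula N0 = T + A1·A2/A12; all count rates are invented for illustration and the function name is ours:

```python
def sum_peak_activity(a1, a2, a12, total):
    """Disintegration rate from the sum-peak relation N0 = T + A1*A2/A12,
    where a1, a2 are the net count rates in the two 60Co photopeaks
    (1173 and 1332 keV), a12 the net rate in the sum peak (2505 keV),
    and total the count rate over the whole spectrum."""
    return total + a1 * a2 / a12

# An overestimated sum-peak area (e.g. 208Tl Compton background at 2614 keV
# leaking under the sum peak, inflating a12 by 20%) lowers the estimate,
# consistent with the underestimation reported in the abstract:
n0_clean = sum_peak_activity(900.0, 850.0, 80.0, 2000.0)
n0_contaminated = sum_peak_activity(900.0, 850.0, 96.0, 2000.0)
```

Since a12 appears in the denominator, any background counted into the sum peak directly depresses the inferred disintegration rate.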
Gas permeation measurement under defined humidity via constant volume/variable pressure method
Jan Roman, Pauls
2012-02-01
Many industrial gas separations in which membrane processes are feasible entail high water vapour contents, as in CO2 separation from flue gas in carbon capture and storage (CCS), or in biogas/natural gas processing. Studying the effect of water vapour on gas permeability through polymeric membranes is essential for materials design and for the optimization of these membrane applications. In particular, for amine-based CO2-selective facilitated transport membranes, water vapour is necessary for carrier-complex formation (Matsuyama et al., 1996; Deng and Hägg, 2010; Liu et al., 2008; Shishatskiy et al., 2010) [1-4]. Conventional polymeric membrane materials can also change their permeation behaviour owing to water-induced swelling (Potreck, 2009) [5]. Here we describe a simple approach to gas permeability measurement in the presence of water vapour, in the form of a modified constant volume/variable pressure method (pressure increase method). © 2011 Elsevier B.V.
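The constant volume/variable pressure evaluation itself can be sketched from the ideal-gas law: the molar flow into the fixed downstream volume is n_dot = V·(dp/dt)/(R·T), and dividing by area and driving pressure gives the permeance. The numbers in the test are illustrative, not from this paper:

```python
R = 8.314  # universal gas constant, J/(mol K)

def permeance_from_pressure_rise(dp_dt, v_downstream, area, temp, delta_p):
    """Permeance from the downstream pressure rise in a constant-volume /
    variable-pressure cell: n_dot = V * (dp/dt) / (R * T), then flux per
    unit membrane area and partial-pressure difference."""
    n_dot = v_downstream * dp_dt / (R * temp)  # mol/s
    return n_dot / (area * delta_p)            # mol/(s m^2 Pa)
```

Permeability follows by multiplying the permeance by the membrane thickness; under humid conditions the driving force must be the partial-pressure difference of the permeating gas, not the total pressure.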
Test Functions for Three-Dimensional Control-Volume Mixed Finite-Element Methods on Irregular Grids
National Research Council Canada - National Science Library
Naff, R. L; Russell, T. F; Wilson, J. D
2000-01-01
.... For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error...
Qin, J. J.; Jones, M.; Shiota, T.; Greenberg, N. L.; Firstenberg, M. S.; Tsujino, H.; Zetts, A. D.; Sun, J. P.; Cardon, L. A.; Odabashian, J. A.;
2000-01-01
AIM: The aim of this study was to investigate the feasibility and accuracy of using symmetrically rotated apical long axis planes for the determination of left ventricular (LV) volumes with real-time three-dimensional echocardiography (3DE). METHODS AND RESULTS: Real-time 3DE was performed in six sheep during 24 haemodynamic conditions with electromagnetic flow measurements (EM), and in 29 patients with magnetic resonance imaging measurements (MRI). LV volumes were calculated by Simpson's rule with five 3DE methods (i.e. apical biplane, four-plane, six-plane, nine-plane (in which the angle between each long axis plane was 90 degrees, 45 degrees, 30 degrees or 20 degrees, respectively) and standard short axis views (SAX)). Real-time 3DE correlated well with EM for LV stroke volumes in animals (r=0.68-0.95) and with MRI for absolute volumes in patients (r-values=0.93-0.98). However, agreement between MRI and apical nine-plane, six-plane, and SAX methods in patients was better than those with apical four-plane and bi-plane methods (mean difference = -15, -18, -13, vs. -31 and -48 ml for end-diastolic volume, respectively, Pmethods of real-time 3DE correlated well with reference standards for calculating LV volumes. Balancing accuracy and required time for these LV volume measurements, the apical six-plane method is recommended for clinical use.
Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
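One of the non-parametric techniques of the kind compared above can be sketched as follows: treat a continuous series as truth, subsample it at a fixed interval for every possible phase, and take the rms deviation of the subsampled means. The series below is synthetic and purely illustrative:

```python
import random

random.seed(0)
# synthetic hourly rain-rate series standing in for continuous radar data
series = [max(0.0, random.gauss(0.5, 1.0)) for _ in range(720)]  # 30 days
truth = sum(series) / len(series)

def subsampled_mean(series, interval, phase):
    """Mean of samples taken every `interval` hours starting at `phase`."""
    picks = series[phase::interval]
    return sum(picks) / len(picks)

# non-parametric estimate: rms error over all sampling phases
interval = 6  # e.g. a satellite overpass every 6 h
errors = [subsampled_mean(series, interval, k) - truth for k in range(interval)]
rms_uncertainty = (sum(e * e for e in errors) / interval) ** 0.5
```

Because the phases partition the record evenly, the phase means average exactly to the truth; only their spread, the sampling uncertainty, survives.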
Hendrick, Elizabeth M; Tino, Vincent R; Hanna, Steven R; Egan, Bruce A
2013-07-01
The U.S. Environmental Protection Agency (EPA) plume volume molar ratio method (PVMRM) and the ozone limiting method (OLM) are implemented in the AERMOD model to predict the 1-hr average NO2/NO(x) concentration ratio. These ratios are multiplied by the AERMOD-predicted NO(x) concentration to predict the 1-hr average NO2 concentration. This paper first briefly reviews PVMRM and OLM and points out some scientific parameterizations that could be improved (such as the specification of relative dispersion coefficients), and then discusses an evaluation of the PVMRM and OLM methods as implemented in AERMOD using a new data set. While AERMOD has undergone many model evaluation studies in its default mode, PVMRM and OLM are nondefault options, and to date only three NO2 field data sets have been used in their evaluations. Here the AERMOD/PVMRM and AERMOD/OLM codes are evaluated with a new data set from a northern Alaskan village with a small power plant. Hourly pollutant concentrations (NO, NO2, ozone) as well as meteorological variables were measured at a single monitor 500 m from the plant. Power plant operating parameters and emissions were calculated from hourly operator logs. Hourly observations covering 1 yr were considered, but the evaluations used only hours when the wind was in a 60-degree sector including the monitor and when concentrations were above a threshold. PVMRM is found to have little bias in predictions of the C(NO2)/C(NO(x)) ratio, which mostly ranged from 0.2 to 0.4 at this site; OLM overpredicted the ratio. AERMOD overpredicts the maximum NO(x) concentration but has an underprediction bias for lower concentrations. AERMOD/PVMRM overpredicts the maximum C(NO2) by about 50%, while AERMOD/OLM overpredicts by a factor of 2. For the 381 hours evaluated, the relative mean bias in C(NO2) predictions is near zero for AERMOD/PVMRM, while it reflects a factor-of-2 overprediction for AERMOD/OLM. This study was initiated because the new stringent 1-hr NO2
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails: one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
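A diverging mean of the kind described above can be illustrated with a Pareto law, a standard example (our choice, not necessarily one of the paper's models): for tail exponent alpha ≤ 1 the theoretical mean is infinite, so sample means grow without bound instead of settling down:

```python
import random

random.seed(1)

def pareto_sample_mean(alpha, n):
    """Mean of n draws from a Pareto(alpha) law on [1, inf) via the
    inverse CDF x = u**(-1/alpha). For alpha > 1 the mean is
    alpha/(alpha - 1); for alpha <= 1 it diverges and sample means
    grow without bound as n increases."""
    return sum(random.random() ** (-1.0 / alpha) for _ in range(n)) / n
```

With alpha = 3 the sample mean converges to 3/2; with alpha = 0.5 repeated runs with growing n produce ever larger values, the "Average is Over" regime.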
Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang
2018-01-01
A volume metrology method based on internal electro-optical distance ranging is established for large vertical energy-storage tanks. After analysing the mathematical model for vertical tank volume calculation, the key point-cloud processing algorithms, such as gross-error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m3 is selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.
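The integration step described above can be sketched in a few lines, assuming circular cross-sections whose radii have already been recovered from the point cloud at a fixed vertical spacing (a simplification of the paper's pipeline):

```python
import math

def tank_volume(radii, dz):
    """Volume by integrating horizontal cross-sections A(z) = pi * r(z)**2
    along the vertical axis with the trapezoidal rule; `radii` are section
    radii recovered from the point cloud at vertical spacing `dz`."""
    areas = [math.pi * r * r for r in radii]
    return sum(0.5 * (a, b)[0] * dz + 0.5 * b * dz for a, b in zip(areas, areas[1:])) if False else \
        sum(0.5 * (a + b) * dz for a, b in zip(areas, areas[1:]))
```

For a perfect cylinder the trapezoidal rule is exact; real tanks need the per-section radii precisely because the shell radius varies with height and liquid load.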
Energy Technology Data Exchange (ETDEWEB)
Tamura, M; Shirato, H [Department of Radiation Oncology, Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido (Japan); Ito, Y [Department of Biostatistics, Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido (Japan); Sakurai, H; Mizumoto, M; Kamizawa, S [Proton Medical Research Center, University of Tsukuba, Tsukuba, Ibaraki (Japan); Murayama, S; Yamashita, H [Proton Therapy Division, Shizuoka Cancer Center Hospital, Nagaizumi, Shizuoka (Japan); Takao, S; Suzuki, R [Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido (Japan)
2016-06-15
Purpose: To examine how much the lifetime attributable risk (LAR), as an in silico surrogate marker of radiation-induced secondary cancer, would be lowered by using proton beam therapy (PBT) in place of intensity modulated x-ray therapy (IMXT) in pediatric patients. Methods: From 242 pediatric patients with cancers who were treated with PBT, 26 patients were selected by random sampling after stratification into four categories: a) brain, head, and neck, b) thoracic, c) abdominal, and d) whole craniospinal (WCNS) irradiation. IMXT was re-planned using the same computed tomography and regions of interest. Using the dose volume histograms (DVH) of PBT and IMXT, the LAR of Schneider et al. was calculated for the same patient. The four published dose-response models for carcinoma induction: i) full model, ii) bell-shaped model, iii) plateau model, and iv) linear model, were tested for organs at risk. When more than one dose-response model was available, the LAR for the patient was calculated by averaging the LAR for each dose-response model. Results: Calculation of the LARs of PBT and IMXT based on DVH was feasible for all patients. The mean±standard deviation of the cumulative LAR difference between PBT and IMXT for the four categories was a) 0.77±0.44% (n=7, p=0.0037), b) 23.1±17.2% (n=8, p=0.0067), c) 16.4±19.8% (n=8, p=0.0525), and d) 49.9±21.2% (n=3, p=0.0275, one-tailed t-test), respectively. The LAR was significantly lower with PBT than with IMXT for the brain, head, and neck region, the thoracic region, and whole craniospinal irradiation. Conclusion: In pediatric patients who had undergone PBT, the LAR of PBT was significantly lower than the LAR of IMXT estimated by in silico modeling. This method was suggested to be useful as an in silico surrogate marker of secondary cancer induced by different radiotherapy techniques. This research was supported by the Translational Research Network Program, JSPS KAKENHI Grant No. 15H04768 and the Global Institution for
International Nuclear Information System (INIS)
Tamura, M; Shirato, H; Ito, Y; Sakurai, H; Mizumoto, M; Kamizawa, S; Murayama, S; Yamashita, H; Takao, S; Suzuki, R
2016-01-01
Purpose: To examine how much the lifetime attributable risk (LAR), as an in silico surrogate marker of radiation-induced secondary cancer, would be lowered by using proton beam therapy (PBT) in place of intensity modulated x-ray therapy (IMXT) in pediatric patients. Methods: From 242 pediatric patients with cancers who were treated with PBT, 26 patients were selected by random sampling after stratification into four categories: a) brain, head, and neck, b) thoracic, c) abdominal, and d) whole craniospinal (WCNS) irradiation. IMXT was re-planned using the same computed tomography and regions of interest. Using the dose volume histograms (DVH) of PBT and IMXT, the LAR of Schneider et al. was calculated for the same patient. The four published dose-response models for carcinoma induction: i) full model, ii) bell-shaped model, iii) plateau model, and iv) linear model, were tested for organs at risk. When more than one dose-response model was available, the LAR for the patient was calculated by averaging the LAR for each dose-response model. Results: Calculation of the LARs of PBT and IMXT based on DVH was feasible for all patients. The mean±standard deviation of the cumulative LAR difference between PBT and IMXT for the four categories was a) 0.77±0.44% (n=7, p=0.0037), b) 23.1±17.2% (n=8, p=0.0067), c) 16.4±19.8% (n=8, p=0.0525), and d) 49.9±21.2% (n=3, p=0.0275, one-tailed t-test), respectively. The LAR was significantly lower with PBT than with IMXT for the brain, head, and neck region, the thoracic region, and whole craniospinal irradiation. Conclusion: In pediatric patients who had undergone PBT, the LAR of PBT was significantly lower than the LAR of IMXT estimated by in silico modeling. This method was suggested to be useful as an in silico surrogate marker of secondary cancer induced by different radiotherapy techniques. This research was supported by the Translational Research Network Program, JSPS KAKENHI Grant No. 15H04768 and the Global Institution for
International Nuclear Information System (INIS)
Urbatsch, Todd J.; Evans, Thomas M.; Hughes, H. Grady
2001-01-01
Monte Carlo particle transport plays an important role in some multi-physics simulations. These simulations, which may additionally involve deterministic calculations, typically use a hexahedral or tetrahedral mesh. Trilinear hexahedrons are attractive for physics calculations because faces between cells are uniquely defined, distance-to-boundary calculations are deterministic, and hexahedral meshes tend to require fewer cells than tetrahedral meshes. We discuss one aspect of Monte Carlo transport: sampling a position in a tri-linear hexahedron, which is made up of eight control points, or nodes, and six bilinear faces, where each face is defined by four non-coplanar nodes in three-dimensional Cartesian space. We derive, code, and verify the exact sampling method and propose an approximation to it. Our proposed approximate method uses about one-third the memory and can be twice as fast as the exact sampling method, but we find that its inaccuracy limits its use to well-behaved hexahedrons. Daunted by the expense of the exact method, we propose an alternate approximate sampling method. First, calculate beforehand an approximate volume for each corner of the hexahedron by taking one-eighth of the volume of an imaginary parallelepiped defined by the corner node and the three nodes to which it is directly connected. For the sampling, assume separability in the parameters, and sample each parameter, in turn, from a linear pdf defined by the sum of the four corner volumes at each limit (-1 and 1) of the parameter. This method ignores the quadratic portion of the pdf, but it requires less storage, has simpler sampling, and needs no extra, on-the-fly calculations. We simplify verification by designing tests that consist of one or more cells that entirely fill a unit cube. Uniformly sampling complicated cells that fill a unit cube will result in uniformly sampling the unit cube. Unit cubes are easily analyzed. The first problem has four wedges (or tents, or A frames) whose
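Per parameter, the approximate separable sampling described above reduces to drawing from a density on [-1, 1] that is linear in the parameter, with endpoint weights given by the summed corner volumes. A minimal inverse-CDF sketch of that one-dimensional draw (the function name and interface are ours):

```python
def sample_linear(w0, w1, u):
    """Draw x in [-1, 1] from a density linear in x, proportional to w0 at
    x = -1 and to w1 at x = +1, given a uniform variate u in [0, 1).
    On t = (x + 1)/2 in [0, 1] the CDF is ((w1 - w0)*t**2 + 2*w0*t)/(w0 + w1);
    inverting the quadratic gives the sample."""
    if abs(w1 - w0) < 1e-12 * (w0 + w1):
        return -1.0 + 2.0 * u  # equal weights: plain uniform draw
    t = (-w0 + ((1.0 - u) * w0 * w0 + u * w1 * w1) ** 0.5) / (w1 - w0)
    return -1.0 + 2.0 * t
```

In the scheme above this is called three times per particle, once per logical coordinate, with w0 and w1 set to the sums of the four approximate corner volumes on the corresponding faces; the quadratic part of the true per-parameter pdf is deliberately ignored, which is the source of the method's inaccuracy on badly shaped cells.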
International Nuclear Information System (INIS)
Hergan, Klaus; Schuster, Antonius; Fruehwald, Julia; Mair, Michael; Burger, Ralph; Toepker, Michael
2008-01-01
Purpose: To compare ventricular volume measurement using a volumetric approach in the three standard cardiac planes with ventricular volume estimation by a geometrical model, the area-length method (ALM). Materials and methods: Fifty-six healthy volunteers (27 males, 29 females) were examined on a 1.5 T MR unit with ECG-triggered steady-state free precession (SSFP) cine MR sequences and parallel image acquisition. Multiple slices in standardized planes, including the short-axis view (sa), 4-chamber view (4ch), and left and right 2-chamber views (2ch), were used to cover the whole heart. End-systolic and end-diastolic ventricular volumes (EDV, ESV), stroke volume (SV), and ejection fraction (EF) were calculated with Simpson's rule in all planes and with ALM in the 2ch and 4ch planes. Global function parameters measured in the sa plane were compared with those obtained in the other imaging planes. Results: A very good correlation is observed when comparing functional parameters calculated with Simpson's rule in all imaging planes: for instance, the mean EDV/ESV of the left and right ventricle of the female population group measured in sa, 4ch, and 2ch: left ventricle EDV/ESV 114.3/44.4, 120.9/46.5, and 117.7/45.3 ml; right ventricle EDV/ESV 106.6/46.0, 101.2/41.1, and 103.5/43.0 ml. Functional parameters of the left ventricle calculated with ALM in 2ch and 4ch agree with the parameters obtained in sa with Simpson's rule to within 5-10%: for instance, the EDV/ESV of the left ventricle of the male population group measured in sa, 4ch, and 2ch: 160.3/63.5, 163.1/59.0, and 167.0/65.7 ml. Functional parameters of the right ventricle measured with ALM in 4ch are 40-50% lower, and those calculated in 2ch almost twice as high, compared with the parameters obtained in sa with Simpson's rule: for instance, male right ventricular EDV/ESV measured in sa, 4ch, and 2ch: 153.4/68.1, 97.5/34.5, and 280.2/123.2 ml. The EF correlates for all imaging planes measured with the Simpson's rule
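The two volume models compared above can be sketched directly. In cardiac imaging "Simpson's rule" conventionally means disk summation over parallel slices, and the single-plane area-length model treats the ventricle as an ellipsoid of revolution, V = 8A²/(3πL); the functions below are our illustrative versions:

```python
import math

def disk_summation_volume(slice_areas, thickness):
    """'Simpson's rule' as commonly implemented in cardiac imaging:
    sum of parallel slice areas times slice thickness (method of disks)."""
    return sum(slice_areas) * thickness

def area_length_volume(area, length):
    """Single-plane area-length model of a ventricle as an ellipsoid of
    revolution: V = 8 * A**2 / (3 * pi * L)."""
    return 8.0 * area ** 2 / (3.0 * math.pi * length)
```

A convenient sanity check: for a unit sphere (mid-plane area A = π, long axis L = 2) the area-length formula returns exactly 4π/3. The model's geometric assumption is also why ALM tracks the near-ellipsoidal left ventricle well but fails badly for the crescent-shaped right ventricle, as reported above.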
Energy Technology Data Exchange (ETDEWEB)
Odano, Ikuo; Takahashi, Naoya; Ohtaki, Hiroh; Noguchi, Eikichi; Hatano, Masayoshi; Yamasaki, Yoshihiro; Nishihara, Mamiko (Niigata Univ. (Japan). School of Medicine); Ohkubo, Masaki; Yokoi, Takashi
1993-10-01
We developed a new graphic method using N-isopropyl-p-[[sup 123]I]iodoamphetamine (IMP) and SPECT of the brain: a graph on which all three parameters, cerebral blood flow, distribution volume (V[sub d]) and the delayed-to-early count ratio (Delayed/Early ratio), can be evaluated simultaneously. The kinetics of [sup 123]I-IMP in the brain was analyzed with a 2-compartment model, and a standard input function was prepared by averaging the arterial time-activity curves of [sup 123]I-IMP of 6 subjects, including patients with small cerebral infarction and 2 normal controls. Applying this method to the differential diagnosis between Parkinson's disease and progressive supranuclear palsy, we were able to differentiate the two at a glance, because the distribution volume of the frontal lobe was significantly decreased in Parkinson's disease (mean±SD: 26±6 ml/g). This method was clinically useful. We think that the distribution volume of [sup 123]I-IMP may reflect its retention mechanism in the brain, and that the values are related to amines, especially to dopamine receptors and their metabolism. (author).
Energy Technology Data Exchange (ETDEWEB)
Odano, Ikuo; Takahashi, Naoya; Ohtaki, Hiroh; Noguchi, Eikichi; Hatano, Masayoshi; Yamasaki, Yoshihiro; Nishihara, Mamiko [Niigata Univ. (Japan). School of Medicine; Ohkubo, Masaki; Yokoi, Takashi
1993-10-01
We developed a new graphic method using N-isopropyl-p-[[sup 123]I]iodoamphetamine (IMP) and SPECT of the brain: a graph on which all three parameters, cerebral blood flow, distribution volume (V[sub d]) and the delayed-to-early count ratio (Delayed/Early ratio), can be evaluated simultaneously. The kinetics of [sup 123]I-IMP in the brain was analyzed with a 2-compartment model, and a standard input function was prepared by averaging the arterial time-activity curves of [sup 123]I-IMP of 6 subjects, including patients with small cerebral infarction and 2 normal controls. Applying this method to the differential diagnosis between Parkinson's disease and progressive supranuclear palsy, we were able to differentiate the two at a glance, because the distribution volume of the frontal lobe was significantly decreased in Parkinson's disease (mean±SD: 26±6 ml/g). This method was clinically useful. We think that the distribution volume of [sup 123]I-IMP may reflect its retention mechanism in the brain, and that the values are related to amines, especially to dopamine receptors and their metabolism. (author).
Directory of Open Access Journals (Sweden)
A. D. Kliukvin
2014-01-01
Full Text Available The influence of the temperature dependence of the thermophysical properties of air on the accuracy of heat-transfer solutions for turbulent flow is investigated theoretically for different methods of averaging the Navier-Stokes equations. The practicability of using a particular averaging method when the solution of a heat-transfer problem must be refined to account for variable air properties is analysed. It is shown that Reynolds and Favre averaging, the most common methods of averaging the Navier-Stokes equations, are not effective in this case, because they describe inaccurately the behaviour of large-scale turbulent structures, which depends strongly on the geometry of the particular flow. More universal methods of turbulent flow simulation, not based on averaging over all turbulent scales, are therefore necessary. Instead of Reynolds or Favre averaging, large eddy simulation can be used, in which turbulent structures are divided into small-scale and large-scale ones and only the small-scale structures are modelled; this approach, however, increases the required computational power by 2-3 orders of magnitude. For the different averaging methods, the form of the additional terms of the averaged Navier-Stokes equations that arise when fluctuations of the thermophysical properties of the air are taken into account is obtained. Using the example of a submerged heated air jet, the errors in the convective and conductive components of the heat flux and in the viscous stresses that occur when the dependence of the air thermophysical properties on the averaged flow temperature is neglected are evaluated. It is shown that the greatest increase of solution accuracy can be obtained for flows with high temperature gradients. Finally, using infinite Taylor series, it is found that neglecting this dependence leads to an underestimation of the convective and conductive components of the heat flux and
Average nuclear surface properties
International Nuclear Information System (INIS)
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. The definition is also extended to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, up to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body
Budiarso; Adanta, Dendy; Warjito; Siswantara, A. I.; Saputra, Pradhana; Dianofitra, Reza
2018-03-01
Rapid economic and population growth in Indonesia leads to increased energy consumption, including electricity needs. Pico hydro is considered a suitable solution because investment and operating costs are fairly low. Additionally, Indonesia has many remote areas with high hydro-energy potential. The overshot waterwheel is one technology suitable for remote areas owing to its ease of operation and maintenance. This study attempts to optimize the bucket dimensions under the available conditions. The optimization also benefits the amount of generated power, because all available energy is utilized maximally. An analytical method is used to evaluate the volume of water contained in the buckets of the overshot waterwheel. In general, two stages are performed. First, the volume of water contained in each active bucket is calculated; if the total volume of water contained is less than the discharge available to the active buckets, the wheel width is recalculated. Second, the torque of each active bucket is calculated to determine the power output. As a result, the mechanical power generated by the waterwheel is 305 W, with an efficiency of 28%.
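The second stage above, torque summation and the efficiency check, can be sketched as follows; the formulas P = (Σ τᵢ)·ω and η = P/(ρgQH) are the standard relations, and all names and numbers are ours, not taken from the study:

```python
RHO_WATER = 1000.0  # water density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def wheel_power(bucket_torques, omega):
    """Mechanical power from the torques of all active buckets:
    P = (sum of tau_i) * omega, with omega in rad/s."""
    return sum(bucket_torques) * omega

def wheel_efficiency(p_mech, discharge, head):
    """Ratio of mechanical power to the available hydraulic power
    rho * g * Q * H, with discharge Q in m^3/s and head H in m."""
    return p_mech / (RHO_WATER * G * discharge * head)
```

With the reported 305 W of mechanical power, an efficiency of 28% implies roughly 1.1 kW of available hydraulic power at the site.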
SU-E-J-35: Using CBCT as the Alternative Method of Assessing ITV Volume
Energy Technology Data Exchange (ETDEWEB)
Liao, Y; Turian, J; Templeton, A; Redler, G; Chu, J [Rush University Medical Center, Chicago, IL (United States)
2015-06-15
Purpose: To study the accuracy of internal target volumes (ITVs) created on cone-beam CT (CBCT) by comparing the visible target volume on CBCT to the volumes (GTV, ITV, and PTV) outlined on free-breathing (FB) CT and 4DCT. Methods: A Quasar cylindrical motion phantom with a 3-cm-diameter ball (14.14 cc) embedded within a cork insert was set up to simulate respiratory motion with a period of 4 seconds and amplitudes of 2 cm superoinferiorly and 1 cm anteroposteriorly. FBCT and 4DCT images were acquired. A PTV-4D was created on the 4DCT by applying a uniform margin of 5 mm to the ITV-CT. A PTV-FB was created by applying a margin of the motion range plus 5 mm, i.e. a total of 1.5 cm laterally and 2.5 cm superoinferiorly, to the GTV outlined on the FBCT. A dynamic conformal arc was planned to treat the PTV-FB with a 1-mm margin. A CBCT was acquired before the treatment, on which the target was delineated. During the treatment, the position of the target was monitored using the EPID in cine mode. Results: ITV-CBCT and ITV-CT were measured to be 56.6 and 62.7 cc, respectively, with a Dice coefficient (DC) of 0.94 and a disagreement in center of mass (COM) of 0.59 mm. On the other hand, GTV-FB was 11.47 cc, 19% less than the known volume of the ball. PTV-FB and PTV-4D were 149 and 116 cc, with a DC of 0.71. Part of the ITV-CT was not enclosed by the PTV-FB despite the large margin. The cine EPID images confirmed geometric misses of the target. Similar under-coverage was observed in one clinical case and captured by the CBCT, where the implanted fiducials moved outside the PTV-FB. Conclusion: ITV-CBCT is in good agreement with ITV-CT. When 4DCT is not available, CBCT can be an effective alternative for determining and verifying the PTV margin.
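The Dice coefficient used above is the standard overlap measure DC = 2|A ∩ B| / (|A| + |B|); a one-line sketch:

```python
def dice_coefficient(vol_a, vol_b, vol_overlap):
    """Dice coefficient DC = 2 * |A intersect B| / (|A| + |B|)
    for two contoured volumes and their overlap volume."""
    return 2.0 * vol_overlap / (vol_a + vol_b)
```

The overlap volume itself is not reported in the abstract, but with the stated ITV volumes of 56.6 and 62.7 cc an overlap of about 56.1 cc would reproduce the reported DC of 0.94.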
Andriani, Tri; Irawan, Mohammad Isa
2017-08-01
Ebola Virus Disease (EVD) is a disease caused by a virus of the genus Ebolavirus (EBOV), family Filoviridae. Ebola virus is classified into five types, namely Zaire ebolavirus (ZEBOV), Sudan ebolavirus (SEBOV), Bundibugyo ebolavirus (BEBOV), Tai Forest ebolavirus, also known as Cote d'Ivoire ebolavirus (CIEBOV), and Reston ebolavirus (REBOV). The kinship of Ebola virus types can be identified using phylogenetic trees. In this study, the phylogenetic tree is constructed by the UPGMA method, preceded by multiple alignment using a progressive method. The resulting phylogenetic tree shows that Tai Forest ebolavirus is closely related to Bundibugyo ebolavirus, even though the regions in which these two ebola epidemics spread are located far apart. The genetic distance between Bundibugyo ebolavirus and Tai Forest ebolavirus is 0.3725.
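The clustering step of UPGMA can be sketched in a toy implementation (ours, not the authors' code): repeatedly merge the two closest clusters and recompute distances as size-weighted averages, which is exactly the "unweighted pair group method with arithmetic mean":

```python
def upgma(dist, labels):
    """Build a nested grouping string by UPGMA from a symmetric distance
    matrix `dist` over taxa named in `labels`."""
    clusters = {i: (labels[i], 1) for i in range(len(labels))}
    d = {(i, j): dist[i][j]
         for i in range(len(labels)) for j in range(i + 1, len(labels))}
    nxt = len(labels)
    while len(clusters) > 1:
        (i, j), _ = min(d.items(), key=lambda kv: kv[1])  # closest pair
        (name_i, ni), (name_j, nj) = clusters.pop(i), clusters.pop(j)
        # drop distances involving the merged pair, keep the rest
        new_d = {k: v for k, v in d.items() if i not in k and j not in k}
        # size-weighted average distance from the merged cluster to the rest
        for k in clusters:
            dik = d[(min(i, k), max(i, k))]
            djk = d[(min(j, k), max(j, k))]
            new_d[(min(k, nxt), max(k, nxt))] = (ni * dik + nj * djk) / (ni + nj)
        clusters[nxt] = ("({0},{1})".format(name_i, name_j), ni + nj)
        d, nxt = new_d, nxt + 1
    return next(iter(clusters.values()))[0]
```

On a toy three-taxon matrix where A and B are closest, the method groups A and B first and then attaches C, mirroring how the paper groups Tai Forest with Bundibugyo before the more distant species.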
Nangia, Nishant; Patankar, Neelesh A.; Bhalla, Amneet P. S.
2017-11-01
Fictitious domain methods for simulating fluid-structure interaction (FSI) have been gaining popularity in the past few decades because of their robustness in handling arbitrarily moving bodies. Often the transient net hydrodynamic forces and torques on the body are desired quantities for these types of simulations. In past studies using immersed boundary (IB) methods, force measurements are contaminated with spurious oscillations due to the evaluation of possibly discontinuous spatial velocity or pressure gradients within or on the surface of the body. Based on an application of the Reynolds transport theorem, we present a moving control volume (CV) approach to computing the net forces and torques on a moving body immersed in a fluid. The approach is shown to be accurate for a wide array of FSI problems, including flow past stationary and moving objects, Stokes flow, and high Reynolds number free-swimming. The approach only requires far-field (smooth) velocity and pressure information, thereby suppressing spurious force oscillations and eliminating the need for any filtering. The proposed moving CV method is not limited to a specific IB method and is straightforward to implement within an existing parallel FSI simulation software. This work is supported by NSF (Award Numbers SI2-SSI-1450374, SI2-SSI-1450327, and DGE-1324585), the US Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231), and NIH (Award Number HL117163).
Stress strain modelling of casting processes in the framework of the control volume method
DEFF Research Database (Denmark)
Hattel, Jesper; Andersen, Søren; Thorborg, Jesper
1998-01-01
Realistic computer simulations of casting processes call for the solution of both thermal, fluid-flow and stress/strain related problems. The multitude of the influencing parameters, and their non-linear, transient and temperature dependent nature, make the calculations complex. Therefore the need for fast, flexible, multidimensional numerical methods is obvious. The basis of the deformation and stress/strain calculation is a transient heat transfer analysis including solidification. This paper presents an approach where the stress/strain and the heat transfer analysis use the same computational domain, which is highly convenient. The basis of the method is the control volume finite difference approach on structured meshes. The basic assumptions of the method are shortly reviewed and discussed. As for other methods which aim at application oriented analysis of casting deformations and stresses, the present model is based on the mainly decoupled representation of the thermal, mechanical and microstructural processes. Examples of industrial applications, such as predicting residual deformations in castings and stress levels in die casting dies, are presented.
Syrakos, Alexandros; Varchanis, Stylianos; Dimakopoulos, Yannis; Goulas, Apostolos; Tsamopoulos, John
2017-12-01
Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the gradient operator has received less attention despite its fundamental importance with regards to the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or Green-Gauss) scheme and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively. This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes are then used within a FVM to solve a simple diffusion equation on unstructured grids generated by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement. On the other hand, use of the LS gradient leads to second-order accurate results, as does the use of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the common DT gradient consistent at almost no extra cost. The numerical tests are performed using both an in-house code and the popular public domain partial differential equation solver OpenFOAM.
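The consistency property behind the least-squares gradient's accuracy can be checked directly: on an arbitrary (unstructured) stencil it reproduces the gradient of a linear field exactly, which is what guarantees at least first-order accuracy. A minimal sketch, with hypothetical cell-centre positions:

```python
import numpy as np

rng = np.random.default_rng(0)
xc = np.array([0.3, 0.4])                         # cell centre
nbrs = xc + rng.normal(scale=0.1, size=(6, 2))    # irregular neighbour centres

# A linear field: its exact gradient is (2, -3) everywhere
phi = lambda p: 2.0*p[..., 0] - 3.0*p[..., 1] + 1.0

d = nbrs - xc                 # displacement vectors to neighbours
dphi = phi(nbrs) - phi(xc)    # field differences
# Least-squares gradient: solve d @ grad ~= dphi in the least-squares sense
grad, *_ = np.linalg.lstsq(d, dphi, rcond=None)
```

For a linear field the residual is zero and `grad` equals (2, −3) to machine precision, regardless of how irregular the stencil is; the common Green-Gauss variant with arithmetic face averaging does not share this property on skewed meshes, which is the paper's central point.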
International Nuclear Information System (INIS)
Mishra, Subhash C.; Roy, Hillol K.
2007-01-01
The lattice Boltzmann method (LBM) was used to solve the energy equation of a transient conduction-radiation heat transfer problem. The finite volume method (FVM) was used to compute the radiative information. To study the compatibility of the LBM for the energy equation and the FVM for the radiative transfer equation, transient conduction and radiation heat transfer problems in 1-D planar and 2-D rectangular geometries were considered. In order to establish the suitability of the LBM, the energy equations of the two problems were also solved using the FVM of computational fluid dynamics. The FVM used in the radiative heat transfer was employed to compute the radiative information required for the solution of the energy equation using the LBM or the FVM (of the CFD). To study the compatibility and suitability of the LBM for the solution of the energy equation and the FVM for the radiative information, results were analyzed for the effects of various parameters such as the scattering albedo, the conduction-radiation parameter and the boundary emissivity. The results of the LBM-FVM combination were found to be in excellent agreement with the FVM-FVM combination. The numbers of iterations and CPU times of the two combinations were found to be comparable.
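The LBM solution of a diffusion-type energy equation can be sketched with the simplest one-dimensional two-velocity (D1Q2) lattice. The toy below (not the paper's coupled conduction-radiation solver — lattice size, relaxation time and boundary treatment are illustrative choices) relaxes a temperature field between two fixed-temperature walls and recovers the linear steady-state conduction profile:

```python
import numpy as np

nx, tau, steps = 50, 0.8, 20000
f = np.zeros((2, nx))             # populations moving right (row 0) and left (row 1)
f[:, 0] = 0.5                     # hot wall node initialised to T = 1

for _ in range(steps):
    T = f.sum(axis=0)             # temperature is the zeroth moment
    feq = 0.5*T                   # D1Q2 equilibrium (weight 1/2 per direction)
    f += (feq - f)/tau            # BGK collision step
    f[0] = np.roll(f[0], 1)       # stream right-moving populations
    f[1] = np.roll(f[1], -1)      # stream left-moving populations
    f[:, 0], f[:, -1] = 0.5, 0.0  # Dirichlet walls: T = 1 (left), T = 0 (right)

T = f.sum(axis=0)                 # approaches the linear profile 1 -> 0
```

The effective diffusivity here is (tau − 1/2) in lattice units; coupling to radiation, as in the paper, would add a source term to the collision step.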
The finite volume element (FVE) and multigrid method for the incompressible Navier-Stokes equations
International Nuclear Information System (INIS)
Gu Lizhen; Bao Weizhu
1992-01-01
The authors apply the FVE method to discretise the INS equations in the original variables, choosing the bilinear square finite element and the square finite volume. The discrete schemes of the INS equations are presented. The FMV multigrid algorithm is applied to solve the discrete system, with DGS iteration used as the smoother; a DGS distributive mode for the INS discrete system is also presented. Sample problems for the square cavity flow with Reynolds number Re≤100 are successfully calculated. The numerical solutions show that the results with one FMV cycle are satisfactory and that, when Re is not large, the FVE discrete schemes of the conservative INS equations and of the linearized non-conservative INS equations provide almost the same accuracy.
A solution of two-dimensional magnetohydrodynamic flow using the finite volume method
Directory of Open Access Journals (Sweden)
Naceur Sonia
2014-01-01
Full Text Available This paper presents the two-dimensional numerical modeling of coupled electromagnetic-hydrodynamic phenomena in a conduction MHD pump using the finite volume method. Magnetohydrodynamic problems are interdisciplinary and coupled, since the effect of the velocity field appears in the magnetic transport equations, and the interaction between the electric current and the magnetic field appears in the momentum transport equations. The resolution of the Maxwell and Navier-Stokes equations is obtained by introducing the magnetic vector potential A, the vorticity ζ and the stream function ψ. The flux density, the electromagnetic force, and the velocity are graphically presented. The simulation results agree with those obtained by the Ansys Workbench Fluent software.
Application of volume of fluid method for simulation of a droplet impacting a fiber
Directory of Open Access Journals (Sweden)
M. Khalili
2016-06-01
Full Text Available In the present work, the impact of a Newtonian drop on a horizontal thin fiber with circular cross section is simulated in 2D. The numerical simulations of the phenomena are carried out using the volume of fluid (VOF) method for tracking the free surface motion. The impact of a Newtonian droplet on a circular thin fiber (350 μm radius) is investigated numerically. The main focus of this simulation is to determine the threshold radius and velocity of a drop that is entirely captured by the fiber. The model agrees well with the experiments and demonstrates that the threshold radius generally decreases with increasing impact velocity. In other words, for velocities larger than the threshold velocity of capture, only a small portion of the fluid sticks to the solid and the rest of the drop is ejected, whereas for impact velocities smaller than the critical velocity the drop is totally captured. This threshold velocity has been determined for the case of centered impact.
Methods development for assessing air pollution control benefits. Volume V, executive summary
International Nuclear Information System (INIS)
Brookshire, D.S.; Crocker, T.D.; d'Arge, R.C.; Ben-David, S.; Kneese, A.V.; Schulze, W.D.
1979-02-01
The studies summarized by this volume represent original efforts to construct both a conceptually consistent and empirically verifiable set of methods for assessing environmental quality improvement benefits. While the state of the art does not at present make it possible to provide highly accurate estimates of the benefits of reduced human or plant exposure to air pollutants, these studies nevertheless provide a set of fundamental benchmarks on which further efforts might be built. These are: many benefits traditionally viewed as intangible and therefore non-measurable can, in fact, be measured and made comparable to economic values as expressed in markets; aesthetic and morbidity effects may dominate the measure of benefits, as opposed to previous emphases on mortality health effects; and the likely economic benefits of air quality improvements are perhaps as much as an order of magnitude greater than previous studies had hypothesized.
Development of a high-order finite volume method with multiblock partition techniques
Directory of Open Access Journals (Sweden)
E. M. Lemos
2012-03-01
Full Text Available This work deals with a new numerical methodology to solve the Navier-Stokes equations, based on a finite volume method applied to structured meshes with co-located grids. High-order schemes used to approximate advective, diffusive and non-linear terms, combined with multiblock partition techniques, are the main contributions of this paper. The combination of these two techniques resulted in a computer code that offers high accuracy, due to the high-order schemes, and great flexibility to generate locally refined meshes, based on the multiblock approach. This computer code has been able to obtain results with accuracy higher than or equal to that of results obtained using classical procedures, with considerably less computational effort.
FINITE VOLUME METHOD FOR SOLVING THREE-DIMENSIONAL ELECTRIC FIELD DISTRIBUTION
Directory of Open Access Journals (Sweden)
Paţiuc V.I.
2011-04-01
Full Text Available The paper examines a new approach to the finite volume method, used to calculate the electric field in a spatially homogeneous three-dimensional environment. A Dirichlet problem is formulated, with the computational grid built on a space partition known as Delaunay triangulation, together with the use of Voronoi cells. A numerical algorithm is proposed for calculating the potential and electric field strength in the space formed by a cylinder placed in air. An algorithm and software were developed for the case in which a potential is assigned on the inner surface of the cylinder while zero potential is assigned on the outer surface and the bottom of the cylinder. Results are presented for the calculated spatial distributions of the potential and the electric field strength.
International Nuclear Information System (INIS)
Montaudon, M.; Laffon, E.; Berger, P.; Corneloup, O.; Latrabe, V.; Laurent, F.
2006-01-01
This study compared a three-dimensional volumetric threshold-based method to a two-dimensional Simpson's rule based short-axis multiplanar method for measuring right (RV) and left ventricular (LV) volumes, stroke volumes, and ejection fraction using electrocardiography-gated multidetector computed tomography (MDCT) data sets. End-diastolic volume (EDV) and end-systolic volume (ESV) of RV and LV were measured independently and blindly by two observers from contrast-enhanced MDCT images using commercial software in 18 patients. For RV and LV the three-dimensionally calculated EDV and ESV values were smaller than those provided by two-dimensional short axis (10%, 5%, 15% and 26% differences respectively). Agreement between the two methods was found for LV (EDV/ESV: r=0.974/0.910, ICC=0.905/0.890) but not for RV (r=0.882/0.930, ICC=0.663/0.544). Measurement errors were significant only for EDV of LV using the two-dimensional method. Similar reproducibility was found for LV measurements, but the three-dimensional method provided greater reproducibility for RV measurements than the two-dimensional method. The threshold-based three-dimensional method provides reproducible cardiac ventricular volume measurements, comparable to those obtained using the short-axis Simpson-based method. (orig.)
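Simpson's method, as used in cardiac imaging, estimates a ventricular volume by summing the areas of stacked short-axis slices. A quick sanity check on a synthetic ellipsoidal "ventricle" (the semi-axes are arbitrary, and a real workflow would measure slice diameters from images) recovers the analytic ellipsoid volume:

```python
import numpy as np

A, B, C = 2.0, 1.5, 3.0     # ellipsoid semi-axes in cm (hypothetical ventricle)
n = 200                     # number of short-axis slices
dz = 2*C/n                  # slice thickness
z = (np.arange(n) + 0.5)*dz - C                    # slice mid-heights
scale = np.sqrt(np.clip(1 - (z/C)**2, 0, None))    # slice shrink factor
areas = np.pi * (A*scale) * (B*scale)              # elliptical slice areas
V = areas.sum() * dz                               # Simpson-style disc summation
V_exact = 4/3 * np.pi * A * B * C                  # analytic ellipsoid volume
```

With 200 slices the disc summation agrees with the analytic volume to well under 0.1%; the clinical comparison in the abstract is about how the slice contours themselves are obtained, not about this summation step.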
Factors affecting volume calculation with single photon emission tomography (SPECT) method
International Nuclear Information System (INIS)
Liu, T.H.; Lee, K.H.; Chen, D.C.P.; Ballard, S.; Siegel, M.E.
1985-01-01
Several factors may influence the calculation of absolute volumes (VL) from SPECT images, and their effects must be established to optimize the technique. The authors investigated the effects of the following on VL calculation: percentage of background (BG) subtraction, reconstruction filters, sample activity, angular sampling and edge detection methods. Transaxial images of a liver-trunk phantom filled with Tc-99m from 1 to 3 μCi/cc were obtained in a 64x64 matrix with a Siemens Rota Camera and MDS computer. Different reconstruction filters, including Hanning 20, 32, 64 and Butterworth 20, 32, were used. Angular sampling was performed in 3 and 6 degree increments. ROIs were drawn manually and with an automatic edge detection program around the image after BG subtraction. VLs were calculated by multiplying the number of pixels within the ROI by the slice thickness and the x- and y-calibrations of each pixel. A slice thickness of one or two pixels was applied in the calculation. An inverse correlation was found between the calculated VL and the percentage of BG subtraction (r=0.99 for 1, 2 and 3 μCi/cc activity). Based on the authors' linear regression analysis, the correct liver VL was measured with about 53% BG subtraction. The reconstruction filters, slice thickness and angular sampling had only minor effects on the calculated phantom volumes. Detection of the ROI automatically by the computer was not as accurate as the manual method. The authors conclude that the percentage of BG subtraction appears to be the most important factor affecting the VL calculation. With good quality control and appropriate reconstruction factors, correct VL calculations can be achieved with SPECT.
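The volume computation described above is a pixel-counting product, and the percentage of background subtraction acts as an intensity threshold on the ROI. A sketch on a synthetic phantom slice (the count levels, pixel calibrations and slice thickness are assumed values, not the study's):

```python
import numpy as np

# Synthetic 64x64 transaxial slice: a uniform "liver" disc over background counts
img = np.full((64, 64), 10.0)                 # background level
yy, xx = np.mgrid[:64, :64]
liver = (yy - 32)**2 + (xx - 32)**2 <= 20**2  # true organ region
img[liver] = 100.0

pct_bg = 0.53                                  # fraction of max subtracted (~53% per the study)
roi = (img - pct_bg*img.max()) > 0             # pixels surviving background subtraction

pix_x = pix_y = 0.6                            # cm per pixel (assumed calibration)
slice_thk = 1.2                                # cm slice thickness (assumed)
VL = roi.sum() * pix_x * pix_y * slice_thk     # absolute volume of this slice's ROI
```

On this noiseless phantom the 53% threshold recovers the organ region exactly; with too little subtraction, background pixels leak into the ROI and inflate VL, which is the inverse correlation the study reports.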
Directory of Open Access Journals (Sweden)
RAFAEL DAIBERT DE SOUZA MOTTA
2018-02-01
Full Text Available ABSTRACT Objectives: to assess the degree of patient satisfaction after undergoing breast augmentation and compare three different, easy, inexpensive and universal methods of preoperative choice of breast implant volume. Methods: a prospective study was carried out at University Hospital Pedro Ernesto of State University of Rio de Janeiro, in 94 women from Rio de Janeiro, aged 18 to 49 years, submitted to breast augmentation mammaplasty with breast implant due to hypomastia. All implants were textured, with a round base and high projection and were introduced into the retroglandular space through an inframammary access. The patients were divided into three groups: Control, Silicone and MamaSize®, with 44, 25 and 25 patients, respectively. Satisfaction questionnaires were applied in the pre and postoperative periods by the same evaluator, through the visual analogue scale, in which ‘0’ meant very unsatisfied and ‘100’ very satisfied for the four variables: shape, size, symmetry and consistency. The degree of satisfaction with the surgical scar was also assessed in the postoperative period. Results: when the preoperative and postoperative satisfaction levels were compared, there was a difference in all variables for the three groups, with statistical significance. However, when the postoperative data were compared with each other, there was no significant difference. The degree of satisfaction with the surgical scar was high. Conclusion: the augmentation mammaplasty with breast implant had a high index of satisfaction among patients. However, there was no difference in the degree of satisfaction in the postoperative period between the three methodologies of breast volume measurement.
Rakhmangulov, Aleksandr; Muravev, Dmitri; Mishkurov, Pavel
2016-11-01
Operative data reception on the location and movement of railcars is significant due to the constantly growing requirements for timely and safe transportation. A technical solution for improving the efficiency of data collection on rail rolling stock is the implementation of an identification system. Nowadays, there are several such systems, distinguished by working principle. In the authors' opinion, the most promising for rail transportation is RFID technology, which proposes equipping the railway tracks with stationary data-reading points (RFID readers) that read onboard sensors on the railcars. However, regardless of the specific type and manufacturer of these systems, their implementation involves significant financing costs for large industrial rail transport systems owning an extensive network of special railway tracks with a large number of stations and loading areas. To reduce the investment costs of creating an identification system for rolling stock on the special railway tracks of industrial enterprises, a method has been developed based on the idea of priority installation of RFID readers on railway hauls where rail traffic volumes are uneven in structure and power and whose parameters are difficult or impossible to predict on the basis of existing data in an information system. To select the optimal locations of RFID readers, a mathematical model of the staged installation of such readers has been developed, depending on the non-uniformity value of the rail traffic volumes passing through specific railway hauls. As a result of this approach, installation of numerous RFID readers at all station tracks and loading areas of industrial railway stations might not be necessary, which reduces the total cost of rolling stock identification and supports the implementation of the method for optimal management of the transportation process.
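The staged-installation idea can be caricatured as a priority rule: readers go first to the hauls whose traffic volumes are most non-uniform. The sketch below is an illustrative greedy selection under that rule, not the authors' mathematical model; the haul names and non-uniformity scores are invented:

```python
# Hypothetical non-uniformity scores of rail traffic volume per railway haul
scores = {"haul_A": 0.82, "haul_B": 0.15, "haul_C": 0.67, "haul_D": 0.40}
budget = 2  # RFID readers available in this installation stage (assumed)

# Greedy stage: equip the hauls with the highest non-uniformity first
chosen = sorted(scores, key=scores.get, reverse=True)[:budget]
```

Later stages would re-score the remaining hauls as newly installed readers make their traffic predictable from the information system, shrinking the set that still needs hardware.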
Energy Technology Data Exchange (ETDEWEB)
Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)
2016-05-15
The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and
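A majority-vote consensus over binary segmentation masks is straightforward to express: a voxel is labelled foreground when at least two of the three methods agree. The sketch below uses toy 3x3 masks, not clinical data or the Artiview implementation:

```python
import numpy as np

# Three hypothetical binary segmentations of the same lesion
seg1 = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 1]])
seg2 = np.array([[0, 1, 1], [1, 1, 1], [0, 0, 0]])
seg3 = np.array([[0, 0, 1], [0, 1, 1], [0, 1, 1]])

stack = np.stack([seg1, seg2, seg3])
# Majority vote: foreground where at least 2 of the 3 masks agree
consensus = (stack.sum(axis=0) >= 2).astype(int)
```

The STAPLE algorithm compared in the study goes further, estimating per-rater sensitivity and specificity via EM instead of weighting every method equally; the study found no significant difference between the two on these datasets.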
Finite volume multigrid method of the planar contraction flow of a viscoelastic fluid
Moatssime, H. Al; Esselaoui, D.; Hakim, A.; Raghay, S.
2001-08-01
This paper reports on a numerical algorithm for the steady flow of viscoelastic fluid. The conservative and constitutive equations are solved using the finite volume method (FVM) with a hybrid scheme for the velocities and first-order upwind approximation for the viscoelastic stress. A non-uniform staggered grid system is used. The iterative SIMPLE algorithm is employed to relax the coupled momentum and continuity equations. The non-linear algebraic equations over the flow domain are solved iteratively by the symmetrical coupled Gauss-Seidel (SCGS) method. In both, the full approximation storage (FAS) multigrid algorithm is used. An Oldroyd-B fluid model was selected for the calculation. Results are reported for planar 4:1 abrupt contraction at various Weissenberg numbers. The solutions are found to be stable and smooth. The solutions show that at high Weissenberg number the domain must be long enough. The convergence of the method has been verified with grid refinement. All the calculations have been performed on a PC equipped with a Pentium III processor at 550 MHz.
Response matrix of regular moderator volumes with 3He detector using Monte Carlo methods
International Nuclear Information System (INIS)
Baltazar R, A.; Vega C, H. R.; Ortiz R, J. M.; Solis S, L. O.; Castaneda M, R.; Soto B, T. G.; Medina C, D.
2017-10-01
In the last three decades the use of Monte Carlo methods for the estimation of physical phenomena associated with the interaction of radiation with matter has increased considerably, owing to the growth in computing capabilities and the reduction of computer prices. Monte Carlo methods allow modeling and simulating real systems before their construction, saving time and costs. The interaction mechanisms between neutrons and matter are diverse and range from elastic scattering to nuclear fission; to facilitate neutron detection, it is necessary to moderate the neutrons until they reach thermal equilibrium with the medium at standard conditions of pressure and temperature, a state in which the total cross section of {sup 3}He is large. The objective of the present work was to estimate the response matrix of a {sup 3}He proportional detector using regular volumes of moderator through Monte Carlo methods. Monoenergetic neutron sources with energies of 10{sup -9} to 20 MeV and polyethylene moderators of different sizes were used. The calculations were made with the MCNP5 code; the number of histories for each detector-moderator combination was large enough to obtain errors of less than 1.5%. We found that for small moderators the highest response is obtained for lower energy neutrons; when increasing the moderator dimension, the response decreases for lower energy neutrons and increases for higher energy neutrons. The total sum of the responses of each moderator yields a response close to a constant function. (Author)
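The flavor of such Monte Carlo estimates can be conveyed by the simplest transport problem: uncollided transmission through a homogeneous slab, where sampled exponential free paths reproduce exp(−Σt·d). This is far simpler than the MCNP5 response-matrix calculation above, and the cross section and thickness are arbitrary choices:

```python
import math
import random

random.seed(42)
sigma_t = 0.5      # total macroscopic cross section, 1/cm (assumed)
thickness = 4.0    # slab thickness, cm (assumed)
N = 200_000        # number of histories

# Sample an exponential free path for each neutron; count those
# whose first collision site lies beyond the slab (uncollided transmission).
transmitted = sum(
    1 for _ in range(N)
    if -math.log(random.random())/sigma_t > thickness
)
estimate = transmitted / N
exact = math.exp(-sigma_t*thickness)   # analytic uncollided transmission
```

The statistical error scales as 1/sqrt(N), which is why the study ran enough histories per detector-moderator combination to keep errors below 1.5%.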
Coupled Finite Volume and Finite Element Method Analysis of a Complex Large-Span Roof Structure
Szafran, J.; Juszczyk, K.; Kamiński, M.
2017-12-01
The main goal of this paper is to present coupled Computational Fluid Dynamics and structural analysis for the precise determination of wind impact on internal forces and deformations of structural elements of a long-span roof structure. The Finite Volume Method (FVM) serves for the solution of the fluid flow problem to model the air flow around the structure; its results are applied in turn as the boundary tractions in the Finite Element Method structural solution of the linear elastostatics problem with small deformations. The first part is carried out with the use of the ANSYS 15.0 computer system, whereas the FEM system Robot supports stress analysis in particular roof members. A comparison of the wind pressure distribution throughout the roof surface shows some differences with respect to that available in engineering design codes such as Eurocode, which deserves separate further numerical studies. Coupling of these two separate numerical techniques appears promising in view of future computational models of stochastic nature in large-scale structural systems based on the stochastic perturbation method.
Directory of Open Access Journals (Sweden)
Ye. S. Sherina
2014-01-01
Full Text Available This research is aimed at studying the peculiarities that arise in numerical simulation of the electrical impedance tomography (EIT) problem. Static EIT image reconstruction is sensitive to measurement noise and approximation error. Special consideration has been given to reducing the approximation error, which originates from numerical implementation drawbacks. This paper presents in detail two numerical approaches for solving the EIT forward problem. The finite volume method (FVM) on an unstructured triangular mesh is introduced. For comparison with this approach, a forward solver based on the finite element method (FEM), which has gained the most popularity among researchers, was implemented. The calculated potential distribution with the assumed initial conductivity distribution has been compared to the analytical solution of a test Neumann boundary problem and to the results of problem simulation by means of the ANSYS FLUENT commercial software. Two approaches to linearized EIT image reconstruction are discussed. Reconstruction of the conductivity distribution is an ill-posed problem, typically requiring a large amount of computation and resolved by minimization techniques. The objective function to be minimized is constructed from the measured voltage and the calculated boundary voltage on the electrodes. A classical modified Newton-type iterative method and the stochastic differential evolution method are employed. A software package has been developed for the problem under investigation. Numerical tests were conducted on simulated data. The obtained results could be helpful to researchers tackling the hardware and software issues for medical applications of EIT.
Hybrid finite volume/ finite element method for radiative heat transfer in graded index media
Zhang, L.; Zhao, J. M.; Liu, L. H.; Wang, S. Y.
2012-09-01
Rays propagate along curved paths determined by the Fermat principle in a graded index medium. The radiative transfer equation in graded index medium (GRTE) contains two specific redistribution terms (with partial derivatives to the angular coordinates) accounting for the effect of the curved ray path. In this paper, the hybrid finite volume with finite element method (hybrid FVM/FEM) (P.J. Coelho, J. Quant. Spectrosc. Radiat. Transf., vol. 93, pp. 89-101, 2005) is extended to solve the radiative heat transfer in two-dimensional absorbing-emitting-scattering graded index media, in which the spatial discretization is carried out using a FVM, while the angular discretization is by a FEM. The FEM angular discretization is demonstrated to be preferable in dealing with the redistribution terms in the GRTE. Two stiff matrix assembly schemes of the angular FEM discretization, namely, the traditional assembly approach and a new spherical assembly approach (assembly on the unit sphere of the solid angular space), are discussed. The spherical assembly scheme is demonstrated to give better results than the traditional assembly approach. The predicted heat flux distributions and temperature distributions in radiative equilibrium are determined by the proposed method and compared with the results available in other references. The proposed hybrid FVM/FEM method can predict the radiative heat transfer in absorbing-emitting-scattering graded index medium with good accuracy.
Hybrid finite volume/ finite element method for radiative heat transfer in graded index media
International Nuclear Information System (INIS)
Zhang, L.; Zhao, J.M.; Liu, L.H.; Wang, S.Y.
2012-01-01
Rays propagate along curved paths determined by the Fermat principle in a graded index medium. The radiative transfer equation in graded index medium (GRTE) contains two specific redistribution terms (with partial derivatives to the angular coordinates) accounting for the effect of the curved ray path. In this paper, the hybrid finite volume with finite element method (hybrid FVM/FEM) (P.J. Coelho, J. Quant. Spectrosc. Radiat. Transf., vol. 93, pp. 89-101, 2005) is extended to solve the radiative heat transfer in two-dimensional absorbing-emitting-scattering graded index media, in which the spatial discretization is carried out using a FVM, while the angular discretization is by a FEM. The FEM angular discretization is demonstrated to be preferable in dealing with the redistribution terms in the GRTE. Two stiff matrix assembly schemes of the angular FEM discretization, namely, the traditional assembly approach and a new spherical assembly approach (assembly on the unit sphere of the solid angular space), are discussed. The spherical assembly scheme is demonstrated to give better results than the traditional assembly approach. The predicted heat flux distributions and temperature distributions in radiative equilibrium are determined by the proposed method and compared with the results available in other references. The proposed hybrid FVM/FEM method can predict the radiative heat transfer in absorbing-emitting-scattering graded index medium with good accuracy.
International Nuclear Information System (INIS)
Avezova, N.R.; Avezov, R.R.
2015-01-01
A new non-contact method of determining the average working-surface temperature of plate-type radiation-absorbing thermal exchange panels (RATEPs) of flat solar collectors (FSCs) for heating a heat-transfer fluid (HTF) is suggested on the basis of the results of thermal tests under full-scale quasistationary conditions. (authors)
Solution of the square lid-driven cavity flow of a Bingham plastic using the finite volume method
Syrakos, Alexandros; Georgiou, Georgios C.; Alexandrou, Andreas N.
2016-01-01
We investigate the performance of the finite volume method in solving viscoplastic flows. The creeping square lid-driven cavity flow of a Bingham plastic is chosen as the test case and the constitutive equation is regularised as proposed by Papanastasiou [J. Rheol. 31 (1987) 385-404]. It is shown that the convergence rate of the standard SIMPLE pressure-correction algorithm, which is used to solve the algebraic equation system that is produced by the finite volume discretisation, severely det...
International Nuclear Information System (INIS)
Wertz, D.L.; Bissell, M.
1994-01-01
X-ray characterizations of coals and coal products have been carried out for many years. Hirsch and Cartz measured the diffraction from several coals over the reciprocal space region from s = 0.12 Å{sup -1} to 7.5 Å{sup -1}, where s = (4π/λ) sin Θ. In these studies, a 9 cm powder camera was used to study the high angle region, and a transmission-type focusing camera equipped with a LiF monochromator was used for the low angle measurements. They reported that the height of the graphene peak measured for each coal is proportional to the % carbon in the coal. Hirsch also suggested that the ontyberem anthracite has a lamellar diameter of ca. 16 Å, corresponding to an aromatic lamella of ca. C{sub 87}. For coals with lower carbon content, Hirsch proposed much smaller lamellae: C{sub 19} for a coal with 80% carbon, and C{sub 24} for a coal with 89% carbon. The subject coal for this study is a meta-anthracite derived from the Portsmouth, RI mine. The Narragansett Basin contains anthracite and meta-anthracite coals of Pennsylvanian age. The Basin was a tectonically active non-marine coal-forming basin which has been impacted by several tectonic events. Because of the importance placed by coal scientists on correctly characterizing the nature of the micro-level structural clusters in coals, and because of improvements in both x-ray experimentation capabilities and computing power, we have measured the x-ray diffraction and scattering produced from irradiation of this meta-anthracite coal, which contains about 94% aromatic carbon. The goal of our study is to determine the intra-planar and, where possible, inter-planar structural details of coals. To accomplish this goal we have utilized the methods normally used for the molecular analysis of non-crystalline condensed phases such as liquids, solutions, and amorphous solids. Reported herein are the results obtained from the high angle x-ray analysis of this coal.
International Nuclear Information System (INIS)
Krebs, W.; Erbel, R.; Schweizer, P.; Richter, H.A.; Massberg, I.; Meyer, J.; Effert, S.; Henn, G.
1982-01-01
The irregular and complex shape of the right ventricle is the reason why no accurate method for right ventricular volume determination exists. A new method for right ventricular volume determination, developed particularly for two-dimensional echocardiography and called the subtraction method, was compared with the pyramid method and Simpson's rule. The partial volume of the left ventricle and septum was subtracted from the total volume of the right and left ventricles including the interventricular septum, yielding the right ventricular volume. Total and partial volumes were calculated with computer assistance using biplane methods, preferably Simpson's rule. The method was validated with thin-walled silicone-rubber model hearts of the left and right ventricles. Two orthogonal long-axis planes were filmed by radiography or scanned in a water bath by two-dimensional echocardiography, equivalent to the RAO and LAO projections of cineangiocardiograms or to the four- and two-chamber views of apical two-dimensional echocardiograms. The major axes of the elliptical sections summed by Simpson's rule were derived from the LAO projection and the four-chamber view, respectively; the minor axes were approximated from the RAO projection and the two-chamber view. For the comparison of directly measured volume and two-dimensional echocardiographically determined volume, the regression equation was y = 1.01x - 3.2, the correlation coefficient r = 0.977, and the standard error of estimate (SEE) ±10.5 ml. For radiography, the regression equation was y = 0.909x + 13.3, r = 0.983, SEE = ±8.0 ml. For the pyramid method and Simpson's rule, higher standard errors and lower correlation coefficients were found. Between radiography and two-dimensional echocardiography, mean differences of 4.3 ± 13.2 ml using the subtraction method, -10.2 ± 22.9 ml using the pyramid method, and -0.6 ± 18.5 ml using Simpson's rule were calculated for right ventricular volume measurements. (orig./APR)
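The biplane disc-summation (Simpson's rule) volume and the subtraction step described above can be sketched as follows; the function names and slice-diameter inputs are illustrative assumptions, not the authors' implementation:

```python
import math

def biplane_simpson_volume(diams_plane1, diams_plane2, long_axis_length):
    """Biplane disc-summation (Simpson's rule) ventricular volume.

    Each slice is modelled as an elliptical disc whose two axes are
    the diameters measured in two orthogonal long-axis views.
    """
    n = len(diams_plane1)
    assert len(diams_plane2) == n
    h = long_axis_length / n          # slice thickness
    return sum(math.pi / 4.0 * a * b * h
               for a, b in zip(diams_plane1, diams_plane2))

def rv_volume_by_subtraction(total_volume, lv_plus_septum_volume):
    """Subtraction method: RV = (RV + LV + septum) - (LV + septum)."""
    return total_volume - lv_plus_septum_volume
```

For a cylinder (constant diameter 4, length 10) the disc summation is exact: 20 slices give π/4·16·10 = 40π.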
Energy Technology Data Exchange (ETDEWEB)
Marcondes, Francisco [Federal University of Ceara, Fortaleza (Brazil). Dept. of Metallurgical Engineering and Material Science], e-mail: marcondes@ufc.br; Varavei, Abdoljalil; Sepehrnoori, Kamy [The University of Texas at Austin (United States). Petroleum and Geosystems Engineering Dept.], e-mails: varavei@mail.utexas.edu, kamys@mail.utexas.edu
2010-07-01
An element-based finite-volume approach in conjunction with unstructured grids for naturally fractured compositional reservoir simulation is presented. In this approach, both the discrete fracture and the matrix mass balances are taken into account without any additional models to couple the matrix and discrete fractures. The mesh, for two dimensional domains, can be built of triangles, quadrilaterals, or a mix of these elements. However, due to the available mesh generator to handle both matrix and discrete fractures, only results using triangular elements will be presented. The discrete fractures are located along the edges of each element. To obtain the approximated matrix equation, each element is divided into three sub-elements and then the mass balance equations for each component are integrated along each interface of the sub-elements. The finite-volume conservation equations are assembled from the contribution of all the elements that share a vertex, creating a cell vertex approach. The discrete fracture equations are discretized only along the edges of each element and then summed up with the matrix equations in order to obtain a conservative equation for both matrix and discrete fractures. In order to mimic real field simulations, the capillary pressure is included in both matrix and discrete fracture media. In the implemented model, the saturation field in the matrix and discrete fractures can be different, but the potential of each phase in the matrix and discrete fracture interface needs to be the same. The results for several naturally fractured reservoirs are presented to demonstrate the applicability of the method. (author)
Methods for quantitative measurement of tooth wear using the area and volume of virtual model cusps.
Kim, Soo-Hyun; Park, Young-Seok; Kim, Min-Kyoung; Kim, Sulhee; Lee, Seung-Pyo
2018-04-01
Clinicians must examine tooth wear to make a proper diagnosis. However, qualitative methods of measuring tooth wear have many disadvantages. Therefore, this study aimed to develop and evaluate quantitative parameters using the cusp area and volume of virtual dental models. The subjects of this study were the same virtual models that were used in our former study. The same age group classification and new tooth wear index (NTWI) scoring system were also reused. A virtual occlusal plane was generated with the highest cusp points and lowered vertically from 0.2 to 0.8 mm to create offset planes. The area and volume of each cusp was then measured and added together. In addition to the former analysis, the differential features of each cusp were analyzed. The scores of the new parameters differentiated the age and NTWI groups better than those analyzed in the former study. The Spearman ρ coefficients between the total area and the area of each cusp also showed higher scores at the levels of 0.6 mm (0.6A) and 0.8A. The mesiolingual cusp (MLC) showed a statistically significant difference (P<0.01) from the other cusps in the paired t-test. Additionally, the MLC exhibited the highest percentage of change at 0.6A in some age and NTWI groups. Regarding the age groups, the MLC showed the highest score in groups 1 and 2. For the NTWI groups, the MLC was not significantly different in groups 3 and 4. These results support the proposal that the lingual cusp exhibits rapid wear because it serves as a functional cusp. Although this study has limitations due to its cross-sectional nature, it suggests better quantitative parameters and analytical tools for the characteristics of cusp wear.
Evaluation of the reconstruction method and effect of partial volume in brain scintiscanning
International Nuclear Information System (INIS)
Pinheiro, Monica Araujo
2016-01-01
Alzheimer's disease is a neurodegenerative disorder in which a progressive and irreversible destruction of neurons occurs. According to the World Health Organization (WHO), 35.6 million people are living with dementia, and it is recommended that governments prioritize early diagnosis techniques. Laboratory and psychological tests for cognitive assessment are conducted and further complemented by neurological imaging from nuclear medicine exams in order to establish an accurate diagnosis. Image quality evaluation and the effects of the reconstruction process are important tools in clinical routine. In the present work, these quality parameters were studied, along with the partial volume effect (PVE) for lesions of different sizes and geometries, an effect attributed to the limited resolution of the equipment. In dementia diagnosis, this effect can be confused with uptake losses due to cerebral cortex atrophy. The evaluation was conducted with two phantoms of different shapes, as suggested by (a) the American College of Radiology (ACR) and (b) the National Electrical Manufacturers Association (NEMA), for the calculation of contrast, contrast-to-noise ratio (CNR) and recovery coefficient (RC) versus lesion shape and size. The technetium-99m radionuclide was used in a local brain scintigraphy protocol, for lesion-to-background ratios of 2:1, 4:1, 6:1, 8:1 and 10:1. Fourteen reconstruction methods were used for each concentration, applying different filters and algorithms. From the analysis of all image properties, the conclusion is that the predominant effect is the partial volume effect, leading to measurement errors of more than 80%. Furthermore, it was demonstrated that the most effective reconstruction method is FBP with a Metz filter, providing better contrast and contrast-to-noise ratio results. In addition, this method shows the best recovery coefficient correction for each lesion. The ACR phantom showed the best results, attributed to a more precise reconstruction of a cylinder, which does not
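The contrast, CNR and recovery coefficient figures of merit named above have standard definitions for hot-lesion phantom studies; a minimal sketch (function names are ours, and a uniform background region is assumed):

```python
def contrast(lesion_mean, background_mean):
    """Relative contrast of a hot lesion over a uniform background."""
    return (lesion_mean - background_mean) / background_mean

def cnr(lesion_mean, background_mean, background_std):
    """Contrast-to-noise ratio: lesion excess over background noise."""
    return (lesion_mean - background_mean) / background_std

def recovery_coefficient(measured_contrast, true_contrast):
    """RC tends to 1 for large lesions and falls off as the lesion
    size approaches the scanner resolution (partial volume effect)."""
    return measured_contrast / true_contrast
```

For a 4:1 lesion-to-background ratio the true contrast is 3; if partial volume halves the measured contrast, the recovery coefficient is 0.5.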
de Boer-Wilzing, Vera G.; Bolt, Arjen; Geertzen, Jan H.; Emmelot, Cornelis H.; Baars, Erwin C.; Dijkstra, Pieter U.
de Boer-Wilzing VG, Bolt A, Geertzen JH, Emmelot CH, Baars EC, Dijkstra PU. Variation in results of volume measurements of stumps of lower-limb amputees: a comparison of 4 methods. Arch Phys Med Rehabil 2011;92:941-6. Objective: To analyze the reliability of 4 methods (water immersion,
A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.
Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R
2017-07-01
The conventional radio-analytical technique used for estimation of Pu-isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis takes nearly 3-4 days to complete. Often, excreta analysis results are required urgently, particularly under repeat and incidental/emergency situations. Therefore, there is a need to reduce the analysis time for the estimation of Pu-isotopes in bioassay samples. This paper gives the details of the standardization of a rapid method for estimation of Pu-isotopes in urine samples using a multi-purpose centrifuge and TEVA resin, followed by alpha spectrometry. The rapid method involves oxidation of urine samples and co-precipitation of plutonium along with calcium phosphate, followed by sample preparation using a high volume centrifuge and separation of Pu using TEVA resin. The Pu fraction was electrodeposited and its activity estimated by alpha spectrometry using ²³⁶Pu tracer recovery. Ten routine urine samples of radiation workers were analyzed, and consistent radiochemical tracer recovery was obtained in the range 47-88%, with a mean and standard deviation of 64.4% and 11.3%, respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day). Copyright © 2017 Elsevier Ltd. All rights reserved.
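The tracer-recovery bookkeeping behind such an analysis can be sketched as below; this is the standard isotope-dilution arithmetic of alpha spectrometry, not the paper's exact procedure, and all numbers in the assertions are illustrative:

```python
def tracer_recovery(tracer_counts, count_time_s, alpha_efficiency, tracer_added_bq):
    """Fraction of the added tracer (e.g. 236Pu) recovered through
    the radiochemical separation: measured activity / added activity."""
    measured_bq = tracer_counts / (count_time_s * alpha_efficiency)
    return measured_bq / tracer_added_bq

def analyte_activity_bq(analyte_counts, tracer_counts, tracer_added_bq):
    """Isotope-dilution result: detector efficiency and chemical
    recovery cancel in the analyte-to-tracer count ratio."""
    return analyte_counts / tracer_counts * tracer_added_bq
```

Because recovery cancels in the count ratio, the 47-88% spread of recoveries reported above does not bias the final activity estimate, only its counting statistics.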
International Nuclear Information System (INIS)
Broadhead, B.L.; Hopper, C.M.; Childs, R.L.; Parks, C.V.
1999-01-01
This report presents the application of sensitivity and uncertainty (S/U) analysis methodologies to the code/data validation tasks of a criticality safety computational study. Sensitivity and uncertainty analysis methods were first developed for application to fast reactor studies in the 1970s. This work has revitalized and updated the available S/U computational capabilities such that they can be used as prototypic modules of the SCALE code system, which contains criticality analysis tools currently used by criticality safety practitioners. After complete development, simplified tools are expected to be released for general use. The S/U methods presented in this volume are designed to provide a formal means of establishing the range (or area) of applicability for criticality safety data validation studies. The development of parameters that are analogous to the standard trending parameters forms the key to the technique. These parameters are the D parameters, which represent the group-wise differences between sensitivity profiles, and the c_k parameters, which are the correlation coefficients for the calculational uncertainties between systems; each set of parameters gives information on the similarity between pairs of selected systems, e.g., a critical experiment and a specific real-world system (the application).
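The c_k parameter described above is conventionally computed from the group-wise sensitivity profiles of the two systems and the cross-section covariance data; a minimal numpy sketch of that standard definition (an illustration, not the SCALE implementation):

```python
import numpy as np

def ck_correlation(s_app, s_exp, cov):
    """Correlation coefficient c_k between the calculational
    uncertainties of an application and an experiment.

    s_app, s_exp : group-wise sensitivity profiles (1-D arrays)
    cov          : cross-section covariance matrix (groups x groups)
    """
    num = s_app @ cov @ s_exp
    den = np.sqrt((s_app @ cov @ s_app) * (s_exp @ cov @ s_exp))
    return num / den
```

Identical sensitivity profiles give c_k = 1 (fully similar systems); profiles whose uncertainties are driven by uncorrelated data give c_k near 0.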
International Nuclear Information System (INIS)
Onishi, Yuki; Takiyasu, Jumpei; Amaya, Kenji; Yakuwa, Hiroshi; Hayabusa, Keisuke
2012-01-01
Highlights: ► A novel numerical method to analyze time-dependent localized corrosion is developed. ► It takes into account electromigration, mass diffusion, chemical reactions, and moving boundaries. ► Our method perfectly satisfies the conservation of mass and electroneutrality. ► The behavior of typical crevice corrosion is successfully simulated. ► Both verification and validation of our method are carried out. - Abstract: A novel numerical method for time-dependent localized corrosion analysis is presented. Electromigration, mass diffusion, chemical reactions, and moving boundaries are considered in the numerical simulation of localized corrosion of engineering alloys in an underwater environment. Our method combines the finite volume method (FVM) and the voxel method. The FVM is adopted in the corrosion rate calculation so that the conservation of mass is satisfied. A newly developed decoupled algorithm with a projection method is introduced in the FVM to decouple the multiphysics problem into the electrostatic, mass transport, and chemical reaction analyses with electroneutrality maintained. The polarization curves for the corroding metal are used as boundary conditions for the metal surfaces to calculate the corrosion rates. The voxel method is adopted in updating the moving boundaries of cavities without remeshing and mesh-to-mesh solution mapping. Some modifications of the standard voxel method, which represents the boundaries as zigzag-shaped surfaces, are introduced to generate smooth surfaces. Our method successfully reproduces the numerical and experimental results of a capillary electrophoresis problem. Furthermore, the numerical results are qualitatively consistent with the experimental results for several examples of crevice corrosion.
Directory of Open Access Journals (Sweden)
Camille Roussel
2018-05-01
Red blood cells' (RBC) ability to circulate is closely related to their surface area-to-volume ratio. A decrease in this ratio induces a decrease in RBC deformability that can lead to their retention and elimination in the spleen. We recently showed that a subpopulation of “small RBC” with reduced projected surface area accumulates upon storage in blood bank concentrates, but data on the volume of these altered RBC are lacking. So far, single-cell measurement of RBC volume has remained a challenging task achieved by a few sophisticated methods, some of which are subject to potential artifacts. We aimed to develop a reproducible and ergonomic method to assess RBC volume and morphology simultaneously at the single-cell level. We adapted the fluorescence exclusion measurement of volume in nucleated cells to the measurement of RBC volume. This method requires no pre-treatment of the cells and can be performed in physiological or experimental buffer. In addition to RBC volume assessment, brightfield images enabling a precise definition of the morphology and the measurement of projected surface area can be generated simultaneously. We first verified that fluorescence exclusion is precise and reproducible and can quantify volume modifications following morphological changes induced by heating or incubation in a non-physiological medium. We then used the method to characterize RBC stored for 42 days in SAG-M under blood bank conditions. Simultaneous determination of the volume, projected surface area and morphology allowed us to evaluate the surface area-to-volume ratio of individual RBC upon storage. We observed a similar surface area-to-volume ratio in discocytes (D) and echinocytes I (EI), which decreased in EII (7%) and EIII (24%), sphero-echinocytes (SE; 41%) and spherocytes (S; 47%). If RBC dimensions indeed determine the ability of RBC to cross the spleen, these modifications are expected to induce the rapid splenic entrapment of the most morphologically altered RBC.
Numerical Methods in Atmospheric and Oceanic Modelling: The Andre J. Robert Memorial Volume
Rosmond, Tom
Most people, even including some in the scientific community, do not realize how much the weather forecasts they use to guide the activities of their daily lives depend on the very complex mathematics and numerical methods that are the basis of modern numerical weather prediction (NWP). André Robert (1929-1993), to whom Numerical Methods in Atmospheric and Oceanic Modelling is dedicated, had a career that contributed greatly to the growth of NWP and to the role that the atmospheric computer models of NWP play in our society. There are probably no NWP models running anywhere in the world today that do not use numerical methods introduced by Robert, and those of us who work with and use these models every day are indebted to him. The first two chapters of the volume are chronicles of Robert's life and career. The first is a 1987 interview by Harold Ritchie, one of Robert's many protégés and colleagues at the Canadian Atmospheric Environment Service. The interview traces Robert's life from his birth in New York to French Canadian parents, to his emigration to Quebec at an early age, his education and early employment, and his rise in stature as one of the preeminent research meteorologists of our time. An amusing anecdote he relates is his impression of weather forecasts while he was considering his first job as a meteorologist in the early 1950s. A newspaper of the time placed the weather forecast and the daily horoscope side by side, and Robert regarded each as having a similar scientific basis. Thankfully he soon realized there was a difference between the two, and his subsequent career certainly confirmed the distinction.
Energy Technology Data Exchange (ETDEWEB)
Carreira, M
1965-07-01
As a working method for the determination of changes in molecular mass that may occur upon irradiation (pyrolytic-radiolytic decomposition) of polyphenyl reactor coolants, a cryoscopic technique has been developed which combines the basic simplicity of Beckmann's method with some experimental refinements borrowed from the equilibrium methods. A total of 18 runs were made on samples of naphthalene, biphenyl, and the commercial mixtures OM-2 (Progil) and Santowax-R (Monsanto), with an average deviation from the theoretical molecular mass of 0.6%. (Author) 7 refs.
International Nuclear Information System (INIS)
Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.
2009-01-01
In this paper we develop a new method to interpolate large volumes of scattered data, focused mainly on the results of applying mesh-free methods, point methods and particle methods. We use local radial basis functions as the interpolating functions, and an octree as the data structure that accelerates the localization of the data points influencing the interpolated value at a new point. This speeds up the application of scientific visualization techniques for generating images from the large data volumes produced by mesh-free, point and particle methods in the resolution of diverse physical-mathematical models. As an example, the results obtained after applying this method using the local interpolation functions of Shepard are shown. (Author) 22 refs
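A local Shepard interpolant of the kind cited as the example above can be sketched as follows; the brute-force neighbour search stands in for the tree-based localization, and the neighbour count k and power p are illustrative choices:

```python
import numpy as np

def shepard_local(query, points, values, k=8, p=2.0):
    """Local Shepard (inverse-distance-weighted) interpolation
    using only the k nearest scattered data points.  In a real
    large-volume setting a spatial tree would supply the
    neighbours; brute force is used here for brevity.
    """
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]
    d, v = d[idx], values[idx]
    if d[0] < 1e-12:                 # query coincides with a data point
        return v[0]
    w = 1.0 / d ** p
    return np.sum(w * v) / np.sum(w)
```

Shepard weights are a partition of unity, so the interpolant reproduces constant fields exactly and interpolates the data points themselves.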
Energy Technology Data Exchange (ETDEWEB)
Tae, Woo Suk; Lee, Kang Uk; Nam, Eui-Cheol; Kim, Keun Woo [Kangwon National University College of Medicine, Neuroscience Research Institute, Kangwon (Korea); Kim, Sam Soo [Kangwon National University College of Medicine, Neuroscience Research Institute, Kangwon (Korea); Kangwon National University Hospital, Department of Radiology, Kangwon-do (Korea)
2008-07-15
To validate the usefulness of the packages available for automated hippocampal volumetry, we measured hippocampal volumes using one manual and two recently developed automated volumetric methods. The study included T1-weighted magnetic resonance imaging (MRI) of 21 patients with chronic major depressive disorder (MDD) and 20 normal controls. Using coronal turbo field echo (TFE) MRI with a slice thickness of 1.3 mm, the hippocampal volumes were measured using three methods: manual volumetry, surface-based parcellation using FreeSurfer, and individual atlas-based volumetry using IBASPM. In addition, the intracranial cavity volume (ICV) was measured manually. The absolute left hippocampal volume of the patients with MDD measured using all three methods was significantly smaller than the left hippocampal volume of the normal controls (manual P=0.029, FreeSurfer P=0.035, IBASPM P=0.018). After controlling for the ICV, except for the right hippocampal volume measured using FreeSurfer, both measured hippocampal volumes of the patients with MDD were significantly smaller than the measured hippocampal volumes of the normal controls (right manual P=0.019, IBASPM P=0.012; left manual P=0.003, FreeSurfer P=0.010, IBASPM P=0.002). In the intrarater reliability test, the intraclass correlation coefficients (ICCs) were all excellent (manual right 0.947, left 0.934; FreeSurfer right 1.000, left 1.000; IBASPM right 1.000, left 1.000). In the test of agreement between the volumetric methods, the ICCs were right 0.846 and left 0.848 (manual and FreeSurfer), and right 0.654 and left 0.717 (manual and IBASPM). The automated hippocampal volumetric methods showed good agreement with manual hippocampal volumetry, but the volume measured using FreeSurfer was 35% larger and the agreement was questionable with IBASPM. Although the automated methods could detect hippocampal atrophy in the patients with MDD, the results indicate that manual hippocampal volumetry is still the gold standard
29 CFR 779.342 - Methods of computing annual volume of sales.
2010-07-01
... STANDARDS ACT AS APPLIED TO RETAILERS OF GOODS OR SERVICES Exemptions for Certain Retail or Service...) of the Act are specified in terms of the “annual dollar volume of sales” of goods or of services (or... annual dollar volume before deduction of those taxes which are excluded in determining whether the $250...
Mahmoud, Faaiza; Ton, Anthony; Crafoord, Joakim; Kramer, Elissa L.; Maguire, Gerald Q., Jr.; Noz, Marilyn E.; Zeleznik, Michael P.
2000-06-01
The purpose of this work was to evaluate three volumetric registration methods in terms of technique, user-friendliness and time requirements. CT and SPECT data from 11 patients were interactively registered using: a 3D method involving only affine transformation; a mixed 3D-2D non-affine (warping) method; and a 3D non-affine (warping) method. In the first method, representative isosurfaces are generated from the anatomical images. Registration proceeds through translation, rotation, and scaling in all three space variables. The resulting isosurfaces are fused, and quantitative measurements are possible. In the second method, the 3D volumes are rendered co-planar by performing an oblique projection. Corresponding landmark pairs are chosen on matching axial slice sets. A polynomial warp is then applied. This method has undergone extensive validation and was used to evaluate the results. The third method employs visualization tools. The data model allows images to be localized within two separate volumes. Landmarks are chosen on separate slices. Polynomial warping coefficients are generated and data points from one volume are moved to the corresponding new positions. The two landmark methods were the least time-consuming (10 to 30 minutes from start to finish), but did demand a good knowledge of anatomy. The affine method was tedious and required a fair understanding of 3D geometry.
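The affine step of the first method (translation, rotation, and scaling in all three space variables) can be sketched as below; the composition order and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rotation_z(theta):
    """3x3 rotation matrix about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def affine_transform(points, scale, R, t):
    """Scale, then rotate, then translate a set of 3-D points.

    points : (N,3) array; scale : (3,); R : 3x3 rotation; t : (3,)
    """
    return (np.asarray(points) * scale) @ R.T + t
```

An interactive registration loop would adjust `scale`, `R` and `t` until the transformed isosurface of one modality overlays the other.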
A method of estimating inspiratory flow rate and volume from an inhaler using acoustic measurements
International Nuclear Information System (INIS)
Holmes, Martin S; D'Arcy, Shona; O'Brien, Ultan; Reilly, Richard B; Seheult, Jansen N; Geraghty, Colm; Costello, Richard W; Crispino O'Connell, Gloria
2013-01-01
Inhalers are devices employed to deliver medication to the airways in the treatment of respiratory diseases such as asthma and chronic obstructive pulmonary disease. A dry powder inhaler (DPI) is a breath-actuated inhaler that delivers medication in dry powder form. When used correctly, DPIs improve patients' clinical outcomes. However, some patients are unable to reach the peak inspiratory flow rate (PIFR) necessary to fully extract the medication. Presently, clinicians have no reliable method of objectively measuring PIFR in inhalers. In this study, we propose a novel method of estimating PIFR and also the inspiratory capacity (IC) of patients' inhalations from a commonly used DPI, using acoustic measurements. With a recording device, the acoustic signal of 15 healthy subjects using a DPI over a range of varying PIFR and IC values was obtained. Temporal and spectral signal analysis revealed that the inhalation signal contains sufficient information to estimate PIFR and IC. It was found that the average power (P_ave) in the frequency band 300–600 Hz had the strongest correlation with PIFR (R² = 0.9079), while the power in the same frequency band was also highly correlated with IC (R² = 0.9245). This study has several clinical implications, as it demonstrates the feasibility of using acoustics to objectively monitor inhaler use. (paper)
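The band-limited average power used as the acoustic feature above can be sketched with a simple periodogram estimate; this is a generic illustration, not the authors' signal-processing pipeline:

```python
import numpy as np

def band_power(signal, fs, f_lo=300.0, f_hi=600.0):
    """Average power of `signal` in the [f_lo, f_hi] Hz band,
    estimated from a simple periodogram (squared rFFT magnitude)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()
```

A tone inside the 300-600 Hz band produces far more band power than one outside it, which is the property the PIFR regression exploits.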
Soltanmoradi, Elmira; Shokri, Babak
2017-05-01
In this article, the electromagnetic wave scattering from plasma columns with an inhomogeneous electron density distribution is studied by the Green's function volume integral equation method. Given the ready production of such plasmas in the laboratory and their practical application in various technological fields, this study examines the effects of plasma parameters such as the electron density, radius, and pressure on the scattering cross-section of a plasma column. Moreover, the influence of the incident wave frequency on the scattering pattern is demonstrated. Furthermore, the scattering cross-section of a plasma column with an inhomogeneous collision frequency profile is calculated, and the effect of this inhomogeneity is discussed for the first time in this article. These results are especially useful for determining the appropriate conditions for radar cross-section reduction purposes. It is shown that the radar cross-section of a plasma column is reduced more for a larger collision frequency, for a relatively lower plasma frequency, and also for a smaller radius. Furthermore, it is found that the effect of the electron density on the scattering cross-section is more pronounced than the effects of the other plasma parameters. Also, a plasma column with a homogeneous collision frequency provides better shielding than its inhomogeneous counterpart.
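The scattering behaviour described above is governed by the plasma's complex permittivity; below is a minimal sketch of the standard cold collisional (Drude) expression, offered as background rather than as a substitute for the paper's Green's function solver:

```python
import numpy as np

E0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE = 1.602176634e-19    # elementary charge, C
ME = 9.1093837015e-31   # electron mass, kg

def plasma_permittivity(n_e, omega, nu):
    """Cold collisional (Drude) relative permittivity of a plasma,
    exp(-i*omega*t) time convention (losses give Im eps > 0).

    n_e   : electron density, m^-3
    omega : wave angular frequency, rad/s
    nu    : electron collision frequency, s^-1
    """
    omega_p2 = n_e * QE ** 2 / (E0 * ME)
    return 1.0 - omega_p2 / (omega * (omega + 1j * nu))
```

The permittivity vanishes at the plasma frequency for a collisionless column, while a finite collision frequency adds an imaginary (absorptive) part, which is the mechanism behind the radar cross-section reduction discussed above.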
Two-dimensional transient thermal analysis of a fuel rod by finite volume method
Energy Technology Data Exchange (ETDEWEB)
Costa, Rhayanne Yalle Negreiros; Silva, Mário Augusto Bezerra da; Lira, Carlos Alberto de Oliveira, E-mail: ryncosta@gmail.com, E-mail: mabs500@gmail.com, E-mail: cabol@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear
2017-07-01
One of the greatest concerns when studying a nuclear reactor is the guarantee of safe temperature limits throughout the system at all times. The preservation of the core structure, along with the confinement of radioactive material within a controlled system, is the main focus during the operation of a reactor. The purpose of this paper is to present the temperature distribution for a nominal channel of the AP1000 reactor, developed by Westinghouse Co., during steady-state and transient operations. In the analysis, the system was subjected to normal operating conditions and then to blockages of the coolant flow. The time necessary to achieve a new safe stationary state (when possible) is presented. The methodology applied in this analysis was based on a two-dimensional survey accomplished by the application of the Finite Volume Method (FVM). A steady solution is obtained and compared with an analytical solution that disregards axial heat transport, to determine its relevance. The results show the importance of considering axial heat transport in this type of study. A transient analysis shows the behavior of the system when submitted to coolant blockage at the channel's entrance. Three blockages were simulated (10%, 20% and 30%), and the results show that, for a nominal channel, the system can still be considered safe (there is no bubble formation up to that point). (author)
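A one-dimensional explicit finite-volume step for transient conduction with a volumetric heat source illustrates the FVM machinery used above; the paper's survey is two-dimensional, and this reduced sketch with its insulated-end boundary conditions and parameter names is purely illustrative:

```python
import numpy as np

def fvm_transient_1d(T0, k, rho, cp, dx, dt, q_vol, n_steps):
    """Explicit finite-volume update for 1-D transient conduction
    with a uniform volumetric heat source q_vol and insulated ends.

    T0 : initial cell-centre temperatures (1-D array)
    """
    T = np.array(T0, dtype=float)
    alpha = k / (rho * cp)
    assert alpha * dt / dx ** 2 <= 0.5, "explicit stability limit"
    for _ in range(n_steps):
        flux = np.zeros(len(T) + 1)                # face fluxes, W/m^2
        flux[1:-1] = -k * (T[1:] - T[:-1]) / dx    # interior faces
        T += dt / (rho * cp) * ((flux[:-1] - flux[1:]) / dx + q_vol)
    return T
```

With insulated ends and a uniform initial field, energy conservation demands that every cell heat up by exactly q_vol·Δt·n/(ρ·cp), which makes a convenient sanity check on the face-flux bookkeeping.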
A METHOD OF RAPID CULTIVATION OF RADISH SEED PLANTS IN PLASTIC POTS OF SMALL-VOLUME
Directory of Open Access Journals (Sweden)
V. A. Stepanov
2017-01-01
The development of cheap and rapid breeding methods for the lines used in hybrid F1 production is a very pressing task. The study was carried out in a winter glass greenhouse using radish varieties originated at VNIISSOK and breeding lines obtained by crossing components of different origin carrying male sterility. The mother plants were grown in Plantec 64 trays, while the seed plants were grown in plastic pots of 1 liter capacity. Some morphobiological features, such as the small habitus of the seed plant, the smaller number of secondary branches, the absence of further branching and, consequently, the low yield of seeds, were revealed in radish seed plants grown in plastic pots. The period of ontogenesis of radish in the first winter-spring rotation with this cultivation approach was reduced to 92 days. In the second summer-autumn rotation, with additional lighting, the duration of the period of ontogenesis was essentially shorter than in the first rotation. The use of small-volume containers in a winter glass greenhouse to grow radish seed plants has made it possible to produce two generations a year.
International Nuclear Information System (INIS)
Berthe, P.M.
2013-01-01
In the context of nuclear waste repositories, we consider the numerical discretization of the non-stationary convection-diffusion equation. Discontinuous physical parameters and heterogeneous space and time scales lead us to use different space and time discretizations in different parts of the domain. In this work, we choose the discrete duality finite volume (DDFV) scheme in space and the discontinuous Galerkin scheme in time, coupled by an optimized Schwarz waveform relaxation (OSWR) domain decomposition method, because this allows the use of non-conforming space-time meshes. The main difficulty lies in finding an upwind discretization of the convective flux which remains local to a sub-domain and such that the multidomain scheme is equivalent to the monodomain one. These difficulties are first dealt with in the one-dimensional setting, where different discretizations are studied. The chosen scheme introduces a hybrid unknown on the cell interfaces. The idea of upwinding with respect to this hybrid unknown is extended to the DDFV scheme in the two-dimensional setting. The well-posedness of the scheme and of an equivalent multidomain scheme is shown. The latter is solved by an OSWR algorithm, whose convergence is proved. The optimized parameters in the Robin transmission conditions are obtained by studying the continuous or discrete convergence rates. Several test cases, one of which is inspired by nuclear waste repositories, illustrate these results. (author) [fr
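As a much-simplified illustration of one ingredient above, the following sketch solves the convection-diffusion equation with an upwind finite-volume convective flux on a single periodic domain. The DDFV space discretization, the discontinuous Galerkin time stepping, and the OSWR coupling of the thesis are not reproduced; the velocity, diffusivity, and initial pulse are assumptions:

```python
import numpy as np

# 1-D finite-volume convection-diffusion, upwind convective flux,
# explicit Euler in time, periodic domain (illustrative sketch).
L, N = 1.0, 100
dx = L / N
u, D = 1.0, 1e-3                   # advection velocity, diffusivity (assumed)
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # CFL-limited time step

xc = (np.arange(N) + 0.5) * dx
c = np.exp(-(xc - 0.2) ** 2 / 0.005)      # initial Gaussian pulse
mass0 = c.sum() * dx

for _ in range(200):
    F_conv = u * c                                  # upwind: flux carries cell value
    F_diff = -D * (np.roll(c, -1) - c) / dx         # diffusive flux at right faces
    F = F_conv + F_diff
    c = c - dt / dx * (F - np.roll(F, 1))           # conservative FV update
```

Because the update is written in flux form, the total mass is conserved exactly, which is the discrete property the thesis works to preserve across sub-domain interfaces.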
Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting
Ingram, G. J.
Electrostatic modeling of spacecraft has wide-reaching applications, such as detumbling space debris in the geosynchronous Earth orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz-augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-real-time characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models, tuned using force and torque comparisons with commercially available finite element software, are subject to the modeled probe size and the numerical errors of that software. This work first investigates fitting VMSM models to Surface-MSM (SMSM) generated electric field data, removing the modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored, and four-degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and the convergence properties of selected models are qualitatively analyzed. Despite the complex, non-symmetric spacecraft geometry, elegantly simple two-sphere VMSM solutions provide force and torque fits within a few percent.
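The electric-field matching idea can be sketched as a linear least-squares problem: each candidate sphere contributes a field proportional to its charge, so sampled field data determine the charges directly. The sketch below fixes the sphere positions and fits only the charges (the paper also optimizes placement and radii); the geometry, charge values, and sample points are all assumptions:

```python
import numpy as np

k = 8.99e9  # Coulomb constant, N m^2 / C^2

def efield(points, charges, positions):
    """Electric field at sample points from point charges (superposition)."""
    E = np.zeros_like(points)
    for q, p in zip(charges, positions):
        r = points - p
        E += k * q * r / np.linalg.norm(r, axis=1, keepdims=True) ** 3
    return E

rng = np.random.default_rng(0)
# "truth" model: 3 point charges standing in for a detailed SMSM solution
true_pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
true_q = np.array([1e-6, -2e-6, 0.5e-6])

# field samples on a sphere of radius 5 m around the object
pts = rng.normal(size=(200, 3))
pts = 5.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
E_data = efield(pts, true_q, true_pos)

# fit charges at assumed sphere locations by linear least squares:
# each column of A is the field of a unit charge at one location
fit_pos = true_pos  # for this sketch, use the true locations
A = np.column_stack([efield(pts, [1.0], [p]).ravel() for p in fit_pos])
q_fit, *_ = np.linalg.lstsq(A, E_data.ravel(), rcond=None)
```

Because the field is linear in the charges, the fit is a single least-squares solve; the harder, nonlinear part of the VMSM problem is choosing the sphere positions and radii, which this sketch sidesteps.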
A multi-step dealloying method to produce nanoporous gold with no volume change and minimal cracking
Energy Technology Data Exchange (ETDEWEB)
Sun Ye [Department of Chemical and Materials Engineering, University of Kentucky, 177 F. Paul Anderson Tower, Lexington, KY 40506 (United States); Balk, T. John [Department of Chemical and Materials Engineering, University of Kentucky, 177 F. Paul Anderson Tower, Lexington, KY 40506 (United States)], E-mail: balk@engr.uky.edu
2008-05-15
We report a simple two-step dealloying method for producing bulk nanoporous gold with no volume change and no significant cracking. The galvanostatic dealloying method used here appears superior to potentiostatic methods for fabricating millimeter-scale samples. Care must be taken when imaging the nanoscale, interconnected sponge-like structure with a focused ion beam, as even brief exposure caused immediate and extensive cracking of nanoporous gold, as well as ligament coarsening at the surface.
Energy Technology Data Exchange (ETDEWEB)
Śpiewak, Mateusz, E-mail: mspiewak@ikard.pl [Department of Coronary Artery Disease and Structural Heart Diseases, Institute of Cardiology, Warsaw (Poland); Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Małek, Łukasz A., E-mail: lmalek@ikard.pl [Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Department of Interventional Cardiology and Angiology, Institute of Cardiology, Warsaw (Poland); Petryka, Joanna, E-mail: joannapetryka@hotmail.com [Department of Coronary Artery Disease and Structural Heart Diseases, Institute of Cardiology, Warsaw (Poland); Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Mazurkiewicz, Łukasz, E-mail: lmazurkiewicz@ikard.pl [Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Department of Cardiomyopathy, Institute of Cardiology, Warsaw (Poland); Miłosz, Barbara, E-mail: barbara-milosz@o2.pl [Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Department of Radiology, Institute of Cardiology, Warsaw (Poland); Biernacka, Elżbieta K., E-mail: kbiernacka@ikard.pl [Department of Congenital Heart Diseases, Institute of Cardiology, Warsaw (Poland); Kowalski, Mirosław, E-mail: mkowalski@ikard.pl [Department of Congenital Heart Diseases, Institute of Cardiology, Warsaw (Poland); Hoffman, Piotr, E-mail: phoffman@ikard.pl [Department of Congenital Heart Diseases, Institute of Cardiology, Warsaw (Poland); Demkow, Marcin, E-mail: mdemkow@ikard.pl [Department of Coronary Artery Disease and Structural Heart Diseases, Institute of Cardiology, Warsaw (Poland); Miśko, Jolanta, E-mail: jmisko@wp.pl [Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Department of Radiology, Institute of Cardiology, Warsaw (Poland); Rużyłło, Witold, E-mail: wruzyllo@ikard.pl [Institute of Cardiology, Warsaw (Poland)
2012-10-15
Background: Previous studies have advocated quantifying pulmonary regurgitation (PR) by using PR volume (PRV) instead of the commonly used PR fraction (PRF). However, physicians are not familiar with the use of PRV in clinical practice. The ratio of right ventricle (RV) volume to left ventricle volume (RV/LV) may better reflect the impact of PR on the heart than RV end-diastolic volume (RVEDV) alone. We aimed to compare the impact of PRV and PRF on RV size expressed as either the RV/LV ratio or RVEDV (mL/m{sup 2}). Methods: Consecutive patients with repaired tetralogy of Fallot were included (n = 53). PRV, PRF and ventricular volumes were measured with the use of cardiac magnetic resonance. Results: RVEDV was more closely correlated with PRV than with PRF (r = 0.686, p < 0.0001, and r = 0.430, p = 0.0014, respectively). On the other hand, both PRV and PRF showed a good correlation with the RV/LV ratio (r = 0.691, p < 0.0001, and r = 0.685, p < 0.0001, respectively). Receiver operating characteristic analysis showed that both measures of PR had similar ability to predict severe RV dilatation when the RV/LV ratio-based criterion was used, namely the RV/LV ratio > 2.0 [area under the curve (AUC){sub PRV} = 0.770 vs AUC{sub PRF} = 0.777, p = 0.86]. Conversely, with the use of the RVEDV-based criterion (>170 mL/m{sup 2}), PRV proved to be superior to PRF (AUC{sub PRV} = 0.770 vs AUC{sub PRF} = 0.656, p = 0.0028). Conclusions: PRV and PRF have similar significance as measures of PR when the RV/LV ratio is used instead of RVEDV. The RV/LV ratio is a universal marker of RV dilatation independent of the method of PR quantification applied (PRF vs PRV).
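The two statistics used above are a correlation coefficient between a regurgitation measure and RV size, and a ROC AUC for predicting severe dilatation. A minimal sketch of both, on synthetic data (not the study data; the slope and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 53
prv = rng.uniform(10, 120, n)                        # PR volume, mL (synthetic)
rv_lv = 1.2 + 0.012 * prv + rng.normal(0, 0.15, n)   # RV/LV ratio (synthetic)

def pearson_r(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def auc(scores, labels):
    """ROC AUC = P(score of a positive > score of a negative), ties half."""
    pos, neg = scores[labels], scores[~labels]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

severe = rv_lv > 2.0          # the RV/LV > 2.0 dilatation criterion
r = pearson_r(prv, rv_lv)     # correlation of PRV with RV size
a = auc(prv, severe)          # ability of PRV to predict severe dilatation
```

The pairwise formulation of AUC makes explicit what the abstract's AUC values compare: how often the regurgitation measure ranks a dilated ventricle above a non-dilated one.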
Systems and methods for the detection of low-level harmful substances in a large volume of fluid
Carpenter, Michael V.; Roybal, Lyle G.; Lindquist, Alan; Gallardo, Vincente
2016-03-15
A method and device for the detection of low-level harmful substances in a large volume of fluid, comprising the use of a concentrator system to produce a retentate and analysis of the retentate for the presence of at least one harmful substance. The concentrator system performs a method comprising pumping at least 10 liters of fluid from a sample source through a filter. While pumping, the concentrator system diverts retentate from the filter into a container, and recirculates at least part of the retentate in the container through the filter again. The concentrator system controls the speed of the pump with a control system, thereby maintaining a fluid pressure of less than 25 psi during pumping; it also monitors the quantity of retentate within the container with the control system, maintaining the retentate at a reduced, target volume.
International Nuclear Information System (INIS)
Rector, D.R.; Wheeler, C.L.; Lombardo, N.J.
1986-11-01
COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions, including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume describes the finite-volume equations and the method used to solve them. It is directed toward the user who is interested in gaining a more complete understanding of these methods.
Directory of Open Access Journals (Sweden)
Chuang Nie
2015-01-01
Full Text Available Background: In vivo quantification of choroidal neovascularization (CNV) based on noninvasive optical coherence tomography (OCT) examination and in vitro immunohistochemical staining of choroidal flatmounts are both currently used to evaluate the progression and severity of age-related macular degeneration (AMD) in human and animal studies. This study aimed to investigate the correlation between these two methods in murine CNV models induced by subretinal injection. Methods: CNV was developed in 20 C57BL6/j mice by subretinal injection of an adeno-associated viral vector delivering a short hairpin RNA targeting sFLT-1 (AAV.shRNA.sFLT-1), as reported previously. After 4 weeks, CNV was imaged by OCT and fluorescence angiography. The scaling factors for each dimension x, y, and z (μm/pixel) were recorded, and the corneal curvature standard was adjusted from the human value (7.7) to the mouse value (1.4). The volume of each OCT image stack was calculated and then normalized by multiplying the number of voxels by the scaling factors for each dimension in Seg3D software (University of Utah Scientific Computing and Imaging Institute, available at http://www.sci.utah.edu/cibc-software/seg3d.html). Eighteen mice were prepared for choroidal flatmounts stained with CD31, and the CNV volumes were calculated using scanning laser confocal microscopy after immunohistochemical staining. Two mice were stained with Hematoxylin and Eosin to observe the CNV morphology. Results: The CNV volume calculated using OCT was, on average, 2.6 times larger than the volume calculated using laser confocal microscopy. Correlation analysis showed that OCT measurement of CNV correlated significantly with the in vitro method (R² = 0.448, P = 0.001, n = 18). The correlation coefficient for CNV quantification using OCT and confocal microscopy was 0.693 (n = 18, P = 0.001). Conclusions: There is a fair linear correlation between CNV volumes obtained by the in vivo and in vitro methods in CNV models induced by subretinal
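The volume normalization described above amounts to counting the segmented voxels and multiplying by the physical size of one voxel, i.e. the product of the per-dimension scaling factors. A minimal sketch with made-up mask and scaling values:

```python
import numpy as np

# Toy segmentation mask of an OCT stack (z, y, x); the lesion and the
# scaling factors below are placeholders, not values from the study.
seg = np.zeros((64, 64, 32), dtype=bool)
seg[20:30, 25:40, 10:20] = True            # a toy CNV lesion

scale = {"x": 3.2, "y": 3.2, "z": 6.5}     # um/pixel per dimension (assumed)
voxel_volume = scale["x"] * scale["y"] * scale["z"]   # um^3 per voxel
cnv_volume_um3 = seg.sum() * voxel_volume
```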
International Nuclear Information System (INIS)
Ou, Ming-Ching; Chuang, Ming-Tsung; Lin, Xi-Zhang; Tsai, Hong-Ming; Chen, Shu-Yuan; Liu, Yi-Sheng
2013-01-01
Purpose: To evaluate the efficacy of estimating the volume of spleen embolized in partial splenic embolization (PSE) by measuring the diameters of the splenic artery and its branches. Materials and methods: A total of 43 liver cirrhosis patients (mean age, 62.19 ± 9.65 years) with thrombocytopenia were included. Among these, 24 patients underwent a follow-up CT scan, which allowed a correlation between the angiographic estimate and the measured embolized splenic volume. The estimated splenic embolization volume was calculated by a method based on the diameters of the splenic artery and its branches, each measured on 2D angiographic images. Embolization was performed with gelatin sponges. Patients underwent follow-up with serial measurement of blood counts and liver function tests. The actual volume of embolized spleen was determined by computed tomography (CT), measuring the volumes of embolized and non-embolized spleen two months after PSE. Results: PSE was performed without immediate major complications. The mean WBC count significantly increased from 3.81 ± 1.69 × 10³/mm³ before PSE to 8.56 ± 3.14 × 10³/mm³ at 1 week after PSE (P < 0.001). The mean platelet count significantly increased from 62.00 ± 22.62 × 10³/mm³ before PSE to 95.40 ± 46.29 × 10³/mm³ at 1 week after PSE (P < 0.001). The measured embolization ratio was positively correlated with the estimated embolization ratio (Spearman's rho [ρ] = 0.687, P < 0.001). The mean difference between the actual and estimated embolization ratios was 16.16 ± 8.96%. Conclusions: This approach provides a simple method to quantitatively estimate embolized splenic volume, with a correlation of measured to estimated embolization ratio of Spearman's ρ = 0.687.
An innovative method of planning and displaying flap volume in DIEP flap breast reconstructions
Hummelink, S.L.; Verhulst, A.C.; Maal, T.J.J.; Hoogeveen, Y.L.; Schultze Kool, L.J.; Ulrich, D.J.O.
2017-01-01
BACKGROUND: Determining the ideal volume of the harvested flap to achieve symmetry in deep inferior epigastric artery perforator (DIEP) flap breast reconstructions is complex. With preoperative imaging techniques such as 3D stereophotogrammetry and computed tomography angiography (CTA) available
Actuator disk model of wind farms based on the rotor average wind speed
DEFF Research Database (Denmark)
Han, Xing Xing; Xu, Chang; Liu, De You
2016-01-01
Due to difficulty of estimating the reference wind speed for wake modeling in wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies volume correction factor to reduce the influence of the mesh recognition of ...
Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.
2016-02-01
The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.
DEFF Research Database (Denmark)
Troldborg, Niels; Sørensen, Niels N.; Réthoré, Pierre-Elouan
2015-01-01
This paper describes a consistent algorithm for eliminating the numerical wiggles appearing when solving the finite volume discretized Navier-Stokes equations with discrete body forces in a collocated grid arrangement. The proposed method is a modification of the Rhie-Chow algorithm where the for...
International Nuclear Information System (INIS)
Vidovic, D.; Segal, A.; Wesseling, P.
2004-01-01
A method for linear reconstruction of staggered vector fields with special treatment of the divergence is presented. An upwind-biased finite volume scheme for solving the unsteady incompressible Navier-Stokes equations on staggered unstructured triangular grids that uses this reconstruction is described. The scheme is applied to three benchmark problems and is found to be superlinearly convergent in space
Energy Technology Data Exchange (ETDEWEB)
Douglas, D.G.; Wise, R.F.; Starr, J.W.; Maresca, J.W. Jr. [Vista Research, Inc., Mountain View, CA (United States)
1994-11-01
This document, the Leak Testing Plan for the Oak Ridge National Laboratory Liquid Low-Level Waste System (Active Tanks), comprises three volumes. The first two volumes address the component-based leak testing plan for the liquid low-level waste system at Oak Ridge, while the third volume describes the performance evaluation of the leak detection method that will be used to test this system. Volume 1 describes the portion of the liquid low-level waste system that will be tested; it provides the regulatory background, especially in terms of the requirements stipulated in the Federal Facilities Agreement, upon which the leak testing plan is based. Volume 1 also describes the foundation of the plan, portions of which were abstracted from existing federal documents that regulate the petroleum and hazardous chemicals industries. Finally, Volume 1 gives an overview of the plan, describing the methods that will be used to test the four classes of components in the liquid low-level waste system. Volume 2 takes the general information on component classes and leak detection methods presented in Volume 1 and shows how it applies to each of the individual components. A complete test plan for each component is presented, with emphasis placed on the methods designated for testing tanks. The protocol for testing tank systems is described, and general leak testing schedules are presented. Volume 3 describes the results of a performance evaluation completed for the leak testing method that will be used to test the small tanks at the facility (those less than 3,000 gal in capacity). Some of the details described in Volumes 1 and 2 are expected to change as additional information is obtained, as the viability of candidate release detection methods is proven in the Oak Ridge environment, and as the testing program evolves.
International Nuclear Information System (INIS)
Douglas, D.G.; Wise, R.F.; Starr, J.W.; Maresca, J.W. Jr.
1994-11-01
This document, the Leak Testing Plan for the Oak Ridge National Laboratory Liquid Low-Level Waste System (Active Tanks), comprises three volumes. The first two volumes address the component-based leak testing plan for the liquid low-level waste system at Oak Ridge, while the third volume describes the performance evaluation of the leak detection method that will be used to test this system. Volume 1 describes the portion of the liquid low-level waste system that will be tested; it provides the regulatory background, especially in terms of the requirements stipulated in the Federal Facilities Agreement, upon which the leak testing plan is based. Volume 1 also describes the foundation of the plan, portions of which were abstracted from existing federal documents that regulate the petroleum and hazardous chemicals industries. Finally, Volume 1 gives an overview of the plan, describing the methods that will be used to test the four classes of components in the liquid low-level waste system. Volume 2 takes the general information on component classes and leak detection methods presented in Volume 1 and shows how it applies to each of the individual components. A complete test plan for each component is presented, with emphasis placed on the methods designated for testing tanks. The protocol for testing tank systems is described, and general leak testing schedules are presented. Volume 3 describes the results of a performance evaluation completed for the leak testing method that will be used to test the small tanks at the facility (those less than 3,000 gal in capacity). Some of the details described in Volumes 1 and 2 are expected to change as additional information is obtained, as the viability of candidate release detection methods is proven in the Oak Ridge environment, and as the testing program evolves.
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
Song, Dongbeom; Lee, Kijeong; Kim, Eun Hye; Kim, Young Dae; Lee, Hye Sun; Kim, Jinkwon; Song, Tae-Jin; Ahn, Sung Soo; Nam, Hyo Suk; Heo, Ji Hoe
2015-01-01
Background and Purpose We developed a novel method named Gray-matter Volume Estimate Score (GRAVES), measuring early ischemic changes on Computed Tomography (CT) semi-automatically by computer software. This study aimed to compare GRAVES and Alberta Stroke Program Early CT Score (ASPECTS) with regards to outcome prediction and inter-rater agreement. Methods This was a retrospective cohort study. Among consecutive patients with ischemic stroke in the anterior circulation who received intra-art...
International Nuclear Information System (INIS)
Kawano, Takao; Ebihara, Hiroshi
1990-01-01
The disintegration rates of ⁶⁰Co as a point source (<2 mm in diameter on a thin plastic disc) and as volume sources (10-100 mL solutions in a polyethylene bottle) are determined by the sum-peak method. The sum-peak formula gives the exact disintegration rate for the point source at different positions from the detector. However, increasing the volume of the solution results in growing deviations from the true disintegration rate: extended sources must be treated as an amalgam of many point sources. (author)
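For a two-gamma cascade such as ⁶⁰Co, the sum-peak relation is commonly written N₀ = T + A₁A₂/A₁₂, with A₁, A₂ the full-energy peak rates of the two photons, A₁₂ the sum-peak rate, and T the total count rate. A quick numeric check of this identity, in the idealized textbook case where every detected photon deposits its full energy and angular correlation is neglected (a simplification, not the exact expressions of the paper); the efficiencies are assumed values:

```python
# Sum-peak relation for a two-gamma cascade:  N0 = T + A1*A2/A12
N0 = 1.0e5            # true disintegration rate, 1/s (assumed)
e1, e2 = 0.05, 0.04   # photopeak efficiencies of the two gammas (assumed)

A1 = N0 * e1 * (1 - e2)        # gamma 1 detected, gamma 2 missed
A2 = N0 * e2 * (1 - e1)        # gamma 2 detected, gamma 1 missed
A12 = N0 * e1 * e2             # both detected -> sum peak
T = N0 * (e1 + e2 - e1 * e2)   # at least one gamma detected

N0_est = T + A1 * A2 / A12     # recovers N0 independently of e1, e2
```

The point of the method, as the abstract notes, is that N0_est is independent of the (position-dependent) efficiencies for a point source; a volume source mixes many efficiency values, which is what breaks the formula.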
International Nuclear Information System (INIS)
Núñez, Jóse; Ramos, Eduardo; Lopez, Juan M
2012-01-01
We describe a hybrid method based on the combined use of Fourier-Galerkin and finite-volume techniques to solve the fluid dynamics equations in cylindrical geometries. A Fourier expansion is used in the angular direction, partially translating the problem to Fourier space, and the resulting equations are then solved using a finite-volume technique. We also describe an algorithm, required to solve the coupled mass and momentum conservation equations, that is similar to a pressure-correction SIMPLE method adapted to the present formulation. Using the Fourier-Galerkin method in the azimuthal direction has two advantages: first, it provides a high-order approximation of the partial derivatives in the angular direction, and second, it naturally satisfies the azimuthal periodic boundary conditions. In addition, using the finite-volume method in the r and z directions allows one to handle boundary conditions with discontinuities in those directions. It is important to remark that with this method the resulting linear systems of equations are band-diagonal, leading to fast and efficient solvers. The benefits of the mixed method are illustrated with example problems. (paper)
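The decoupling that makes the linear systems band-diagonal can be demonstrated on a scalar model problem: a Poisson equation on an annulus, with an FFT in θ and an independent finite-volume radial solve per Fourier mode. This is a sketch of the idea only (the paper treats the full mass and momentum equations with a SIMPLE-type pressure correction); the manufactured solution and grid sizes are assumptions:

```python
import numpy as np

# Solve (1/r) d/dr(r du/dr) + (1/r^2) d2u/dtheta2 = f on an annulus,
# FFT in theta, finite volumes in r, homogeneous Dirichlet at r_in, r_out.
Nr, Nt = 128, 32
r_in, r_out = 1.0, 2.0
h = (r_out - r_in) / Nr
rc = r_in + (np.arange(Nr) + 0.5) * h        # cell centres
rf = r_in + np.arange(Nr + 1) * h            # cell faces
th = 2 * np.pi * np.arange(Nt) / Nt

# manufactured solution u = sin(pi*(r - r_in)) * cos(theta)  (mode m = 1)
R = np.sin(np.pi * (rc - r_in))
dR = np.pi * np.cos(np.pi * (rc - r_in))
d2R = -np.pi ** 2 * np.sin(np.pi * (rc - r_in))
fr = d2R + dR / rc - R / rc ** 2             # radial part of f for m = 1
f = fr[:, None] * np.cos(th)[None, :]

fhat = np.fft.fft(f, axis=1)                 # theta modes decouple here
uhat = np.zeros_like(fhat)
for k in range(Nt):
    m = k if k <= Nt // 2 else k - Nt        # signed azimuthal wavenumber
    A = np.zeros((Nr, Nr))                   # tridiagonal radial operator
    for i in range(Nr):
        if i < Nr - 1:                       # east face flux r*du/dr
            A[i, i + 1] += rf[i + 1] / (rc[i] * h * h)
            A[i, i] -= rf[i + 1] / (rc[i] * h * h)
        else:                                # Dirichlet u = 0 at r_out
            A[i, i] -= 2 * rf[i + 1] / (rc[i] * h * h)
        if i > 0:                            # west face flux
            A[i, i - 1] += rf[i] / (rc[i] * h * h)
            A[i, i] -= rf[i] / (rc[i] * h * h)
        else:                                # Dirichlet u = 0 at r_in
            A[i, i] -= 2 * rf[i] / (rc[i] * h * h)
        A[i, i] -= m ** 2 / rc[i] ** 2       # azimuthal term
    uhat[:, k] = np.linalg.solve(A, fhat[:, k])

u = np.fft.ifft(uhat, axis=1).real
err = np.abs(u - R[:, None] * np.cos(th)[None, :]).max()
```

Each mode yields its own small tridiagonal (band-diagonal) system, which is the efficiency remarked upon in the abstract; a dense solve is used above only for brevity.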
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, such as those caused by faults like gear eccentricity. Moreover, TDA always suffers from period-cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA for signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed with it. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively; moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of gear eccentricity, and it further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE but also provides a useful tool for fault symptom extraction in rotating machinery.
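Conventional TDA, the baseline the FTDA improves upon, can be sketched in a few lines: slice the signal into segments of one (exactly known) period and average them, which attenuates asynchronous noise by roughly the square root of the number of averages. The signal and noise levels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 256                    # samples per period (known exactly here, so no PCE)
M = 100                    # number of periods averaged
t = np.arange(P) / P
clean = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)

noisy = np.tile(clean, M) + rng.normal(0, 1.0, P * M)
tda = noisy.reshape(M, P).mean(axis=0)     # the time domain average

residual = tda - clean                     # leftover noise after averaging
```

With an exactly integer period there is no period-cutting error; the FTDA's contribution is precisely to handle the general, non-integer case through per-harmonic adjustment and the CZT, which this sketch omits.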
International Nuclear Information System (INIS)
Copeland, G.L.; Heestand, R.L.; Mateer, R.S.
1978-06-01
A review of the literature and prior experience led to selection of induction melting as the most promising method for volume reduction of low-level transuranic contaminated metal waste. The literature indicates that melting with the appropriate slags significantly lowers the total contamination level of the metals by preferentially concentrating contaminants in the smaller volume of slag. Surface contamination not removed to the slag is diluted in the ingot and is contained uniformly in the metal. This dilution and decontamination offers the potential of lower cost disposal such as shallow burial rather than placement in a national repository. A processing plan is proposed as a model for economic analysis of the collection and volume reduction of contaminated metals. Further development is required to demonstrate feasibility of the plan
Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.
2018-02-16
In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially—using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to have to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
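The weight-reduction rule at a sink can be sketched as follows (illustrative numbers only, not MODFLOW-GWT internals): reducing every particle's weight by the fraction of cell volume removed extracts mass at the cell's volume-averaged concentration and conserves mass exactly, with no particle-removal decision required.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
weights = rng.uniform(0.5, 2.0, n)     # particle volumes in one cell
conc = rng.uniform(0.0, 10.0, n)       # concentration carried by each particle

cell_volume = weights.sum()
mass_before = (weights * conc).sum()

q_out = 0.25 * cell_volume             # water removed by the sink this step
factor = 1.0 - q_out / cell_volume     # uniform proportional weight reduction
new_weights = weights * factor

# mass leaves through the sink at the volume-averaged concentration
c_avg = mass_before / cell_volume
mass_removed = q_out * c_avg
mass_after = (new_weights * conc).sum()
```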
Multiple-level defect species evaluation from average carrier decay
Debuf, Didier
2003-10-01
An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple-defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average, or volume-integrated, decay. Implicit in the above is the requirement for good surface passivation, such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of assessing the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground-excited-state multiple-level defect system. Minority carrier trapping is also investigated.
Cobalt as a gastric juice volume marker: Comparison of two methods of estimation
International Nuclear Information System (INIS)
Gana, T.J.; MacPherson, B.R.; Ng, D.; Koo, J.
1990-01-01
We investigated the use of cobalt-EDTA, a novel, nonabsorbable liquid-phase marker, for the estimation of secretory volumes during topical misoprostol (a synthetic PGE₁ analog) administration in the canine chambered gastric segment, and compared atomic absorption spectrophotometry (AAS) with instrumental neutron activation analysis (INAA) for the estimation of [Co]. Mucosal bathing solutions containing cobalt-EDTA were instilled into and recovered from the chamber by gravity every 15-min period as follows: (i) basal, 60 min; (ii) misoprostol periods, 150 min (0.1-, 1-, 10-, 100-, and 1000-μg doses of misoprostol for two periods per dose). The recovered solutions were analyzed for [Co] by AAS and INAA. Total cobalt recovery by AAS after chamber washout was 102.97 ± 0.98%. The mean ± SE volumes (12.14 ± 0.33 and 13.24 ± 0.60 mL/15 min) obtained from AAS and INAA, respectively, were significantly higher (P < 0.001) than the recovered mean volumes (10.51 ± 0.17 mL/15 min). The percentage error in volume collection increased (range: 9.3-52.7%) with the volume of secretion. Values of [Co] obtained by the two techniques were comparable and not significantly different from each other (P > 0.05). INAA-estimated mean ± SE [Co] showed consistently higher coefficients of variation, and the spectra obtained for all samples during INAA measurements showed significant Compton background activity from ²⁴Na and ³⁸Cl. Cobalt-EDTA did not grossly or histologically damage the gastric mucosa. We conclude that cobalt is not adsorbed, absorbed, or metabolized, and is a suitable and reliable volume marker in this model.
Aikio, Sanna; Hiltunen, Jussi; Hiitola-Keinänen, Johanna; Hiltunen, Marianne; Kontturi, Ville; Siitonen, Samuli; Puustinen, Jarkko; Karioja, Pentti
2016-02-08
Flexible photonic integrated circuit technology is an emerging field expanding the usage possibilities of photonics, particularly in sensor applications, by enabling the realization of conformable devices and introduction of new alternative production methods. Here, we demonstrate that disposable polymeric photonic integrated circuit devices can be produced in lengths of hundreds of meters by ultra-high volume roll-to-roll methods on a flexible carrier. Attenuation properties of hundreds of individual devices were measured confirming that waveguides with good and repeatable performance were fabricated. We also demonstrate the applicability of the devices for the evanescent wave sensing of ambient refractive index. The production of integrated photonic devices using ultra-high volume fabrication, in a similar manner as paper is produced, may inherently expand methods of manufacturing low-cost disposable photonic integrated circuits for a wide range of sensor applications.
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
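The identity stated in RESULTS can be checked numerically. A minimal sketch (variable names and data are ours, not the paper's): with weights w and v, the w-average of x minus the v-average of x equals the v-weighted covariance of x with the ratio r = w/v, divided by the v-average of r.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)             # variable of interest (e.g., a rate)
v = rng.uniform(1.0, 2.0, size=1000)  # baseline weights
w = rng.uniform(1.0, 2.0, size=1000)  # alternative weights

def wavg(a, weights):
    """Weighted average of a."""
    return np.sum(weights * a) / np.sum(weights)

r = w / v                                         # ratio of the weighting functions
lhs = wavg(x, w) - wavg(x, v)                     # difference of the two averages
cov_v = wavg(x * r, v) - wavg(x, v) * wavg(r, v)  # v-weighted covariance of x and r
rhs = cov_v / wavg(r, v)
assert np.isclose(lhs, rhs)  # the identity holds to rounding error
```

The identity is exact, not approximate: expanding wavg(x, w) as the v-weighted average of r*x over the v-weighted average of r reproduces it term by term.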
International Nuclear Information System (INIS)
Shibata, Takahiro; Honda, Youichi; Kashiwagi, Hidehiko
1996-01-01
The acoustic quantification method (AQ: an on-line automated boundary detection system) has proved to have a good correlation with left ventriculography (LVG) and scintigraphy (SG) in patients with normal left ventricular (LV) function. The aim of this study was to determine whether AQ is also useful in patients with abnormal LV function. We examined 54 patients with LV asynergy. End-diastolic volumes with AQ, LVG and SG were 77, 135 and 118 ml, respectively. A good correlation was found between AQ and both LVG and SG (LVG: r=0.81; SG: r=0.68). End-systolic volumes with AQ, LVG and SG were 38, 64 and 57 ml. Left ventricular volumes obtained from AQ had a good correlation with LVG and SG, but were underestimated. LV ejection fractions obtained from AQ had a good correlation with those from LVG and SG (LVG: r=0.84; SG: r=0.77). On-line AQ appears to be a useful noninvasive method for evaluation of the left ventricular ejection fraction, but care must be exercised when estimations of left ventricular volumes are made. (author)
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
International Nuclear Information System (INIS)
Coelho, Pedro J.
2014-01-01
Many methods are available for the solution of radiative heat transfer problems in participating media. Among these, the discrete ordinates method (DOM) and the finite volume method (FVM) are among the most widely used ones. They provide a good compromise between accuracy and computational requirements, and they are relatively easy to integrate in CFD codes. This paper surveys recent advances on these numerical methods. Developments concerning the grid structure (e.g., new formulations for axisymmetrical geometries, body-fitted structured and unstructured meshes, embedded boundaries, multi-block grids, local grid refinement), the spatial discretization scheme, and the angular discretization scheme are described. Progress related to the solution accuracy, solution algorithm, alternative formulations, such as the modified DOM and FVM, even-parity formulation, discrete-ordinates interpolation method and method of lines, and parallelization strategies is addressed. The application to non-gray media, variable refractive index media, and transient problems is also reviewed. - Highlights: • We survey recent advances in the discrete ordinates and finite volume methods. • Developments in spatial and angular discretization schemes are described. • Progress in solution algorithms and parallelization methods is reviewed. • Advances in the transient solution of the radiative transfer equation are appraised. • Non-gray media and variable refractive index media are briefly addressed
Energy Technology Data Exchange (ETDEWEB)
Lee, Soo Yong; Lim, Sang Wook; Ma, Sun Young; Yu, Je Sang [Dept. of Radiation Oncology, Kosin University Gospel Hospital, Kosin University College of Medicine, Busan (Korea, Republic of)
2017-09-15
To assess how the gross tumor volume (GTV) depends on phase selection and reconstruction method, we measured and analyzed the changes of tumor volume and motion at each phase in 20 cases of lung cancer patients who underwent image-guided radiotherapy. We retrospectively analyzed four-dimensional computed tomography (4D-CT) images in 20 cases of 19 patients who underwent image-guided radiotherapy. The 4D-CT images were reconstructed by the maximum intensity projection (MIP) and the minimum intensity projection (Min-IP) method after sorting phases as 40%–60%, 30%–70%, and 0%–90%. We analyzed the relationship between the range of motion and the change of GTV according to the reconstruction method. The motion ranges of GTVs were statistically significant only for the tumor motion in the craniocaudal direction. The discrepancies of GTV volume and motion between MIP and Min-IP increased rapidly as wider duty cycles were selected. A duty cycle as narrow as possible, such as 40%–60%, combined with MIP reconstruction was suitable for lung cancer if the respiration was stable. Selecting the reconstruction method and duty cycle is especially important for small tumors and for tumors with a large motion range.
Kou, Jisheng
2017-06-09
In this paper, a new three-field weak formulation for Stokes problems is developed, and from this, a dual-mixed finite element method is proposed on a rectangular mesh. In the proposed mixed methods, the components of the stress tensor are approximated by piecewise constant functions or Q1 functions, while the velocity and pressure are discretized by the lowest-order Raviart-Thomas element and piecewise constant functions, respectively. Using quadrature rules, we demonstrate that this scheme can be reduced to a finite volume method on a staggered grid, which is extensively used in computational fluid mechanics and engineering.
The flattening of the average potential in models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.
1993-01-01
The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)
Directory of Open Access Journals (Sweden)
Nelson H. T. Lemes
2010-01-01
Full Text Available Analytical solutions of a cubic equation with real coefficients are established using the Cardano method. The method is first applied to a simple third-order equation. The calculation of volume in the van der Waals equation of state is then established. These results are used to calculate volumes below and above the critical temperature. Analytical and numerical values for the compressibility factor are presented as a function of the pressure. As a final example, coexistence volumes in the liquid-vapor equilibrium are calculated. The Cardano approach is very simple to apply, requiring only elementary operations, making it an attractive method for teaching elementary thermodynamics.
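As a worked illustration of the approach described above, the sketch below solves the van der Waals cubic for the molar volume using Cardano's method (trigonometric form when all three roots are real). The CO2 constants and the chosen state point are our illustrative assumptions, not values taken from the paper.

```python
import math

def solve_cubic(a, b, c, d):
    """Real roots of a*x^3 + b*x^2 + c*x + d = 0 by Cardano's method,
    using the trigonometric form when all three roots are real."""
    p = (3 * a * c - b ** 2) / (3 * a ** 2)
    q = (2 * b ** 3 - 9 * a * b * c + 27 * a ** 2 * d) / (27 * a ** 3)
    shift = -b / (3 * a)
    disc = (q / 2) ** 2 + (p / 3) ** 3
    if disc > 0:  # one real root: Cardano's formula
        s = math.sqrt(disc)
        u = math.copysign(abs(-q / 2 + s) ** (1 / 3), -q / 2 + s)
        v = math.copysign(abs(-q / 2 - s) ** (1 / 3), -q / 2 - s)
        return [u + v + shift]
    if p == 0:  # degenerate case: triple root
        return [shift] * 3
    r = math.sqrt(-p / 3)  # three real roots (casus irreducibilis)
    phi = math.acos(max(-1.0, min(1.0, 3 * q / (2 * p * r))))
    return sorted(2 * r * math.cos((phi - 2 * math.pi * k) / 3) + shift
                  for k in range(3))

# van der Waals equation (P + a/Vm^2)(Vm - b) = R*T rearranged to a cubic:
#   P*Vm^3 - (P*b + R*T)*Vm^2 + a*Vm - a*b = 0
R = 8.314         # J/(mol K)
a_vdw = 0.3640    # Pa m^6/mol^2, textbook value for CO2 (our example choice)
b_vdw = 4.267e-5  # m^3/mol
P, T = 2.0e6, 250.0  # 2 MPa, below the CO2 critical temperature (~304 K)
vols = solve_cubic(P, -(P * b_vdw + R * T), a_vdw, -a_vdw * b_vdw)
```

Below the critical temperature the cubic returns three real roots: the smallest and largest are the liquid and vapor molar volumes used in the coexistence calculation, and the middle root lies on the unphysical branch of the isotherm.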
Hu, Ting-Ting; Yan, Ling; Yan, Peng-Fei; Wang, Xuan; Yue, Ge-Fen
2016-01-01
Epidural hematoma volume (EDHV) is an independent predictor of prognosis in patients with epidural hematoma (EDH) and plays a central role in treatment decision making. This study's objective was to determine the accuracy and reliability of the widely used volume measurement method ABC/2 in estimating EDHV by comparing it to the computer-assisted planimetric method. A data set of computerized tomography (CT) scans of 35 patients with EDH was evaluated to determine the accuracy of the ABC/2 method, using the computer-assisted planimetric technique to establish the reference criterion of EDHV for each patient. Another data set was constructed by randomly selecting 5 patients and then replicating each case twice to yield 15 patients. Intra- and interobserver reliability were evaluated by asking four observers to independently estimate EDHV for the latter data set using the ABC/2 method. Estimation of EDHV using the ABC/2 method showed high intra- and interobserver reliability (intra-class correlation coefficient = .99). These estimates were closely correlated with planimetric measures (r = .99). But the ABC/2 method generally overestimated EDHV, especially in the nonellipsoid-like group; the difference between the ABC/2 measures and planimetric measures was statistically significant. The ABC/2 method could therefore be used for EDHV measurement, which would contribute to treatment decision making as well as clinical outcome prediction. However, clinicians should be aware that the ABC/2 method results in a general volume overestimation. Future studies focusing on justification of the technique to improve its accuracy would be of practical value. © The Author(s) 2015.
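The ABC/2 estimate compared above is a one-line ellipsoid approximation; a minimal sketch, with a slice-summation planimetric reference for comparison (our formulation, not the study's software):

```python
def abc2_volume(a_cm, b_cm, c_cm):
    """ABC/2 estimate of hematoma volume (cm^3): half the product of the
    three largest perpendicular diameters. It approximates an ellipsoid,
    since (4/3)*pi*(A/2)*(B/2)*(C/2) = (pi/6)*A*B*C, and pi/6 ~ 1/2;
    for non-ellipsoid collections it tends to overestimate."""
    return a_cm * b_cm * c_cm / 2.0

def planimetric_volume(slice_areas_cm2, slice_thickness_cm):
    """Computer-assisted planimetric reference: the traced hematoma area
    on each CT slice times the slice thickness, summed over slices."""
    return sum(slice_areas_cm2) * slice_thickness_cm
```

For a perfectly ellipsoidal hematoma the two agree to within the pi/6 vs 1/2 factor (about 5%); the paper's observed overestimation in the nonellipsoid-like group is exactly the failure of that shape assumption.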
RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1
International Nuclear Information System (INIS)
1995-08-01
The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.
ATHENA code manual. Volume 1. Code structure, system models, and solution methods
International Nuclear Information System (INIS)
Carlson, K.E.; Roth, P.A.; Ransom, V.H.
1986-09-01
The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation
International Nuclear Information System (INIS)
Baruchel, J.; Hodeau, J.L.; Lehmann, M.S.; Regnard, J.R.; Schlenker, C.
1993-01-01
This book provides the basic information required by a research scientist wishing to undertake studies using neutrons or synchrotron radiation at a Large Facility. These lecture notes result from 'HERCULES', a course that has been held in Grenoble since 1991 to train young scientists in these fields. They cover the production of neutrons and synchrotron radiation and describe all aspects of instrumentation. In addition, this work outlines the basics of the various fields of research pursued at these Large Facilities. It consists of a series of chapters written by experts in the particular fields. While following a progression and constituting a lecture course on neutron and x-ray scattering, these chapters can also be read independently. This first volume will be followed by two further volumes concerned with the applications to solid state physics and chemistry, and to biology and soft condensed matter properties
Energy Technology Data Exchange (ETDEWEB)
Ou, Ming-Ching; Chuang, Ming-Tsung [Department of Diagnostic Radiology, National Cheng-Kung University Hospital, No. 138 Sheng Li Road, Tainan 704, Taiwan, ROC (China); Lin, Xi-Zhang [Department of Internal Medicine, National Cheng-Kung University Hospital, No. 138 Sheng Li Road, Tainan 704, Taiwan, ROC (China); Tsai, Hong-Ming; Chen, Shu-Yuan [Department of Diagnostic Radiology, National Cheng-Kung University Hospital, No. 138 Sheng Li Road, Tainan 704, Taiwan, ROC (China); Liu, Yi-Sheng, E-mail: taicheng100704@yahoo.com.tw [Department of Diagnostic Radiology, National Cheng-Kung University Hospital, No. 138 Sheng Li Road, Tainan 704, Taiwan, ROC (China)
2013-08-15
Purpose: To evaluate the efficacy of estimating the volume of spleen embolized in partial splenic embolization (PSE) by measuring the diameters of the splenic artery and its branches. Materials and methods: A total of 43 liver cirrhosis patients (mean age, 62.19 ± 9.65 years) with thrombocytopenia were included. Among these, 24 patients underwent a follow-up CT scan which showed a correlation between angiographic estimation and measured embolized splenic volume. Estimated splenic embolization volume was calculated by a method based on diameters of the splenic artery and its branches. The diameters of each of the splenic arteries and branches were measured via 2D angiographic images. Embolization was performed with gelatin sponges. Patients underwent follow-up with serial measurement of blood counts and liver function tests. The actual volume of embolized spleen was determined by computed tomography (CT) measuring the volumes of embolized and non-embolized spleen two months after PSE. Results: PSE was performed without immediate major complications. The mean WBC count significantly increased from 3.81 ± 1.69 × 10{sup 3}/mm{sup 3} before PSE to 8.56 ± 3.14 × 10{sup 3}/mm{sup 3} at 1 week after PSE (P < 0.001). Mean platelet count significantly increased from 62.00 ± 22.62 × 10{sup 3}/mm{sup 3} before PSE to 95.40 ± 46.29 × 10{sup 3}/mm{sup 3} 1 week after PSE (P < 0.001). The measured embolization ratio was positively correlated with estimated embolization ratio (Spearman's rho [ρ] = 0.687, P < 0.001). The mean difference between the actual embolization ratio and the estimated embolization ratio was 16.16 ± 8.96%. Conclusions: This approach provides a simple way to quantitatively estimate the embolized splenic volume, with a correlation between the measured and estimated embolization ratios of Spearman's ρ = 0.687.
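The abstract does not spell out the estimation formula, so the sketch below is only a plausible reconstruction: it assumes each branch's perfused territory scales with the cube of its diameter (Murray's law). The cube-law weighting is our assumption, not necessarily the authors' method.

```python
def estimated_embolization_ratio(embolized_diameters_mm, all_diameters_mm,
                                 exponent=3):
    """Estimated fraction of spleen embolized, from branch diameters
    measured on 2D angiographic images. The exponent=3 weighting
    (flow/territory proportional to diameter cubed, per Murray's law)
    is an assumption for illustration; the paper's exact weighting
    is not given in the abstract."""
    total = sum(d ** exponent for d in all_diameters_mm)
    embolized = sum(d ** exponent for d in embolized_diameters_mm)
    return embolized / total
```

On this assumption, occluding one of two equally sized branches predicts a 50% embolized volume, while occluding a 1 mm branch out of a 1 mm plus 2 mm pair predicts only 1/9 of the spleen.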
International Nuclear Information System (INIS)
Suzuki, Tatsuya
2015-01-01
We have proposed a reprocessing system with nuclide separation processes based on the chromatographic technique in the hydrochloric acid solution system. Our proposed system consists of the dissolution process, the reprocessing process, the MA separation process, and nuclide separation processes. In our proposed processes, pyridine resin is used as the main separation medium. We expect that our proposal will contribute to the volume reduction of high-level radioactive waste by combining it with transmutation techniques, the use of valuable elements, and so on. (author)
Development of volume reduction method of cesium contaminated soil with magnetic separation
International Nuclear Information System (INIS)
Yukumatsu, Kazuki; Nomura, Naoki; Mishima, Fumihito; Akiyama, Yoko; Nishijima, Shigehiro
2016-01-01
In this study, we developed a new volume reduction technique for cesium contaminated soil by magnetic separation. Cs in soil is mainly adsorbed on clay, the smallest particle constituent in the soil, and especially on paramagnetic 2:1 type clay minerals, which strongly adsorb and fix Cs. Thus selective separation of the 2:1 type clay with a superconducting magnet could enable volume reduction of Cs contaminated soil. The 2:1 type clay particles exist in various particle sizes in the soil, so that the magnetic force and the Cs adsorption quantity depend on particle size. Accordingly, we examined magnetic separation conditions for efficient separation of 2:1 type clay considering its particle size distribution. First, the separation rate of 2:1 type clay for each particle size was calculated by particle trajectory simulation, because the magnetic separation rate largely depends on the size of the target particles. According to the calculation, 73 and 89 % of the 2:1 type clay could be separated at 2 and 7 T, respectively. Moreover, we calculated the dose reduction rate on the basis of the particle trajectory simulation. It was indicated that 17 and 51 % dose reduction would be possible at 2 and 7 T, respectively. The difference in dose reduction rate between 2 T and 7 T was attributed to the separation of fine particles. It was shown that magnetic separation considering the particle size distribution would contribute to the volume reduction of contaminated soil
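The physics behind the particle trajectory simulation can be sketched with a simple force balance. The parameter values in the example are illustrative assumptions (the clay susceptibility and field gradient in particular), not numbers from the paper:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def drift_velocity(radius_m, chi, B, grad_B, eta=1.0e-3):
    """Terminal velocity of a paramagnetic particle in water, where the
    magnetic force F = (chi * V / MU0) * B * dB/dx balances Stokes drag
    F = 6 * pi * eta * r * v. Volume grows as r^3 while drag grows only
    as r, so the velocity scales as r^2: fine clay particles are the
    hardest to capture, which is why the 7 T field outperforms 2 T."""
    volume = 4.0 / 3.0 * math.pi * radius_m ** 3
    f_mag = (chi * volume / MU0) * B * grad_B
    return f_mag / (6.0 * math.pi * eta * radius_m)

# Illustrative call: a 1-micron particle with assumed susceptibility
# chi = 1e-4 (SI) in a 7 T field with a 100 T/m gradient.
v = drift_velocity(1e-6, 1e-4, 7.0, 100.0)
```

The r-squared scaling means halving the particle radius cuts the capture velocity by a factor of four, consistent with the paper's finding that the 2 T vs 7 T difference comes from the fine fraction.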
The method of estimating the irradiated lung volume in primary breast irradiation
International Nuclear Information System (INIS)
Leite, Miguel Torres Teixeira; Marques, Iara Silva; Geraldo, Jony Marques
1999-01-01
Tangential breast field irradiation usually includes some volume of lung and is occasionally associated with pneumonitis. The amount of lung irradiated can be estimated by measuring the central lung distance (CLD) on port films, and it should not exceed 2.5 cm. The purpose of this study was to determine, through a linear regression analysis, the relationship between CLD and the geometrical parameters of the treatment, and to develop an equation to predict this volume. The studied population consisted of 100 patients who received definitive radiation for clinical stage I and II breast cancer between January 1996 and June 1997. The angle of the tangential fields was determined according to the contour of the breast and thorax. In 71% of the patients the CLD measured on the portal films was greater than 2.5 cm, requiring a new beam arrangement. We developed a simple and convenient quantitative model to predict the irradiated lung volume based on portal films. We need further analysis in order to include more variables and anatomical variations. (author)
Dodd, Michael; Ferrante, Antonino
2017-11-01
Our objective is to perform DNS of finite-size droplets that are evaporating in isotropic turbulence. This requires fully resolving the process of momentum, heat, and mass transfer between the droplets and the surrounding gas. We developed a combined volume-of-fluid (VOF) method and low-Mach-number approach to simulate this flow. The two main novelties of the method are: (i) the VOF algorithm captures the motion of the liquid-gas interface in the presence of mass transfer due to evaporation and condensation without requiring a projection step for the liquid velocity, and (ii) the low-Mach-number approach allows for local volume changes caused by phase change while the total volume of the liquid-gas system is constant. The method is verified against an analytical solution for a Stefan flow problem, and the D2 law is verified for a single droplet in quiescent gas. We also demonstrate the scheme's robustness when performing DNS of an evaporating droplet in forced isotropic turbulence.
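The D2 law used as a verification target above has a closed form, which makes it a handy sanity check; a minimal sketch (symbols are ours):

```python
import math

def droplet_diameter(d0, K, t):
    """D2 law for a droplet evaporating in quiescent gas: the squared
    diameter shrinks linearly in time, D(t)^2 = D0^2 - K*t, where K is
    the evaporation-rate constant (units of area per time)."""
    return math.sqrt(max(d0 ** 2 - K * t, 0.0))

def droplet_lifetime(d0, K):
    """Time for complete evaporation: t_life = D0^2 / K."""
    return d0 ** 2 / K
```

A simulated droplet in quiescent gas is verified against this law by checking that the computed D(t)^2 falls on a straight line of slope -K.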
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures. However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%; in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38 of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA 1, our approach produces representative structure of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction 2, which
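The driving idea can be sketched in a few lines. This is our simplification: a Metropolis Monte Carlo walk under the harmonic pull toward the averaged coordinates only, without the local-geometry (bond length and angle) terms that, in the actual method, prevent the averaging artifacts:

```python
import numpy as np

def refine_toward_average(start, target, k=1.0, step=0.05,
                          n_steps=20000, beta=50.0, seed=0):
    """Metropolis Monte Carlo sketch of the refinement: perturb one atom
    at a time and accept moves under the harmonic pseudo-energy
    E = k * sum_i |x_i - avg_i|^2 that drives the structure toward the
    averaged coordinates `target`. A real refinement adds geometry
    terms so the result stays physically plausible."""
    rng = np.random.default_rng(seed)
    x = start.astype(float).copy()
    energy = k * np.sum((x - target) ** 2)
    for _ in range(n_steps):
        i = rng.integers(len(x))                      # pick one atom
        trial = x[i] + rng.normal(scale=step, size=3)  # small random move
        d_e = k * (np.sum((trial - target[i]) ** 2)
                   - np.sum((x[i] - target[i]) ** 2))
        # Metropolis criterion: always accept downhill, sometimes uphill
        if d_e < 0 or rng.random() < np.exp(-beta * d_e):
            x[i] = trial
            energy += d_e
    return x, energy
```

With enough steps the structure settles near the averaged coordinates, fluctuating at a scale set by the inverse temperature beta.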
Buckey, J. C.; Beattie, J. M.; Gaffney, F. A.; Nixon, J. V.; Blomqvist, C. G.
1984-01-01
Accurate, reproducible, and non-invasive means for ventricular volume determination are needed for evaluating cardiovascular function in zero gravity. Current echocardiographic methods, particularly for the right ventricle, suffer from a large standard error. A new mathematical approach, recently described by Watanabe et al., was tested on 1 normal formalin-fixed human heart suspended in a mineral oil bath. Volumes are estimated from multiple two-dimensional echocardiographic views recorded from a single point at sequential angles. The product of sectional cavity area and center of mass for each view, summed over the range of angles (using a trapezoidal rule), gives volume. Multiple (8-14) short-axis right ventricle and left ventricle views at 5.0 deg intervals were videotaped. The images were digitized by two independent observers (leading-edge to leading-edge technique) and analyzed using a graphics tablet and microcomputer. Actual volumes were determined by filling the chambers with water. These data were compared to the mean of the two echo measurements.
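The summation described above can be written compactly. The sketch below is our reading of that rule (views taken as half-plane sections rotated about an axis through the transducer point); the sanity check uses a sphere centered on the rotation axis, where every section is a half-disc and the answer is known exactly:

```python
import math

def volume_from_views(areas, centroid_dists, dtheta):
    """Chamber volume from 2D views at sequential rotation angles about
    a fixed axis: dV = A(theta) * xbar(theta) * dtheta, where A is the
    sectional cavity area and xbar the distance of its center of mass
    from the axis; the angular integral uses the trapezoidal rule."""
    f = [a * x for a, x in zip(areas, centroid_dists)]
    return dtheta * (sum(f) - 0.5 * (f[0] + f[-1]))

# Sanity check on a sphere of radius R centered on the rotation axis:
# each half-plane section is a half-disc with A = pi*R^2/2 and centroid
# distance xbar = 4R/(3*pi); the result must be (4/3)*pi*R^3.
R, n = 1.0, 13
dtheta = 2.0 * math.pi / (n - 1)   # n views spanning a full revolution
areas = [math.pi * R ** 2 / 2.0] * n
xbars = [4.0 * R / (3.0 * math.pi)] * n
vol = volume_from_views(areas, xbars, dtheta)
```

For this constant integrand the trapezoidal rule is exact, so the recovered volume matches 4/3 pi R^3 to rounding error; for real chambers the accuracy depends on the angular sampling (here, the 5.0 deg intervals).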
Tone, Kiyoshi; Kojima, Keiko; Hoshiai, Keita; Kumagai, Naoya; Kijima, Hiroshi; Kurose, Akira
2016-06-01
The essential role of urine cytology in the diagnosis and follow-up of urothelial neoplasia has been widely recognized. However, there are some cases in which a definitive diagnosis cannot be made due to difficulty in discriminating between benign and malignant. This study evaluated the practicality of the nucleolar/nuclear volume ratio (%) for this discrimination. Using Papanicolaou-stained slides, 253 benign urothelial cells and 282 malignant urothelial cells were selected and divided into a benign urothelial cell group and an urothelial carcinoma (UC) cell group. Three suspicious cases and four cases in which discrimination between benign and malignant was difficult were prepared for a verification test. Subject cells were decolorized and stained with 4',6-diamidino-2-phenylindole for detection of the nuclei and the nucleoli. The Z-stack method was performed for the analysis. When the cutoff point of 1.514% for discriminating benign urothelial cells from UC cells by nucleolar/nuclear volume ratio (%) was utilized, the sensitivity was 56.0%, the specificity was 88.5%, the positive predictive value was 84.5%, and the negative predictive value was 64.4%. Nuclear and nucleolar volume, number of the nucleoli, and nucleolar/nuclear volume ratio (%) were significantly higher in the UC cell group than in the benign urothelial cell group. The nucleolar/nuclear volume ratio (%) may thus help discriminate between benign and malignant urothelial cells, providing possible additional information in urine cytology. Diagn. Cytopathol. 2016;44:483-491. © 2016 Wiley Periodicals, Inc.
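The reported sensitivity and specificity at the 1.514% cutoff follow from a simple threshold classification; a minimal sketch (the data passed in below would be the measured per-cell ratios, not values from the study):

```python
def diagnostic_stats(benign_ratios, malignant_ratios, cutoff=1.514):
    """Classify a cell as malignant when its nucleolar/nuclear volume
    ratio (%) exceeds the cutoff, then report (sensitivity, specificity,
    PPV, NPV) of that rule against the known labels."""
    tp = sum(r > cutoff for r in malignant_ratios)   # true positives
    fn = len(malignant_ratios) - tp                  # false negatives
    tn = sum(r <= cutoff for r in benign_ratios)     # true negatives
    fp = len(benign_ratios) - tn                     # false positives
    return (tp / (tp + fn), tn / (tn + fp),
            tp / (tp + fp), tn / (tn + fn))
```

Sensitivity and specificity depend only on the cutoff and the two distributions; the predictive values additionally depend on how many benign and malignant cells are in the sample.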
International Nuclear Information System (INIS)
Ma, Xiao; Wang, Moo-Chin; Feng, Jinyang; Zhao, Xiujian
2015-01-01
The effect of solution volume covariation on the growth mechanism of Au nanorods synthesized using a seed-mediated method was studied. The results from the ultraviolet–visible absorption spectra of gold nanorods (GNRs) revealed that the transverse surface plasmon resonance was ∼550 nm for all GNR samples synthesized in various total volumes of growth solutions. The wavelength of longitudinal surface plasmon resonance of GNRs increased from 757 to 915 nm, with the total volume of growth solution being raised from 10 to 320 ml. Moreover, the calculated aspect ratio (AR) also increased from 3.55 to 5.21 while the total volume of growth solution increased from 10 to 320 ml. Transmission electron microscopy microstructures showed that the growth mechanism of GNRs along 〈1 0 0〉 is in accordance with the hypothesis that the ratio of the number of monodispersed Au atoms existing in the growth solution to the number of seeds explains the behavior of Au atoms deposited on the nanorods and, at fixed constituent concentrations in the growth solution, the effect of the total volume on the AR of GNRs
International Nuclear Information System (INIS)
Lan Yihua; Li Cunhua; Ren Haozheng; Zhang Yong; Min Zhifang
2012-01-01
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose–volume constraints which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linear constrained quadratic optimization model without considering any dose–volume constraints, and then the dose constraints for the voxels violating the dose–volume constraints are gradually added into the quadratic optimization model step by step until all the dose–volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linear constrained quadratic programming. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model was used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused inevitably by the constraint adding. It can be regarded as an upgrade of the traditional dose sorting technique. The geometry explanation for the proposed method is also given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure a stable iteration convergence. The new algorithm is tested on four cases, including a head–neck, a prostate, a lung and an oropharyngeal case, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions are in non-convex shapes. To some extent, it is a more efficient optimization technique for choosing constraints than the traditional dose sorting technique.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linear constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linear constrained quadratic programming. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model was used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused inevitably by the constraint adding. It can be regarded as an upgrade of the traditional dose sorting technique. The geometry explanation for the proposed method is also given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure a stable iteration convergence. The new algorithm is tested on four cases, including a head-neck, a prostate, a lung and an oropharyngeal case, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions are in non-convex shapes. To some extent, it is a more efficient optimization technique for choosing constraints than the dose sorting technique.
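The iterative constraint-adding loop described above can be sketched as follows. Note this is a simplified reading: it replaces the paper's interior-point QP with a penalized least-squares re-solve and its geometric-distance sorting with a pick-the-worst-violator rule, and the matrix names are ours:

```python
import numpy as np

def fluence_with_dvc(D_target, p_target, D_oar, d_max, frac_allowed,
                     penalty=100.0, max_iter=20):
    """Iterative constraint adding for a dose-volume constraint (DVC):
    at most `frac_allowed` of OAR voxels may receive more than `d_max`.
    Start from the unconstrained least-squares fluence, then repeatedly
    pin the worst-violating voxel to the dose limit and re-solve.
    Sketch only: penalized least squares stands in for the paper's
    interior-point QP, and "worst violator" for its geometric-distance
    sorting of candidate voxels."""
    A, b = D_target, p_target
    x = np.clip(np.linalg.lstsq(A, b, rcond=None)[0], 0.0, None)
    pinned = set()
    for _ in range(max_iter):
        dose = D_oar @ x
        over = np.flatnonzero(dose > d_max)
        if len(over) <= frac_allowed * len(dose):
            break  # dose-volume constraint satisfied
        pinned.add(int(over[np.argmax(dose[over])]))
        rows = D_oar[sorted(pinned)]
        A = np.vstack([D_target, penalty * rows])
        b = np.concatenate([p_target,
                            penalty * d_max * np.ones(len(pinned))])
        x = np.clip(np.linalg.lstsq(A, b, rcond=None)[0], 0.0, None)
    return x
```

The point of the paper's geometric-distance criterion is precisely to choose which voxel to pin so that each added constraint increases the objective as little as possible; the pick-the-worst rule above is the naive baseline it improves on.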
Energy Technology Data Exchange (ETDEWEB)
Jimenez, M.; Bego, L.; Douls, Y.; Le Dean, P.; Paradowski, V. [Gaz de France, GDF, Dir. de la Recherche, 75 - Paris (France)
2000-07-01
Builders now have perfect command of the natural gas heating technique used for large-volume buildings. However, the sizing of heating facilities still leaves grounds for discussion, whatever the energies actually used. Accordingly, between 1997 and 1999, the ATG (technical association of the Gas industry in France), seven French manufacturers of 'large volume' heating equipment, the Chaleur Et Rayonnement (CER) association and Gaz de France decided to collaborate and develop a 'unified sizing method' for heating facilities using radiating emitters. During the first year of the study, the above partners worked on the said method (a theoretical thermal study of the radiative phenomena, followed by adaptation to the methods currently used by the various manufacturers). In 1998, with the support of the ADEME (the French environment and energy control agency), the partners tested the method on five industrial buildings (studying the thermal behavior and making air renewal measurements with tracer gases). This work made it possible to either confirm or adapt the theoretical evaluations which had been made originally. In 1999, a software program was produced to make the developed method more user-friendly. The program can be used to determine the power to be installed, but also to assess the quality of the chosen configuration of the emitters (unit power, inclination, orientation) for optimum customer comfort. (authors)
International Nuclear Information System (INIS)
Das, Ranjan; Mishra, Subhash C.; Ajith, M.; Uppaluri, R.
2008-01-01
This article deals with the simultaneous estimation of parameters in a 2-D transient conduction-radiation heat transfer problem. The homogeneous medium is assumed to be absorbing, emitting and scattering, and the boundaries of the enclosure are diffuse gray. Three parameters, viz. the scattering albedo, the conduction-radiation parameter and the boundary emissivity, are estimated simultaneously by an inverse method involving the lattice Boltzmann method (LBM) and the finite volume method (FVM) in conjunction with a genetic algorithm (GA). In the direct method, the FVM is used to compute the radiative information while the LBM is used to solve the energy equation. The temperature field obtained in the direct method is then used in the inverse method for simultaneous estimation of the unknown parameters using the LBM-FVM and the GA. The LBM-FVM-GA combination has been found to accurately predict the unknown parameters.
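The inverse estimation loop can be illustrated with a simple real-coded genetic algorithm. The forward model below is a cheap analytic surrogate standing in for the LBM-FVM direct solution, and the GA operators (tournament selection, blend crossover, Gaussian mutation, elitism) are generic choices, not the specific configuration used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(params, grid):
    # Cheap surrogate standing in for the LBM-FVM direct solution:
    # a smooth "temperature field" parameterised by three unknowns.
    omega, n_cr, eps = params
    return eps * np.exp(-n_cr * grid) * (1.0 + omega * np.sin(np.pi * grid))

def genetic_estimate(measured, grid, bounds, pop=60, gens=150):
    """Estimate the three parameters by minimising the sum of squared
    mismatches between the measured and modelled temperature fields."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = lo + rng.random((pop, 3)) * (hi - lo)
    def fitness(p):
        return np.sum((forward_model(p, grid) - measured) ** 2)
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        # tournament selection of parents
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        # blend crossover + Gaussian mutation, clipped to bounds
        alpha = rng.random((pop, 1))
        children = alpha * parents + (1.0 - alpha) * parents[::-1]
        children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)
        children = np.clip(children, lo, hi)
        children[0] = P[np.argmin(f)]  # elitism: keep the best so far
        P = children
    f = np.array([fitness(p) for p in P])
    return P[np.argmin(f)], float(f.min())
```

In the actual study, each fitness evaluation would run the LBM-FVM direct solver, which is what makes the inverse problem computationally demanding.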
Energy Technology Data Exchange (ETDEWEB)
Pinheiro, Monica Araujo
2016-10-01
Alzheimer's disease is a neurodegenerative disorder in which a progressive and irreversible destruction of neurons occurs. According to the World Health Organization (WHO), 35.6 million people are living with dementia, and governments are advised to prioritize early diagnosis techniques. Laboratory and psychological tests for cognitive assessment are conducted and further complemented by neurological imaging from nuclear medicine exams in order to establish an accurate diagnosis. Image quality evaluation and the effects of the reconstruction process are important tools in clinical routine. In the present work, these quality parameters were studied, along with the partial volume effects (PVE) for lesions of different sizes and geometries, which are attributed to the limited resolution of the equipment. In dementia diagnosis, this effect can be confused with uptake losses due to cerebral cortex atrophy. The evaluation used two phantoms of different shapes, as suggested by (a) the American College of Radiology (ACR) and (b) the National Electrical Manufacturers Association (NEMA), for calculation of Contrast, Contrast-to-Noise Ratio (CNR) and Recovery Coefficient (RC) versus lesion shape and size. The technetium-99m radionuclide was used in a local brain scintigraphy protocol, with lesion-to-background ratios of 2:1, 4:1, 6:1, 8:1 and 10:1. Fourteen reconstruction methods were applied for each concentration, using different filters and algorithms. From the analysis of all image properties, the conclusion is that the predominant effect is partial volume, leading to measurement errors of more than 80%. Furthermore, it was demonstrated that the most effective reconstruction method is FBP with a Metz filter, providing better contrast and contrast-to-noise ratio results. In addition, this method shows the best Recovery Coefficient correction for each lesion. The ACR phantom showed the best results, attributed to a more precise reconstruction of a cylinder, which does not
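The three quality metrics named above can be computed from ROI pixel values. The sketch below uses one common set of definitions; exact conventions vary between ACR and NEMA analyses, so treat these formulas as illustrative.

```python
import numpy as np

def quality_metrics(lesion_roi, background_roi, true_ratio):
    """Contrast, contrast-to-noise ratio (CNR) and recovery coefficient (RC)
    for one hot lesion. Definitions assumed here:
      contrast = (S_lesion - S_bg) / S_bg
      CNR      = (S_lesion - S_bg) / sigma_bg
      RC       = measured uptake ratio / true prepared ratio (e.g. 4:1)."""
    s_l = float(np.mean(lesion_roi))
    s_b = float(np.mean(background_roi))
    sigma_b = float(np.std(background_roi))
    contrast = (s_l - s_b) / s_b
    cnr = (s_l - s_b) / sigma_b
    rc = (s_l / s_b) / true_ratio
    return contrast, cnr, rc
```

An RC well below 1 for a small lesion is the signature of the partial volume effect discussed in the abstract.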
Coupling of a 3-D vortex particle-mesh method with a finite volume near-wall solver
Marichal, Y.; Lonfils, T.; Duponcheel, M.; Chatelain, P.; Winckelmans, G.
2011-11-01
This coupling aims at improving the computational efficiency of high Reynolds number bluff body flow simulations by using two complementary methods and exploiting their respective advantages in distinct parts of the domain. Vortex particle methods are particularly well suited for free vortical flows such as wakes or jets (the computational domain - with non-zero vorticity - is then compact, and dispersion errors are negligible). Finite volume methods, however, can handle boundary layers much more easily thanks to anisotropic mesh refinement. In the present approach, the vortex method is used in the whole domain (overlapping domain technique), but its solution is highly under-resolved in the vicinity of the wall. It thus has to be corrected by the near-wall finite volume solution at each time step. Conversely, the vortex method provides the outer boundary conditions for the near-wall solver. A parallel multi-resolution vortex particle-mesh approach is used here, along with an immersed boundary method to take the walls into account. The near-wall flow is solved by OpenFOAM® using the PISO algorithm. We validate the methodology on the flow past a sphere at a moderate Reynolds number. F.R.S. - FNRS Research Fellow.
Rao, R Venkata
2013-01-01
Decision Making in Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods presents the concepts and details of applications of MADM methods. A range of methods is covered, including the Analytic Hierarchy Process (AHP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), VIšekriterijumsko KOmpromisno Rangiranje (VIKOR), Data Envelopment Analysis (DEA), Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE), ELimination Et Choix Traduisant la REalité (ELECTRE), COmplex PRoportional ASsessment (COPRAS), Grey Relational Analysis (GRA), UTility Additive (UTA), and Ordered Weighted Averaging (OWA). The existing MADM methods are improved upon, and three novel multiple attribute decision making methods for solving decision making problems in the manufacturing environment are proposed. The concept of integrated weights is introduced in the proposed subjective and objective integrated weights (SOIW) method and the weighted Euclidean distance ba...
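As an illustration of one of the listed methods, a compact TOPSIS implementation is sketched below. This follows the standard textbook formulation (vector normalisation, weighted matrix, ideal and anti-ideal points, relative closeness), not any variant specific to the book.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.
    benefit[j] is True for a benefit criterion, False for a cost criterion."""
    M = np.asarray(matrix, float)
    w = np.asarray(weights, float) / np.sum(weights)
    V = M / np.linalg.norm(M, axis=0) * w          # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal point
    d_neg = np.linalg.norm(V - anti, axis=1)        # distance to anti-ideal
    score = d_neg / (d_pos + d_neg)                 # closeness in [0, 1]
    return score, np.argsort(-score)                # scores, ranking (best first)
```

For example, with one benefit and one cost criterion, the alternative that is best on both attains a closeness score of 1.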
International Nuclear Information System (INIS)
Watanabe, Yoshirou; Sakai, Akira; Inada, Mitsuo; Shiraishi, Tomokuni; Kobayashi, Akitoshi
1982-01-01
An S2-gated (gated on the second heart sound) method was designed by the authors. In 6 normal subjects and 16 patients (old myocardial infarction, 12 cases; hypertension, 2 cases; aortic regurgitation, 2 cases), radioisotope (RI) angiography using the S2-gated equilibrium method was performed. In the RI angiography, 99mTc-human serum albumin (HSA), 555 MBq (15 mCi), was used as the tracer, with a PDP11/34 minicomputer and a PCG/ECG synchronizer (Metro Inst.). Left ventricular (LV) volume curves gated by S2 and by the electrocardiogram (ECG) R wave were then obtained. From the LV volume curve, the left ventricular ejection fraction (EF), mean ejection rate (mER, s⁻¹), mean filling rate (mFR, s⁻¹) and rapid filling fraction (RFF) were calculated. mFR denotes the mean filling rate during the rapid filling phase. RFF was defined as the fraction of the stroke volume filled during the rapid filling phase. The S2-gated method was more reliable than the ECG-gated method for evaluating the early diastolic phase. RFF differed between the normal group and the myocardial infarction (MI) group (p < 0.005). RFF in the two groups correlated with EF (r = 0.82, p < 0.01). RFF was useful for evaluating MI cases with normal EF values. Comparing mER from the ECG-gated method with mFR from the S2-gated method was useful for evaluating MI cases with normal mER values. mFR was markedly lower than mER in the MI group, but approximately equal to mER in the normal group. In conclusion, evaluation using RFF and mFR by the S2-gated method was useful in MI cases with normal systolic phase indices. (author)
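The indices defined above can be computed from a sampled LV volume curve. The abstract does not give the exact normalisations, so the sketch below uses one plausible set (rates normalised by end-diastolic volume, giving s⁻¹); the sample index marking the end of rapid filling is an assumed input.

```python
import numpy as np

def lv_indices(volume, t, end_rapid_fill_idx):
    """EF, mean ejection rate, mean filling rate and rapid filling fraction
    from a sampled LV volume curve. volume[0] is end-diastole;
    end_rapid_fill_idx marks the end of the rapid filling phase."""
    edv = volume[0]
    es = int(np.argmin(volume))                    # end-systole = minimum volume
    esv = volume[es]
    sv = edv - esv                                  # stroke volume
    ef = sv / edv                                   # ejection fraction
    mer = (sv / edv) / (t[es] - t[0])               # mean ejection rate, s^-1
    filled = volume[end_rapid_fill_idx] - esv       # volume gained in rapid filling
    mfr = (filled / edv) / (t[end_rapid_fill_idx] - t[es])  # mean filling rate, s^-1
    rff = filled / sv                               # rapid filling fraction of SV
    return ef, mer, mfr, rff
```

With these definitions, a reduced rff or mfr flags impaired early diastolic filling even when ef is normal, which is the clinical point of the study.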
International Nuclear Information System (INIS)
Gagne, Nolan L.; Leonard, Kara L.; Huber, Kathryn E.; Mignano, John E.; Duker, Jay S.; Laver, Nora V.; Rivard, Mark J.
2012-01-01
Purpose: A method is introduced to examine the influence of implant duration T, radionuclide, and radiobiological parameters on the biologically effective dose (BED) throughout the entire volume of regions of interest for episcleral brachytherapy using available radionuclides. This method is employed to evaluate a particular eye plaque brachytherapy implant in a radiobiological context. Methods: A reference eye geometry and a 16 mm COMS eye plaque loaded with 103Pd, 125I, or 131Cs sources were examined with dose distributions accounting for plaque heterogeneities. For a standardized 7 day implant, doses to 90% of the tumor volume (TUMOR D90) and to 10% of the organ-at-risk volumes (OAR D10) were calculated. The BED equation from Dale and Jones and published α/β and μ parameters were incorporated with dose-volume histograms (DVHs) for various T values such as T = 7 days (i.e., TUMOR 7BED10 and OAR 7BED10). By calculating BED throughout the volumes, biologically effective dose-volume histograms (BEDVHs) were developed for the tumor and OARs. The influence of T, radionuclide choice, and radiobiological parameters on TUMOR BEDVH and OAR BEDVH was examined. The nominal dose was scaled for shorter implants to achieve biological equivalence. Results: TUMOR D90 values were 102, 112, and 110 Gy for 103Pd, 125I, and 131Cs, respectively. Corresponding TUMOR 7BED10 values were 124, 140, and 138 Gy, respectively. As T decreased from 7 to 0.01 days, the isobiologically effective prescription dose decreased by a factor of three. As expected, TUMOR 7BEDVH did not significantly change as a function of radionuclide half-life but varied by 10% due to the radionuclide dose distribution. Variations in reported radiobiological parameters caused TUMOR 7BED10 to deviate by up to 46%. Over the range of OAR α/β values, OAR 7BED10 varied by up to 41%, 3.1%, and 1.4% for the lens, optic nerve, and lacrimal gland, respectively. Conclusions: BEDVH permits evaluation of the
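The BEDVH idea, mapping each bin of a cumulative DVH onto a BED axis, can be sketched as follows. The BED formula used here is the constant-dose-rate continuous-irradiation special case with mono-exponential sublethal damage repair; the paper applies the full Dale and Jones formulation with source decay, so this is illustrative only, and the numeric parameters in the example are assumed, not taken from the paper.

```python
import numpy as np

def bed_constant_rate(D, T, alpha_beta, mu):
    """BED for continuous irradiation at constant dose rate over time T (h),
    with sublethal damage repair rate mu (1/h):
        BED = D * (1 + g * D / (alpha/beta)),
        g   = (2/(mu T)) * (1 - (1 - exp(-mu T)) / (mu T)).
    Constant-rate special case only (no source decay)."""
    g = (2.0 / (mu * T)) * (1.0 - (1.0 - np.exp(-mu * T)) / (mu * T))
    return D * (1.0 + g * D / alpha_beta)

def bedvh(dose_bins, cum_vol_frac, T, alpha_beta, mu):
    """Map a cumulative DVH onto the BED axis. Since BED is monotonic in
    dose for fixed T, each (dose, volume) point maps directly to
    (BED, volume), giving a biologically effective DVH."""
    bed_bins = bed_constant_rate(np.asarray(dose_bins, float), T, alpha_beta, mu)
    return bed_bins, np.asarray(cum_vol_frac, float)
```

Because the mapping preserves volume fractions, quantities like 7BED10 can be read off the BEDVH exactly as D10 is read off the DVH.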
International Nuclear Information System (INIS)
Mayr, Nina A.; Taoka, Toshiaki; Yuh, William T.C.; Denning, Leah M.; Zhen, Weining K.; Paulino, Arnold C.; Gaston, Robert C.; Sorosky, Joel I.; Meeks, Sanford L.; Walker, Joan L.; Mannel, Robert S.; Buatti, John M.
2002-01-01
Purpose: Recently, imaging-based tumor volume before, during, and after radiation therapy (RT) has been shown to predict tumor response in cervical cancer. However, the effectiveness of different methods and timings of imaging-based tumor size assessment has not been investigated. The purpose of this study was to compare the predictive value for treatment outcome of simple diameter-based ellipsoid tumor volume measurement using orthogonal diameters (with ellipsoid computation) with that of more complex contour tracing/region-of-interest (ROI) 3D tumor volumetry. Methods and Materials: Serial magnetic resonance imaging (MRI) examinations were prospectively performed in 60 patients with advanced cervical cancer (Stages IB2-IVB/recurrent) at the start of RT, during early RT (20-25 Gy), at mid-RT (45-50 Gy), and at follow-up (1-2 months after RT completion). ROI-based volumetry was derived by tracing the entire tumor region in each MR slice on a computer workstation. For the diameter-based surrogate 'ellipsoid volume', the three orthogonal diameters (d1, d2, d3) were measured on film hard copies to calculate volume as an ellipsoid (d1 x d2 x d3 x π/6). Serial tumor volumes and regression rates determined by each method were correlated with local control, disease-free and overall survival, and the results were compared between the two measuring methods. Median post-therapy follow-up was 4.9 years (range, 2.0-8.2 years). Results: The best method and time point of tumor size measurement for the prediction of outcome was the tumor regression rate in the mid-therapy MRI examination (at 45-50 Gy) using 3D ROI volumetry. For the pre-RT measurement, both the diameter-based method and ROI volumetry provided similar predictive accuracy, particularly for patients with small (<40 cm³) and large (≥100 cm³) pre-RT tumor size. However, the pre-RT tumor size measured by either method had much less predictive value for the intermediate-size (40
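The diameter-based surrogate volume (d1 x d2 x d3 x π/6) and a serial regression rate are straightforward to compute; the regression-rate definition below (fractional volume loss between two serial measurements) is an assumed convention for illustration.

```python
import math

def ellipsoid_volume(d1, d2, d3):
    """Surrogate tumor volume from three orthogonal diameters,
    V = d1 * d2 * d3 * pi / 6 (the diameter-based method of the study)."""
    return d1 * d2 * d3 * math.pi / 6.0

def regression_rate(v_earlier, v_later):
    """Fractional tumor regression between two serial volume measurements
    (assumed definition: volume lost as a fraction of the earlier volume)."""
    return (v_earlier - v_later) / v_earlier
```

For equal diameters the formula reduces to the volume of a sphere, which is a quick sanity check.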
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities, and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent, and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way of approaching the averaging problem in relativistic cosmology with the shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial data set for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework for interpreting observations with a 'Friedmannian bias
Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A
2014-12-01
Quantitative computed tomographic analysis (qCTA) is an accurate but time-intensive method used to quantify the volume, mass and aeration of the lungs. The aim of this study was to validate a time-efficient interpolation technique for applying qCTA in ponies. Forty-one thoracic computed tomographic (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated and hyperaerated, defined based on the attenuation in Hounsfield units) were determined for the entire lung from all 5 mm thick CT images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated by interpolation from 10, 12, and 14 CT images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT scan using the interpolation method was 1.5-2 h, compared to 8 h for analysis of all images of one complete thoracic CT scan. The calculation of pulmonary qCTA data by interpolation from 12 CT images was applicable to equine lung CT scans and reduced the time required for analysis by 75%. Copyright © 2014 Elsevier Ltd. All rights reserved.
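The two steps above, estimating a whole-lung quantity from a subset of slices and comparing methods with Bland-Altman statistics, can be sketched as follows. The trapezoidal interpolation of the cross-sectional area profile is a simple stand-in for the validated clinical interpolation technique, which is not specified in the abstract.

```python
import numpy as np

def subset_volume(areas, z, idx):
    """Approximate total lung volume from a subset of CT slices by
    trapezoidal interpolation of the cross-sectional area profile
    (a simple stand-in for the validated interpolation technique)."""
    a, zz = np.asarray(areas, float)[idx], np.asarray(z, float)[idx]
    return float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(zz)))

def bland_altman(reference, estimate):
    """Bias and limits of agreement (bias +/- 2 SD) on percentage
    differences, as used in the study to compare the two methods."""
    ref = np.asarray(reference, float)
    diff = 100.0 * (np.asarray(estimate, float) - ref) / ref
    bias = float(diff.mean())
    return bias, bias - 2.0 * float(diff.std()), bias + 2.0 * float(diff.std())
```

With a smooth area profile, a 5x reduction in the number of slices changes the volume estimate by well under a percent, which mirrors the small biases reported above.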
A non-invasive method of quantifying pancreatic volume in mice using micro-MRI.
Directory of Open Access Journals (Sweden)
Jose L Paredes
In experimental models of pancreatic growth and recovery, changes in pancreatic size are assessed by euthanizing a large cohort of animals at varying time points and measuring organ mass. However, to ascertain this information in clinical practice, patients with pancreatic disorders routinely undergo non-invasive cross-sectional imaging of the pancreas using magnetic resonance imaging (MRI) or computed tomography (CT). The aim of the current study was to develop a thin-sliced, optimized sequence protocol using a high-field MRI to accurately calculate pancreatic volumes in the most common experimental animal, the mouse. Using a 7 Tesla Bruker micro-MRI system, we performed abdominal imaging of whole fixed mice in three standard planes: axial, sagittal, and coronal. The contour of the pancreas was traced using Vitrea software and then transformed into a three-dimensional (3D) reconstruction, from which volumetric measurements were calculated. Images were optimized using heart perfusion-fixation, T1 sequence analy
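The volumetric step behind such contour-based measurements can be sketched as summing traced ROI areas times slice thickness, with each polygonal contour's area given by the shoelace formula. This illustrates the basic principle only; the algorithm actually used by the Vitrea software is not described in the abstract.

```python
import numpy as np

def contour_area(xs, ys):
    """Area of one traced polygonal ROI via the shoelace formula
    (vertices in order, polygon implicitly closed)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    return 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))

def organ_volume(contours, slice_thickness):
    """Organ volume from per-slice traced contours: sum of ROI areas
    times slice thickness (basic slice-summation principle)."""
    return sum(contour_area(xs, ys) for xs, ys in contours) * slice_thickness
```

Thinner slices reduce the error of this piecewise-constant approximation, which is why the study emphasizes a thin-sliced, optimized sequence protocol.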