Volume calculation of the spur gear billet for cold precision forging with average circle method
Institute of Scientific and Technical Information of China (English)
Wangjun Cheng; Chengzhong Chi; Yongzhen Wang; Peng Lin; Wei Liang; Chen Li
2014-01-01
Forged spur gears are widely used in the driving systems of mining machinery and equipment because of their high strength and dimensional accuracy. For the purpose of precisely calculating the volume of a cylindrical spur gear billet in cold precision forging, a new theoretical method named the average circle method was put forward. With this method, a series of gear billet volumes were calculated. By comparison with an accurate three-dimensional modeling method, the accuracy of the average circle method was estimated; its maximum relative error was less than 1.5%, in good agreement with the experimental results. The relative errors between the calculated and experimental gear billet volumes are larger for the reference circle method than for the average circle method. This shows that the average circle method possesses a higher calculation accuracy than the reference circle method (the traditional method) and is worth popularizing in the calculation of spur gear billet volume.
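The computation described above reduces to simple geometry once a single equivalent circle is chosen. The reading of the "average circle" as the mean of the tip and root circle diameters is an assumption for illustration, not the paper's exact construction:

```python
import math

def billet_volume_average_circle(d_tip, d_root, height):
    """Approximate the volume (mm^3) of a cylindrical spur gear billet
    using a single 'average circle' lying between the root and tip circles.
    The averaging rule below is an illustrative assumption."""
    d_avg = 0.5 * (d_tip + d_root)              # assumed average circle diameter
    return math.pi * (d_avg / 2.0) ** 2 * height

# Hypothetical gear blank: 60 mm tip circle, 52 mm root circle, 20 mm face width
v = billet_volume_average_circle(60.0, 52.0, 20.0)
```

A closed-form volume of this kind is what replaces full 3D CAD modeling of the toothed billet in the paper's workflow.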
Heyes, D. M.; Smith, E. R.; Dini, D.; Zaki, T. A.
2011-07-01
It is shown analytically that the method of planes (MOP) [Todd, Evans, and Daivis, Phys. Rev. E 52, 1627 (1995)] and volume averaging (VA) [Cormier, Rickman, and Delph, J. Appl. Phys. 89, 99 (2001), 10.1063/1.1328406] formulas for the local pressure tensor, Pαy(y), where α ≡ x, y, or z, are mathematically identical. In the case of VA, the sampling volume is taken to be an infinitely thin parallelepiped of infinite lateral extent. This limit is shown to yield the MOP expression. The treatment is extended to include the condition of mechanical equilibrium resulting from an imposed force field. This analytical development is followed by numerical simulations. The equivalence of these two methods is demonstrated in the context of non-equilibrium molecular dynamics (NEMD) simulations of boundary-driven shear flow. A wall of tethered atoms is constrained to impose a normal load and a velocity profile on the entrained central layer. The VA formula can be used to compute all components of Pαβ(y), which offers an advantage in calculating, for example, Pxx(y) for nano-scale pressure-driven flows in the x-direction, where deviations from the classical Poiseuille flow solution can occur.
The average free volume model for liquids
Yu, Yang
2014-01-01
In this work, the molar volume thermal expansion coefficients of 59 room-temperature ionic liquids are compared with their van der Waals volumes Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with an attractive force, is proposed to explain the correlation. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.
Li, Ri; Zhou, Liming; Wang, Jian; Li, Yan
2017-02-01
Based on solidification theory and a volume-averaged multiphase solidification model, the solidification process of NH4Cl-70 pct H2O was numerically simulated and experimentally verified. Although researchers have investigated the solidification process of NH4Cl-70 pct H2O, most existing studies have been focused on analysis of a single phenomenon, such as the formation of channel segregation, convection types, and the formation of grains. Based on prior studies, by combining numerical simulation and experimental investigation, all phenomena of the entire computational domain of the solidification process of an NH4Cl aqueous solution were comprehensively investigated for the first time in this study. In particular, the sedimentation of equiaxed grains in the ingot and the induced convection were reproduced. In addition, the formation mechanism of segregation was studied in depth. The calculation demonstrated that the equiaxed grains settled from the wall of the mold and gradually aggregated at the bottom of the mold; when the volume fraction reached a critical value, the columnar grains stopped growing, thus completing the columnar-to-equiaxed transition (CET). Because of solute partitioning, negative segregation occurred at the bottom region of the ingot concentrated with grains, whereas a wide range of positive segregation occurred in the unsolidified, upper part of the ingot. Experimental investigation indicated that the predicted results of the sedimentation of the equiaxed grains in the ingot and the convection types agreed well with the experimental results, thus revealing that the sedimentation of solid phase and convection in the solidification process are the key factors responsible for macrosegregation.
Energy Technology Data Exchange (ETDEWEB)
Barraclough, B; Lebron, S [J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL (United States); Li, J; Fan, Qiyong; Liu, C; Yan, G [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)
2015-06-15
Purpose: A novel convolution-based approach has been proposed to address the ion chamber (IC) volume averaging effect (VAE) for the commissioning of commercial treatment planning systems (TPS). We investigate the use of various convolution kernels and their impact on the accuracy of beam models. Methods: Our approach simulates the VAE by iteratively convolving the calculated beam profiles with a detector response function (DRF) while optimizing the beam model. At convergence, the convolved profiles match the measured profiles, indicating that the calculated profiles match the "true" beam profiles. To validate the approach, beam profiles of an Elekta LINAC were repeatedly collected with ICs of various volumes (CC04, CC13 and SNC 125) to obtain clinically acceptable beam models. The TPS-calculated profiles were convolved externally with the DRF of the respective IC. The beam model parameters were reoptimized using the Nelder-Mead method by forcing the convolved profiles to match the measured profiles. We evaluated three types of DRFs (Gaussian, Lorentzian, and parabolic) and the impact of kernel dependence on field geometry (depth and field size). The profiles calculated with the beam models were compared with SNC EDGE diode-measured profiles. Results: The method was successfully implemented with Pinnacle Scripting and Matlab. The reoptimization converged in ∼10 minutes. For all tested ICs and DRFs, penumbra widths of the TPS-calculated profiles and diode-measured profiles were within 1.0 mm. The Gaussian function had the best performance, with mean penumbra width difference within 0.5 mm. The use of geometry-dependent DRFs showed marginal improvement, reducing the penumbra width differences to less than 0.3 mm. A significant increase in IMRT QA passing rates was achieved with the optimized beam model. Conclusion: The proposed approach significantly improved the accuracy of the TPS beam model. Gaussian functions as the convolution kernel performed consistently better than Lorentzian and parabolic functions.
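The core operation of the approach, convolving a calculated profile with a Gaussian detector response function to reproduce the volume averaging effect, can be sketched as follows; the profile shape and kernel width are illustrative assumptions, not the paper's commissioning data:

```python
import numpy as np

# Simulate the volume averaging effect: convolve a sharp-penumbra beam
# profile with a Gaussian detector response function (DRF).
# The field edges and the kernel sigma below are hypothetical values.

x = np.linspace(-30.0, 30.0, 601)          # off-axis distance, mm (0.1 mm grid)
true_profile = 0.5 * (np.tanh((x + 10) / 0.8) - np.tanh((x - 10) / 0.8))

sigma = 2.0                                 # assumed DRF width, mm
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                      # normalize to unit area

# The "measured" profile has the broadened penumbra an ion chamber would see
measured = np.convolve(true_profile, kernel, mode="same")
```

In the paper's scheme this convolution is applied to the TPS-calculated profile inside the optimization loop, so the beam model is fitted against like-for-like broadened data.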
A sixth order averaged vector field method
Li, Haochen; Wang, Yushun; Qin, Mengzhao
2014-01-01
In this paper, based on the theory of rooted trees and B-series, we give concrete formulas for the substitution law for trees of order ≤ 5. With the help of the new substitution law, we derive a B-series integrator extending the averaged vector field (AVF) method to high order. The new integrator turns out to be of order six and exactly preserves energy for Hamiltonian systems. Numerical experiments are presented to demonstrate the accuracy and the energy-preserving property of the scheme.
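The second-order AVF method that the paper extends can be sketched directly; the quadrature order, step size, and fixed-point solver below are illustrative choices, and the sixth-order B-series construction itself is not reproduced:

```python
import numpy as np

def avf_step(f, y, h, nodes, weights, iters=50):
    """One step of the (second-order) averaged vector field method:
        y1 = y0 + h * \int_0^1 f((1-s) y0 + s y1) ds,
    with the integral evaluated by Gauss-Legendre quadrature and the
    implicit equation solved by fixed-point iteration."""
    y_new = y + h * f(y)                      # explicit Euler predictor
    for _ in range(iters):
        integral = sum(w * f((1.0 - s) * y + s * y_new)
                       for s, w in zip(nodes, weights))
        y_new = y + h * integral
    return y_new

# Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
gl_x, gl_w = np.polynomial.legendre.leggauss(3)
nodes, weights = 0.5 * (gl_x + 1.0), 0.5 * gl_w

# Harmonic oscillator: H(q, p) = (q^2 + p^2)/2, so y' = f(y) = (p, -q)
f = lambda y: np.array([y[1], -y[0]])
y = np.array([1.0, 0.0])
for _ in range(1000):
    y = avf_step(f, y, 0.1, nodes, weights)
```

For this quadratic Hamiltonian the sketch conserves H = 1/2 exactly (up to the fixed-point tolerance); the paper's sixth-order extension keeps this energy-preserving property at much higher accuracy in time.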
Lattice Boltzmann Model for The Volume-Averaged Navier-Stokes Equations
Zhang, Jingfeng; Ouyang, Jie
2014-01-01
A numerical method, based on discrete lattice Boltzmann equation, is presented for solving the volume-averaged Navier-Stokes equations. With a modified equilibrium distribution and an additional forcing term, the volume-averaged Navier-Stokes equations can be recovered from the lattice Boltzmann equation in the limit of small Mach number by the Chapman-Enskog analysis and Taylor expansion. Due to its advantages such as explicit solver and inherent parallelism, the method appears to be more competitive with traditional numerical techniques. Numerical simulations show that the proposed model can accurately reproduce both the linear and nonlinear drag effects of porosity in the fluid flow through porous media.
A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport
Directory of Open Access Journals (Sweden)
Gilberto Espinosa-Paredes
2012-01-01
In this paper a detailed derivation of the general transport equations for two-phase systems using a method based on nonlocal volume averaging is presented. The local volume averaging equations are commonly applied in nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the volume averaging method, they fail at transitions between flow patterns and at boundaries between a two-phase flow and a solid, which produce rapid changes in the physical properties and void fraction. The nonlocal volume averaging equations derived in this work contain new terms related to nonlocal transport effects due to accumulation, convection, diffusion and transport properties for two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or at the boundary of the transition region of two-phase flows, where the local volume averaging equations fail.
On averaging methods for partial differential equations
Verhulst, F.
2001-01-01
The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections … and … contain important material which …
Derivation of a volume-averaged neutron diffusion equation; Atomos para el desarrollo de Mexico
Energy Technology Data Exchange (ETDEWEB)
Vazquez R, R.; Espinosa P, G. [UAM-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Morales S, Jaime B. [UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, Jiutepec, Morelos 62550 (Mexico)]. e-mail: rvr@xanum.uam.mx
2008-07-01
This paper presents a general theoretical analysis of the problem of neutron motion in a nuclear reactor, where large variations in neutron cross sections normally preclude the use of the classical neutron diffusion equation. A volume-averaged neutron diffusion equation is derived which includes correction terms for diffusion and nuclear reaction effects. A method is presented to determine closure relationships for the volume-averaged neutron diffusion equation (e.g., an effective neutron diffusivity). In order to describe the distribution of neutrons in a highly heterogeneous configuration, it was necessary to extend the classical neutron diffusion equation. Thus, the volume-averaged diffusion equation includes two correction factors: the first is related to the neutron absorption process, and the second is a contribution to neutron diffusion; both parameters are related to neutron effects at the interfaces of a heterogeneous configuration. (Author)
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
Title 20, Employees' Benefits; Federal Old-Age, Survivors and Disability Insurance (1950– ); Computing Primary Insurance Amounts; Average-Monthly-Wage Method of Computing Primary Insurance Amounts; § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You…
Sample Selected Averaging Method for Analyzing the Event Related Potential
Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki
The event-related potential (ERP) is often measured through the oddball task, in which subjects are given a "rare stimulus" and a "frequent stimulus". Measured ERPs are analyzed by the averaging technique; in the results, the amplitude of the P300 component becomes large when the "rare stimulus" is given. However, some measured trials do not contain the original features of the ERP. Thus, it is necessary to reject unsuitable measured trials when using the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs for the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.
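A minimal sketch of trial rejection before ensemble averaging, with a synthetic P300-like template and a purely illustrative amplitude threshold (the paper's rejection criterion and Woody filtering are not reproduced here):

```python
import numpy as np

# Ensemble averaging with rejection of unsuitable trials: trials whose
# peak amplitude exceeds a threshold (e.g. blink artifacts) are discarded
# before averaging. All data and the threshold are synthetic/illustrative.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.8, 200)                       # time, seconds
p300 = np.exp(-((t - 0.3) / 0.05) ** 2)              # idealized P300 template

trials = p300 + 0.5 * rng.standard_normal((40, t.size))
trials[3] += 30.0                                    # simulate one artifact trial

threshold = 10.0                                     # rejection criterion (a.u.)
keep = np.abs(trials).max(axis=1) < threshold
erp = trials[keep].mean(axis=0)                      # average of accepted trials
```

Averaging only the accepted trials recovers the P300 peak that a single contaminated trial would otherwise distort.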
Grade-Average Method: A Statistical Approach for Estimating ...
African Journals Online (AJOL)
Grade-Average Method: A Statistical Approach for Estimating Missing Value for Continuous Assessment Marks. Journal of the Nigerian Association of Mathematical Physics.
The averaging of nonlocal Hamiltonian structures in Whitham's method
Directory of Open Access Journals (Sweden)
Andrei Ya. Maltsev
2002-01-01
We consider the m-phase Whitham averaging method and propose a procedure for averaging nonlocal Hamiltonian structures. The procedure is based on the existence of a sufficient number of local commuting integrals of the system and gives a Poisson bracket of Ferapontov type for Whitham's system. The method can be considered as a generalization of the Dubrovin-Novikov procedure for local field-theoretical brackets.
Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes
2014-08-01
The goal of this study was to obtain the dysphagia limit and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints, and in normal subjects, and to investigate the relationship between them. We hypothesized a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The dysphagia limit was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significant, moderate, direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.
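The average-volume-per-swallow measure reduces to simple arithmetic; the swallow count below is illustrative, not a value from the study:

```python
# Average volume per swallow: the total volume ingested divided by the
# number of swallows used to drink it. The swallow count is hypothetical.

total_volume_ml = 100.0   # water volume offered in the screening test
num_swallows = 8          # e.g. counted from the suprahyoid sEMG recording

avg_volume_per_swallow = total_volume_ml / num_swallows  # ml per swallow
```

A lower value (more, smaller swallows for the same 100 ml) is the pattern the study associates with the PD group.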
Discretized Volumes in Numerical Methods
Antal, Miklós
2007-01-01
We present two novel techniques in numerical methods. The first technique compiles the domain of the numerical method as a discretized volume: congruent elements are glued together to compile the domain over which the solution of a boundary value problem is sought. We associate a group and a graph to that volume. When the group is a symmetry of the boundary value problem under investigation, one can specify the structure of the solution and find out whether there are equispectral volumes of a given type. The second technique uses a complex mapping to transplant the solution from volume to volume, together with a correction function. An equation for the correction function is given. A simple example demonstrates the feasibility of the suggested method.
An averaging method for nonlinear laminar Ekman layers
DEFF Research Database (Denmark)
Andersen, Anders Peter; Lautrup, B.; Bohr, T.
2003-01-01
We study steady laminar Ekman boundary layers in rotating systems using an averaging method similar to the technique of von Karman and Pohlhausen. The method allows us to explore nonlinear corrections to the standard Ekman theory even at large Rossby numbers. We consider both the standard self…
Measurement of average density and relative volumes in a dispersed two-phase fluid
Sreepada, Sastry R.; Rippel, Robert R.
1992-01-01
An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density, and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.
Estimation of Otoacoustic Emission Signals by Using the Synchronous Averaging Method
Directory of Open Access Journals (Sweden)
Linas Sankauskas
2011-08-01
The study presents the results of an investigation of the synchronous averaging method and its application to the estimation of impulse-evoked otoacoustic emission (IEOAE) signals. The method was analyzed using synthetic and real signals. Synthetic signals were modeled as mixtures of a deterministic component with noise realizations. Two types of noise were used: normal (Gaussian) and transient-impulse-dominated (Laplacian). The signal-to-noise ratio was used as the measure of signal quality after processing. In order to account for the varying amplitude of the deterministic component across realizations, a weighted averaging method was investigated. Results show that the performance of the synchronous averaging method is very similar for both Gaussian and Laplacian noise. The weighted averaging method helps to cope with a varying deterministic component or noise level in the case of nonhomogeneous ensembles, as is the case for IEOAE signals. (Article in Lithuanian.)
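One common form of weighted averaging, with weights inversely proportional to each realization's estimated noise power, can be sketched as follows; whether this matches the exact weighting used in the study is an assumption, and the data are synthetic:

```python
import numpy as np

# Weighted synchronous averaging: each realization of a repeated transient
# is weighted inversely to its estimated noise power, so noisy sweeps
# contribute less to the ensemble average (a nonhomogeneous ensemble).

rng = np.random.default_rng(1)
n_sweeps, n_samples = 50, 256
t = np.arange(n_samples)
signal = np.sin(2 * np.pi * t / 64.0)                # deterministic component

noise_sd = rng.uniform(0.2, 2.0, size=n_sweeps)      # sweep-dependent noise level
sweeps = signal + noise_sd[:, None] * rng.standard_normal((n_sweeps, n_samples))

# Estimate per-sweep noise power from the residual about the plain average,
# then form the inverse-variance weighted average.
plain_avg = sweeps.mean(axis=0)
resid_var = ((sweeps - plain_avg) ** 2).mean(axis=1)
w = 1.0 / resid_var
weighted_avg = (w[:, None] * sweeps).sum(axis=0) / w.sum()
```

With widely varying noise levels the weighted estimate tracks the deterministic component more closely than the plain ensemble mean.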
Simple Moving Average: A Method of Reporting Evolving Complication Rates.
Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J
2016-09-01
Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: the simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of the simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). The simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of the simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, the simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.]
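The simple moving average itself is straightforward to compute; the outcome series and lag below are synthetic, not the study's case series:

```python
# Simple moving average (SMA) of a binary complication indicator over
# consecutive cases: the complication rate over the most recent `lag`
# procedures. The outcome series here is synthetic (1 = complication).

def simple_moving_average(outcomes, lag):
    """SMA complication rate for each window of `lag` consecutive cases."""
    return [sum(outcomes[i:i + lag]) / lag
            for i in range(len(outcomes) - lag + 1)]

# Early cases near a 20% complication rate, later cases near 10%:
# the SMA reveals the downward trend that a single pooled rate hides.
outcomes = [1, 0, 0, 0, 0] * 20 + [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] * 10
rates = simple_moving_average(outcomes, lag=50)
```

The pooled rate for this synthetic series is 15%, while the most recent SMA window sits at 10%, mirroring the kind of divergence the study reports between the traditional rate and the current trend.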
The average free volume model for the ionic and simple liquids
Yu, Yang
2014-01-01
In this work, the molar volume thermal expansion coefficients of 60 room-temperature ionic liquids are compared with their van der Waals volumes Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with an attractive force, is proposed to explain the correlation. Some typical one-atom liquids (molten metals and liquid noble gases) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.
Directory of Open Access Journals (Sweden)
H. Matsueda
2010-02-01
Column-averaged volume mixing ratios of carbon dioxide (XCO2) during the period from January 2007 to May 2008 over Tsukuba, Japan, were derived by using CO2 concentration data observed by Japan Airlines Corporation (JAL) commercial airliners, based on the assumption that CO2 profiles over Tsukuba and Narita were the same. CO2 profile data for 493 flights on clear-sky days were analysed in order to calculate XCO2 with an ancillary dataset: Tsukuba observational data (by rawinsonde and a meteorological tower) or global meteorological data (NCEP and CIRA-86). The amplitude of the seasonal variation of XCO2 (Tsukuba observational) was determined by a least-squares fit using a harmonic function to roughly evaluate the seasonal variation over Tsukuba. The highest and lowest values of the fitted curve in 2007 for XCO2 (Tsukuba observational) were 386.4 and 381.7 ppm in May and September, respectively. The dependence of XCO2 on the type of ancillary dataset was evaluated. The average difference between XCO2 (global), from global climatological data, and XCO2 (Tsukuba observational), i.e., the bias of XCO2 (global) relative to XCO2 (Tsukuba observational), was found to be -0.621 ppm with a standard deviation of 0.682 ppm. The uncertainty of XCO2 (global) relative to XCO2 (Tsukuba observational) was estimated to be 0.922 ppm. This small uncertainty suggests that the present method of XCO2 calculation using data from airliners and global climatological data can be applied to the validation of GOSAT products for XCO2 over airports worldwide.
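The harmonic least-squares fit used to characterize the seasonal cycle can be sketched as below; the monthly XCO2 series is synthetic, and the trend-plus-annual-harmonic model is one plausible form of the fit, not necessarily the paper's exact basis:

```python
import numpy as np

# Least-squares fit of a linear trend plus an annual harmonic to monthly
# column-averaged CO2 values, the kind of curve used to characterize the
# seasonal variation of XCO2. The data below are synthetic.

t = np.arange(36) / 12.0                                     # time in years
xco2 = 382.0 + 2.0 * t + 2.3 * np.sin(2 * np.pi * t + 0.5)   # synthetic series

# Design matrix: constant, linear trend, annual sine/cosine pair
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, xco2, rcond=None)
amplitude = np.hypot(coef[2], coef[3])        # fitted seasonal amplitude, ppm
```

The peak-to-trough seasonal swing is twice this amplitude; for the real Tsukuba series the fitted curve spans roughly 381.7-386.4 ppm.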
Volume Averaging Theory (VAT) based modeling and closure evaluation for fin-and-tube heat exchangers
Zhou, Feng; Catton, Ivan
2012-10-01
A fin-and-tube heat exchanger was modeled based on Volume Averaging Theory (VAT) in such a way that the details of the original structure were replaced by their averaged counterparts, so that the VAT-based governing equations can be efficiently solved for a wide range of parameters. To complete the VAT-based model, proper closure is needed, which is related to a local friction factor and a heat transfer coefficient of a Representative Elementary Volume (REV). The terms in the closure expressions are complex, and relating experimental data to the closure terms is sometimes difficult. In this work we use CFD to evaluate the rigorously derived closure terms over one of the selected REVs. The objective is to show how heat exchangers can be modeled as a porous medium and how CFD can be used in place of a detailed, often formidable, experimental effort to obtain closure for the model.
Analytic continuation average spectrum method for transport in quantum liquids
Energy Technology Data Exchange (ETDEWEB)
Kletenik-Edelman, Orly [School of Chemistry, Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978 (Israel); Rabani, Eran, E-mail: rabani@tau.ac.il [School of Chemistry, Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978 (Israel); Reichman, David R. [Department of Chemistry, Columbia University, 3000 Broadway, New York, NY 10027 (United States)
2010-05-12
Recently, we have applied the analytic continuation averaged spectrum method (ASM) to calculate collective density fluctuations in quantum liquids. Unlike the maximum entropy (MaxEnt) method, the ASM approach is capable of revealing resolved modes in the dynamic structure factor, in agreement with experiments. In this work we further develop the ASM to study single-particle dynamics in quantum liquids with dynamical susceptibilities that are characterized by a smooth spectrum. Surprisingly, we find that for the power spectrum of the velocity autocorrelation function there are pronounced differences in comparison with the MaxEnt approach, even for this simple case of a smooth unimodal dynamic response. We show that for liquid para-hydrogen the ASM is closer to the centroid molecular dynamics (CMD) result, while for normal liquid helium it agrees better with the quantum mode coupling theory (QMCT) and with the MaxEnt approach.
ORDERED WEIGHTED AVERAGING AGGREGATION METHOD FOR PORTFOLIO SELECTION
Institute of Scientific and Technical Information of China (English)
LIU Shancun; QIU Wanhua
2004-01-01
Portfolio management is a typical decision making problem under incomplete, sometimes unknown, information. This paper considers the portfolio selection problem under a general setting of uncertain states without probability. The investor's preference is based on his optimism degree about the nature, and his attitude can be described by an ordered weighted averaging (OWA) aggregation function. We construct the OWA portfolio selection model, which is a nonlinear programming problem. The problem can be equivalently transformed into a mixed integer linear program. A numerical example is given, and the solutions imply that the investor's strategies depend not only on his optimism degree but also on his preference weight vector. The general game-theoretical portfolio selection method, the max-min method, and the competitive ratio method are all special settings of this model.
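The OWA aggregation at the heart of the model is easy to state concretely: sort the inputs in descending order, then take the dot product with a weight vector that encodes the decision maker's optimism. The returns and weight vectors below are hypothetical:

```python
import numpy as np

# Ordered weighted averaging (OWA): sort inputs in descending order and
# dot with a weight vector. w = (1,0,...,0) gives the max (full optimism),
# (0,...,0,1) gives the min (full pessimism, i.e. max-min),
# and uniform weights give the plain average.

def owa(values, weights):
    values = np.sort(np.asarray(values, dtype=float))[::-1]   # descending
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return float(values @ weights)

returns = [0.05, -0.02, 0.11, 0.03]                 # hypothetical asset returns
pessimistic = owa(returns, [0.0, 0.0, 0.0, 1.0])    # worst case
optimistic = owa(returns, [1.0, 0.0, 0.0, 0.0])     # best case
neutral = owa(returns, [0.25, 0.25, 0.25, 0.25])    # plain average
```

The max-min (pessimistic) portfolio criterion mentioned in the abstract corresponds to the weight vector concentrated on the last position.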
Topology optimization using the finite volume method
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Bendsøe, Martin P.; Sigmund, Ole
Computational procedures for topology optimization of continuum problems using a material distribution method are typically based on the application of the finite element method (FEM) (see, e.g., [1]). In the present work we study a computational framework based on the finite volume method (FVM; see …). The presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM- and FEM-based formulations differ both in how one computes the design …, where $\tilde{\mathbf K}$ is different from $\mathbf K$; in a FEM scheme these matrices are equal, following the principle of virtual work. Using a staggered mesh and averaging procedures consistent with the FVM, the checkerboard problem is eliminated. Two averages are compared to FE solutions, being …
A Single Image Dehazing Method Using Average Saturation Prior
Directory of Open Access Journals (Sweden)
Zhenfei Gu
2017-01-01
Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
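The classical atmospheric scattering model that the paper generalizes can be inverted directly once the transmission and atmospheric light are estimated; the values below are illustrative, and the paper's improved model and ASP prior are not reproduced here:

```python
import numpy as np

# Classical atmospheric scattering model: I(x) = J(x) t(x) + A (1 - t(x)),
# where I is the hazy observation, J the scene radiance, t the transmission
# and A the atmospheric light. Given estimates of t and A, the scene is
# recovered as J = (I - A) / t + A. All values here are illustrative.

A = 0.95                                    # estimated atmospheric light
t = np.array([0.9, 0.5, 0.2])               # per-pixel transmission estimates
J = np.array([0.2, 0.6, 0.4])               # "true" scene radiance (synthetic)

I = J * t + A * (1.0 - t)                   # forward (hazing) model
J_rec = (I - A) / np.maximum(t, 0.1) + A    # inversion, with t floored at 0.1
```

Flooring the transmission is the usual guard against noise amplification where t is small; the paper's contribution is to relax the homogeneous-atmosphere assumption built into this model.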
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
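The basic BMA bookkeeping behind the hierarchical tree, probability-weighted means plus a within/between variance split, can be sketched for a scalar prediction; the numbers are illustrative, not from the groundwater study:

```python
import numpy as np

# Bayesian model averaging for a scalar prediction: the BMA mean is the
# probability-weighted mean of the candidate models' predictions, and the
# total variance splits into within-model and between-model parts:
#   Var_total = sum_k p_k * var_k  +  sum_k p_k * (mean_k - BMA mean)^2.
# The HBMA method applies this decomposition recursively over a tree.

p = np.array([0.5, 0.3, 0.2])          # posterior model probabilities
means = np.array([10.0, 12.0, 9.0])    # each model's predictive mean
variances = np.array([1.0, 2.0, 1.5])  # each model's predictive variance

bma_mean = p @ means
within = p @ variances                        # within-model variance
between = p @ (means - bma_mean) ** 2         # between-model variance
total = within + between
```

The between-model term is the part of the uncertainty attributable to not knowing which candidate model is correct, which is exactly the component the hierarchical representation prioritizes.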
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to …
Davit, Yohan
2013-12-01
A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.
Berezhkovskii, Alexander M.; Weiss, George H.
1996-07-01
In order to extend the greatly simplified Smoluchowski model for chemical reaction rates it is necessary to incorporate many-body effects. A generalization with this feature is the so-called trapping model, in which random walkers move among a uniformly distributed set of traps. The solution of this model requires consideration of the number of distinct sites visited by a single n-step random walk. A recent analysis [H. Larralde et al., Phys. Rev. A 45, 1728 (1992)] has considered a generalized version of this problem by calculating the average number of distinct sites visited by N n-step random walks. A related continuum analysis is given in [A. M. Berezhkovskii, J. Stat. Phys. 76, 1089 (1994)]. We consider a slightly different version of the general problem by calculating the average volume of the Wiener sausage generated by Brownian particles injected at random times. The analysis shows that two types of behavior are possible: one in which there is strong overlap between the Wiener sausages of the particles, and a second in which the particles are mainly independent of one another. Either one or both of these regimes occur, depending on the dimension.
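A discrete analogue of this quantity is easy to explore numerically. The sketch below is a lattice random-walk stand-in for Brownian motion, not the continuum calculation of the paper: walkers appear at the origin at random birth times and the union of their visited sites approximates the sausage volume; overlap shows up as the union being smaller than the sum of the individual traces. All parameters and names are illustrative assumptions.

```python
import random

def distinct_sites(n_walkers, n_steps, birth_window, dim=2, seed=1):
    # Count distinct lattice sites covered by the union of all walkers' paths.
    # Each walker is born at a random time in [0, birth_window) and walks
    # until the common horizon n_steps.
    rng = random.Random(seed)
    visited = set()
    for _ in range(n_walkers):
        birth = rng.randrange(birth_window)
        pos = (0,) * dim
        visited.add(pos)
        for _ in range(max(0, n_steps - birth)):
            axis = rng.randrange(dim)
            step = rng.choice((-1, 1))
            pos = tuple(c + (step if i == axis else 0) for i, c in enumerate(pos))
            visited.add(pos)
    return len(visited)
```

Comparing `distinct_sites(N, ...)` against N times the single-walker value gives a rough numerical feel for the strong-overlap vs. independent regimes.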
Energy Technology Data Exchange (ETDEWEB)
Espinosa-Paredes, Gilberto, E-mail: gepe@xanum.uam.m [Area de Ingenieria en Recursos Energeticos, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Apartado Postal 55-535, Mexico D.F. 09340 (Mexico)
2010-05-15
The aim of this paper is to propose a framework to obtain a new formulation for multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the averaged volume conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique and, therefore, the macroscopic theories of multiphase flow should be modified to include appropriate considerations and terms in the corresponding equations. In these cases the local form of the averaged volume conservation equations is not appropriate to describe the multiphase system. As an example of the conservation equations without length-scale restrictions, the natural circulation boiling water reactor was considered to study the non-local effects on the thermal-hydraulic core performance during steady-state and transient behaviors, and the results were compared with the classic local averaging volume conservation equations.
Institute of Scientific and Technical Information of China (English)
T. Wang; B. Pustal; M. Abondano; T. Grimmig; A. Bührig-Polaczek; M. Wu; A. Ludwig
2005-01-01
The cooling channel process is a rheocasting method by which a prematerial with a globular microstructure can be produced to suit the thixocasting process. A three-phase model based on the volume averaging approach is proposed to simulate the cooling channel process for the A356 aluminum alloy. The three phases are liquid, solid, and air, treated as separate and interacting continua sharing a single pressure field. The mass, momentum, and enthalpy transport equations for each phase are solved. The developed model can predict the evolution of the liquid, solid, and air fractions as well as the distribution of grain density and grain size. The effect of pouring temperature on grain density, grain size, and solid fraction is analyzed in detail.
Equivalence of the generalized Lie-Hori method and the method of averaging. [in celestial mechanics
Ahmed, A. H.; Tapley, B. D.
1984-01-01
In this investigation, a comparison is made of two methods for developing perturbation theories for non-canonical dynamical systems: the generalized Lie-Hori method and the method of averaging. In the comparison presented here, the equivalence of the methods up to second order in the small parameter is shown. However, the approach used can be extended to demonstrate the equivalence for higher orders. To illustrate the equivalence, Duffing's equation, the van der Pol equation, and the oscillator with quadratic damping are solved using each method.
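As a concrete instance of the method of averaging applied to one of the test problems above: for the van der Pol oscillator x'' + x = eps*(1 - x^2)*x', first-order averaging yields the amplitude equation da/dt = (eps/2)*a*(1 - a^2/4), which predicts a limit-cycle amplitude of 2 for any positive initial amplitude. A minimal numerical sketch (Euler integration; the step size and horizon are arbitrary choices, and this illustrates only the averaging side of the comparison):

```python
def averaged_vdp_amplitude(a0, eps=0.1, dt=0.01, t_end=400.0):
    # Integrate the first-order averaged amplitude equation for van der Pol:
    #   da/dt = (eps/2) * a * (1 - a**2 / 4)
    # Averaging predicts a -> 2 (the limit-cycle amplitude) for any a0 > 0.
    a, t = a0, 0.0
    while t < t_end:
        a += dt * (eps / 2.0) * a * (1.0 - a * a / 4.0)
        t += dt
    return a
```

Starting from either a small (0.5) or large (3.0) initial amplitude, the averaged dynamics settle onto the same value of 2, which is the classical averaging-method result for this equation.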
20 CFR 404.210 - Average-indexed-monthly-earnings method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-indexed-monthly-earnings method. 404... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Indexed-Monthly-Earnings Method of Computing Primary Insurance Amounts § 404.210 Average-indexed-monthly-earnings method. (a) Who is...
Robust numerical methods for conservation laws using a biased averaging procedure
Choi, Hwajeong
In this thesis, we introduce a new biased averaging procedure (BAP) and use it in developing high resolution schemes for conservation laws. Systems of conservation laws arise in a variety of physical problems, such as the Euler equations of compressible flow, magnetohydrodynamics, multicomponent flows, blast waves, and the flow of glaciers. Many modern shock capturing schemes are based on solution reconstruction by high order polynomial interpolation and time evolution by the solutions of Riemann problems. Due to the existence of discontinuities in the solution, the interpolating polynomial has to be carefully constructed to avoid possible oscillations near discontinuities. The BAP is a more general and simpler way to approximate higher order derivatives of given data without introducing oscillations, compared to limiters and essentially non-oscillatory interpolations. For the solution of a system of conservation laws, we present a finite volume method which employs flux splitting and uses componentwise reconstruction of the upwind fluxes. A high order piecewise polynomial constructed using the BAP is used to approximate the components of the upwind fluxes. This scheme requires neither characteristic decomposition nor a Riemann solver, offering easy implementation and a relatively small computational cost. More importantly, the BAP extends naturally to unstructured grids, which is demonstrated through a cell-centered finite volume method with adaptive mesh refinement. A number of numerical experiments from various applications demonstrate the robustness and accuracy of this approach, and show its potential for other practical applications.
Kabala, Z. J.
1997-08-01
Under the assumption that local solute dispersion is negligible, a new general formula (in the form of a convolution integral) is found for the arbitrary k-point ensemble moment of the local concentration of a solute convected in arbitrary m spatial dimensions with general sure initial conditions. From this general formula, new closed-form solutions in m=2 spatial dimensions are derived for 2-point ensemble moments of the local solute concentration for the impulse (Dirac delta) and Gaussian initial conditions. When integrated over an averaging window, these solutions lead to new closed-form expressions for the first two ensemble moments of the volume-averaged solute concentration and to the corresponding concentration coefficients of variation (CV). Also, for the impulse (Dirac delta) solute concentration initial condition, the second ensemble moment of the solute point concentration in two spatial dimensions and the corresponding CV are demonstrated to be unbounded. For impulse initial conditions the CVs for volume-averaged concentrations are compared with each other for a tracer from the Borden aquifer experiment. The point-concentration CV is unacceptably large in the whole domain, implying that the ensemble mean concentration is inappropriate for predicting the actual concentration values. The volume-averaged concentration CV decreases significantly with an increasing averaging volume. Since local dispersion is neglected, the new solutions should be interpreted as upper limits for the yet to be derived solutions that account for local dispersion; and so should the presented CVs for the Borden tracers. The new analytical solutions may be used to test the accuracy of Monte Carlo simulations or other numerical algorithms that deal with stochastic solute transport. They may also be used to determine the size of the averaging volume needed to make a quasi-sure statement about the solute mass contained in it.
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical, and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. Forecast accuracy was evaluated by comparing model forecasts to actual values using the mean absolute percentage error. The sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma visits, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity, and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The models also performed well in forecasting revenue and visitor volume.
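The full ARIMA machinery is not reproduced here; as a hedged illustration of its autoregressive ingredient only, a least-squares AR(1) fit and one-step-ahead forecast can be written in a few lines. Function names and the tuple-based model representation are illustrative assumptions, not the study's implementation.

```python
def fit_ar1(series):
    # Least-squares fit of x[t] = c + phi * x[t-1]; a minimal stand-in for
    # the AR component of an ARIMA model.
    x = series[:-1]
    y = series[1:]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, steps, model):
    # Iterate the fitted recursion forward from the last observation.
    c, phi = model
    out = []
    last = series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

In practice one would use a full ARIMA implementation (e.g. statsmodels' `ARIMA`), which also handles differencing and moving-average terms; the sketch above only shows the core fitting idea.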
Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.
2012-04-01
Consistently with the two-region model working hypotheses, we subdivide the pore space into two volumes, which we select according to the features of the local micro-scale velocity field. Assuming separation of scales, the mathematical development associated with the averaging method in the two volumes leads to a generalized two-equation model. The final (upscaled) formulation includes the standard first-order mass exchange term together with additional terms, which we discuss. Our developments allow us to identify the assumptions that are usually implicitly embedded in the adoption of a two-region mobile-mobile model. All macro-scale properties introduced in this model can be determined explicitly from the pore-scale geometry and hydrodynamics through the solution of a set of closure equations. We pursue here an unsteady closure of the problem, leading to the occurrence of nonlocal (in time) terms in the upscaled system of equations. We provide the solution of the closure problems for a simple application, documenting the time-dependent and asymptotic behavior of the system.
Yong, Liu; Dingfa, Huang; Yong, Jiang
2012-07-20
Temporal phase unwrapping is an important method for shape measurement in structured light projection. Its measurement errors mainly come from camera noise and nonlinearity. Analysis shows that least-squares fitting cannot completely eliminate nonlinear errors, although it significantly reduces random errors. To further reduce the measurement errors of current temporal phase unwrapping algorithms, we propose in this paper a phase averaging method (PAM), built on fast classical temporal phase unwrapping algorithms, in which an additional fringe sequence at the highest fringe density is employed during data processing and the phase offset of each set of four frames is chosen according to the period of the nonlinear phase errors. This method decreases both the random and the systematic errors through statistical averaging. In addition, the length of the additional fringe sequence can be changed flexibly according to the required measurement precision. Theoretical analysis and simulation results confirm the validity of the proposed method.
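The two building blocks a PAM-like scheme rests on, per-frame-set phase retrieval and statistical averaging of the resulting phase estimates, can be sketched as follows. This assumes the standard four-step phase-shifting convention with pi/2 offsets between frames, which may differ from the exact scheme in the paper; the averaging uses the complex mean to avoid 2*pi wrap-around artifacts.

```python
import math

def four_step_phase(i1, i2, i3, i4):
    # Standard four-step phase-shifting formula for frames offset by pi/2:
    #   I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)
    return math.atan2(i4 - i2, i1 - i3)

def average_phase(phases):
    # Average several wrapped phase estimates via the complex mean, so that
    # values near +pi and -pi average correctly.
    re = sum(math.cos(p) for p in phases)
    im = sum(math.sin(p) for p in phases)
    return math.atan2(im, re)
```

In the method described above, several four-frame sets with carefully chosen phase offsets would each yield one estimate via `four_step_phase`, and `average_phase` would combine them.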
COMPLEX INNER PRODUCT AVERAGING METHOD FOR CALCULATING NORMAL FORM OF ODE
Institute of Scientific and Technical Information of China (English)
陈予恕; 孙洪军
2001-01-01
This paper puts forward a complex inner product averaging method for calculating the normal form of ODEs. Compared with the conventional averaging method, the theoretical analysis takes such a simple form that it is easy to implement in a computer program. The results can be applied to both autonomous and non-autonomous systems. Finally, an example is solved to verify the method.
Programmatic methods for addressing contaminated volume uncertainties.
Energy Technology Data Exchange (ETDEWEB)
DURHAM, L.A.; JOHNSON, R.L.; RIEMAN, C.R.; SPECTOR, H.L.; Environmental Science Division; U.S. ARMY CORPS OF ENGINEERS BUFFALO DISTRICT
2007-01-01
Accurate estimates of the volumes of contaminated soils or sediments are critical to effective program planning and to successfully designing and implementing remedial actions. Unfortunately, data available to support the preremedial design are often sparse and insufficient for accurately estimating contaminated soil volumes, resulting in significant uncertainty associated with these volume estimates. The uncertainty in the soil volume estimates significantly contributes to the uncertainty in the overall project cost estimates, especially since excavation and off-site disposal are the primary cost items in soil remedial action projects. The Army Corps of Engineers Buffalo District's experience has been that historical contaminated soil volume estimates developed under the Formerly Utilized Sites Remedial Action Program (FUSRAP) often underestimated the actual volume of subsurface contaminated soils requiring excavation during the course of a remedial activity. In response, the Buffalo District has adopted a variety of programmatic methods for addressing contaminated volume uncertainties. These include developing final status survey protocols prior to remedial design, explicitly estimating the uncertainty associated with volume estimates, investing in predesign data collection to reduce volume uncertainties, and incorporating dynamic work strategies and real-time analytics in predesign characterization and remediation activities. This paper describes some of these experiences in greater detail, drawing from the knowledge gained at Ashland1, Ashland2, Linde, and Rattlesnake Creek. In the case of Rattlesnake Creek, these approaches provided the Buffalo District with an accurate predesign contaminated volume estimate and resulted in one of the first successful FUSRAP fixed-price remediation contracts for the Buffalo District.
Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu
2017-01-01
In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare two methods of assessing the Ki-67 LI, the average method vs. the hot spot method, and thus to determine which method is more appropriate for predicting the prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the difference in the Ki-67 LIs between the two methods (ΔKi-67) and the ratio of the Ki-67 LIs of the two methods (H/A ratio). In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, a high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating the Ki-67 LI have good predictive performance for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility. PMID:28187177
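The two ways of summarizing the three area counts, and the ΔKi-67 and H/A statistics defined above, reduce to simple arithmetic. A minimal sketch (the `(positive_cells, total_cells)` tuple format and function name are illustrative assumptions):

```python
def ki67_indices(area_counts):
    # area_counts: list of (positive_cells, total_cells) pairs, one per
    # representative area. Returns the average-method LI, the hot-spot-method
    # LI (the maximum over areas), their difference, and their ratio.
    lis = [100.0 * p / t for p, t in area_counts]
    average_li = sum(lis) / len(lis)
    hotspot_li = max(lis)
    return average_li, hotspot_li, hotspot_li - average_li, hotspot_li / average_li
```

By construction the hot-spot LI is never below the average LI, so ΔKi-67 is non-negative and the H/A ratio is at least 1.0, consistent with the reported ranges.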
Kováčik, L; Kereïche, S; Matula, P; Raška, I
2014-01-01
Electron tomographic reconstructions suffer from a number of artefacts arising from effects accompanying the acquisition of a set of tilted projections of the specimen in a transmission electron microscope and from its subsequent computational handling. The most pronounced artefacts usually come from imprecise projection alignment, distortion of specimens during tomogram acquisition, and the presence of a region of missing data in Fourier space, the "missing wedge". The ray artefacts caused by the missing wedge can be attenuated by an angular image filter, which smooths the transition between the data and the missing wedge regions. In this work, we present an analysis of the influence of angular filtering on the resolution of averaged repetitive structural motifs extracted from three-dimensional reconstructions of tomograms acquired in the single-axis tilting geometry.
Topology optimization using the finite volume method
DEFF Research Database (Denmark)
Computational procedures for topology optimization of continuum problems using a material distribution method are typically based on the application of the finite element method (FEM) (see, e.g. [1]). In the present work we study a computational framework based on the finite volume method (FVM, see, e.g. [2]) in order to develop methods for topology design for applications where conservation laws are critical, such that element-wise conservation in the discretized models has a high priority. This encompasses problems involving, for example, mass and heat transport. The work described ... the well known Reuss lower bound. [1] Bendsøe, M.P.; Sigmund, O. 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag. [2] Versteeg, H.K.; Malalasekera, W. 1995: An Introduction to Computational Fluid Dynamics: The Finite Volume Method. London: Longman.
Volume-Averaged Model of Inductively-Driven Multicusp Ion Source
Patel, Kedar K.; Lieberman, M. A.; Graf, M. A.
1998-10-01
A self-consistent spatially averaged model of high-density oxygen and boron trifluoride discharges has been developed for a 13.56 MHz, inductively coupled multicusp ion source. We determine the positive ion, negative ion, and electron densities, the ground state and metastable densities, and the electron temperature as functions of the control parameters: gas pressure, gas flow rate, input power, and reactor geometry. Neutralization and fragmentation into atomic species are assumed for all ions hitting the wall. For neutrals, a wall recombination coefficient for oxygen atoms and a wall sticking coefficient for boron trifluoride (BF3) and its dissociation products are the only adjustable parameters used to model the surface chemistry. For the aluminum walls of the ion source used in the Eaton ULE2 ion implanter, complete wall recombination of O atoms is found to give the best match to the experimental data for oxygen, whereas a sticking coefficient of 0.62 for all neutral species in a BF3 discharge was found to best match the experimental data.
Solving hyperbolic equations with finite volume methods
Vázquez-Cendón, M Elena
2015-01-01
Finite volume methods are used in numerous applications and by a broad multidisciplinary scientific community. The book communicates this important tool to students, researchers in training and academics involved in the training of students in different science and technology fields. The selection of content is based on the author’s experience giving PhD and master courses in different universities. In the book the introduction of new concepts and numerical methods go together with simple exercises, examples and applications that contribute to reinforce them. In addition, some of them involve the execution of MATLAB codes. The author promotes an understanding of common terminology with a balance between mathematical rigor and physical intuition that characterizes the origin of the methods. This book aims to be a first contact with finite volume methods. Once readers have studied it, they will be able to follow more specific bibliographical references and use commercial programs or open source software withi...
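A first contact with finite volume methods of the kind the book offers usually starts from the first-order upwind scheme for the linear advection equation u_t + a*u_x = 0. The sketch below is a generic textbook example on a periodic domain, not one of the book's own MATLAB codes:

```python
def upwind_advection(u, a, dx, dt, n_steps):
    # First-order upwind finite volume update for u_t + a*u_x = 0 with a > 0:
    #   u_i^{n+1} = u_i^n - (a*dt/dx) * (u_i^n - u_{i-1}^n)
    # Periodic boundary via Python's u[-1] wrap-around indexing.
    c = a * dt / dx  # CFL number; the scheme is stable for 0 <= c <= 1
    for _ in range(n_steps):
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u
```

With c = 1 the update reduces to an exact shift of the cell averages by one cell per step, a standard sanity check when first implementing such schemes.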
An Improved Velocity Volume Processing Method
Institute of Scientific and Technical Information of China (English)
LI Nan; WEI Ming; TANG Xiaowen; PAN Yujie
2007-01-01
Velocity volume processing (VVP) retrieval from a single Doppler radar is an effective method for obtaining many wind parameters. However, because the coefficient matrix of the retrieval equations is often ill-conditioned and not easily solved, the VVP method has not been applied adequately and effectively in operation. In this paper, an improved scheme, SVVP (step velocity volume processing), based on the original method, is proposed. The improved algorithm retrieves each group of wind field components through a stepwise procedure, which overcomes the ill-conditioned matrix problem that currently limits the application of the VVP method. Variables in a six-parameter model can be retrieved even if the analysis volume is very small. In addition, the source and order of the errors in the traditional method are analyzed. The improved method is applied to real cases, which show that it is robust and can recover the wind field structure of local convective systems. It is very helpful for studying severe storms.
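The lowest-order ingredient of a VVP-type retrieval is a linear least-squares fit of radial velocities to wind components. The sketch below keeps only a two-parameter model Vr = u*sin(az) + v*cos(az), dropping the elevation and gradient terms, so it is far simpler than the six-parameter SVVP model described above; it does, however, expose the determinant whose smallness signals the ill-conditioning the paper addresses.

```python
import math

def retrieve_uv(azimuths_rad, radial_velocities):
    # Solve the 2x2 normal equations for the reduced VVP model
    #   Vr = u*sin(az) + v*cos(az)
    # A determinant near zero indicates an ill-conditioned fit (e.g. too
    # narrow an azimuth sector), the failure mode SVVP is designed to avoid.
    sss = sum(math.sin(a) ** 2 for a in azimuths_rad)
    scc = sum(math.cos(a) ** 2 for a in azimuths_rad)
    ssc = sum(math.sin(a) * math.cos(a) for a in azimuths_rad)
    bs = sum(v * math.sin(a) for a, v in zip(azimuths_rad, radial_velocities))
    bc = sum(v * math.cos(a) for a, v in zip(azimuths_rad, radial_velocities))
    det = sss * scc - ssc * ssc
    u = (bs * scc - bc * ssc) / det
    v = (bc * sss - bs * ssc) / det
    return u, v
```

With azimuths spread over the full circle the fit recovers a uniform wind exactly; shrinking the azimuth sector drives `det` toward zero and the solution becomes unstable, mirroring the problem the stepwise SVVP procedure mitigates.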
Some applications of stochastic averaging method for quasi Hamiltonian systems in physics
Institute of Scientific and Technical Information of China (English)
DENG MaoLin; ZHU WeiQiu
2009-01-01
Many physical systems can be modeled as quasi-Hamiltonian systems, and the stochastic averaging method for quasi-Hamiltonian systems can be applied to yield reasonable approximate response statistics. In the present paper, the basic idea and procedure of the stochastic averaging method for quasi-Hamiltonian systems are briefly introduced. The applications of the stochastic averaging method in studying the dynamics of active Brownian particles, the reaction rate theory, the dynamics of breathing and denaturation of DNA, and the Fermi resonance and its effect on the mean transition time are reviewed.
Spectral (Finite) Volume Method for One Dimensional Euler Equations
Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)
2002-01-01
Consider a mesh of unstructured triangular cells. Each cell is called a Spectral Volume (SV), denoted by S_i, which is further partitioned into subcells named Control Volumes (CVs), denoted by C_{i,j}. To represent the solution as a polynomial of degree m in two dimensions (2D) we need N = (m+1)(m+2)/2 pieces of independent information, or degrees of freedom (DOFs). The DOFs in an SV method are the volume-averaged mean variables at the N CVs. For example, to build a quadratic reconstruction in 2D, we need at least (2+1)(2+2)/2 = 6 DOFs. There are numerous ways of partitioning an SV, and not every partition is admissible in the sense that the partition may not be capable of producing a degree m polynomial. Once the N mean solutions in the CVs of an SV are given, a unique polynomial reconstruction can be obtained.
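The DOF count above follows directly from the formula N = (m+1)(m+2)/2; a one-line helper (hypothetical name) makes the arithmetic explicit:

```python
def sv_dofs_2d(m):
    # Number of degrees of freedom for a degree-m polynomial in 2D:
    #   N = (m+1)(m+2)/2
    # i.e. the number of monomials x^p * y^q with p + q <= m.
    return (m + 1) * (m + 2) // 2
```

So a linear reconstruction (m=1) needs 3 control volumes per spectral volume, and a quadratic one (m=2) needs 6, matching the example in the abstract.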
The averaging of non-local Hamiltonian structures in Whitham's method
Maltsev, A Y
1999-01-01
We consider the m-phase Whitham averaging method and propose a procedure for "averaging" non-local Hamiltonian structures. The procedure is based on the existence of a sufficient number of local commuting integrals of the system and yields a Poisson bracket of Ferapontov type for the Whitham system. The method can be considered a generalization of the Dubrovin-Novikov procedure for local field-theoretical brackets.
Energy Technology Data Exchange (ETDEWEB)
Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K [Medical University of South Carolina, Charleston, SC (United States)
2015-06-15
Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA’s Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian’s Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material’s atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly-spaced gantry angles, as well as selected patient plans were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly
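The in-house volume-averaging step described above, assigning the mean TPS dose over each chamber-sized footprint to one element of a coarse 2-D matrix, can be sketched as follows. The original is a MATLAB algorithm; this Python stand-in assumes a square footprint measured in whole dose-grid pixels, which is a simplification of the real chamber geometry.

```python
def volume_average_dose(dose_grid, chamber_px):
    # dose_grid: 2-D list of TPS dose values on a fine grid.
    # chamber_px: side length of one chamber footprint, in grid pixels.
    # Returns a coarse matrix where each element is the mean dose over one
    # non-overlapping chamber-sized block, for comparison with measurement.
    rows = len(dose_grid) // chamber_px
    cols = len(dose_grid[0]) // chamber_px
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            acc = 0.0
            for i in range(chamber_px):
                for j in range(chamber_px):
                    acc += dose_grid[r * chamber_px + i][c * chamber_px + j]
            row.append(acc / (chamber_px * chamber_px))
        out.append(row)
    return out
```

Comparing this block-averaged matrix, rather than point doses, against the array measurement is what removes the volume-averaging mismatch from the gamma analysis.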
Nitzsche, Björn; Frey, Stephen; Collins, Louis D; Seeger, Johannes; Lobsien, Donald; Dreyer, Antje; Kirsten, Holger; Stoffel, Michael H; Fonov, Vladimir S; Boltze, Johannes
2015-01-01
Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species being relevant in experimental neuroscience including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and BW explained about 15% of the variance of GM while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected, indeed ewes showed significantly more GM per bodyweight as compared to neutered rams. The created framework including spatial brain template and TPM represent a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.
Topology optimization using the finite volume method
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Bendsøe, Martin P.; Sigmund, Ole
Computational procedures for topology optimization of continuum problems using a material distribution method are typically based on the application of the finite element method (FEM) (see, e.g. [1]). In the present work we study a computational framework based on the finite volume method (FVM; see, e.g. [2]) in order to develop methods for topology design for applications where conservation laws are critical, such that element-wise conservation in the discretized models has a high priority. This encompasses problems involving, for example, mass and heat transport. The work described in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design derivative of the system matrix K and in how one computes the discretized version of certain objective functions.
Directory of Open Access Journals (Sweden)
Suxiang He
2014-01-01
An implementable nonlinear Lagrange algorithm for stochastic minimax problems, based on the sample average approximation method, is presented in this paper. In its second step the algorithm minimizes a nonlinear Lagrange function built from sample average approximations of the original functions, and the sample average approximation of the Lagrange multiplier is adopted. Under a set of mild assumptions, it is proven that the sequences of solutions and multipliers obtained by the proposed algorithm converge to the Kuhn-Tucker pair of the original problem with probability one as the sample size increases. Finally, numerical experiments on five test examples are performed, and the results indicate that the algorithm is promising.
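The core idea of sample average approximation can be sketched in a few lines: the expectation in the stochastic objective is replaced by a mean over drawn samples, and the resulting deterministic surrogate is minimized. The quadratic objective and step size below are illustrative assumptions, not the paper's nonlinear Lagrange algorithm:

```python
import numpy as np

def saa_objective(x, samples):
    """Sample average approximation of E[(x - xi)^2]: the expectation
    is replaced by the mean over the drawn samples xi."""
    return np.mean((x - samples) ** 2)

rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=1.0, size=10_000)

# Minimize the SAA objective by plain gradient descent; the true
# minimizer of E[(x - xi)^2] is E[xi] = 2.0.
x = 0.0
for _ in range(200):
    grad = 2 * np.mean(x - samples)   # gradient of the sample average
    x -= 0.1 * grad
```

As the sample size grows, the SAA minimizer converges to the true minimizer with probability one, which is the convergence mode the abstract refers to.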
Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio
Li, Shenghong; Bi, Guoan
2014-12-01
Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method used in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time averaging operation to improve the SNR. Correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples and the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.
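The two stages described above, time-domain averaging to raise the SNR followed by a correlation test, can be sketched as follows. The signal shape, frame length, and detection threshold are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K = 128, 200                          # frame length, number of received frames
t = np.arange(L)
signal = np.sin(2 * np.pi * 4 * t / L)   # deterministic primary-user signal

# Received frames at very low SNR
frames = signal + rng.normal(scale=3.0, size=(K, L))

# Time-domain averaging: for a deterministic signal, averaging K frames
# reduces the noise power by a factor of about K.
averaged = frames.mean(axis=0)

def corr_stat(x, lag):
    """Normalized circular correlation of x with itself shifted by `lag`;
    large for a periodic signal, near zero for white noise."""
    x = x - x.mean()
    return np.dot(x, np.roll(x, lag)) / np.dot(x, x)

present = corr_stat(averaged, L // 4) > 0.5          # lag = one signal period
absent = corr_stat(rng.normal(size=L), L // 4) > 0.5 # noise-only frame
```

Note that the statistic needs no prior knowledge of the noise power, mirroring the blind character of the TDA-C-SS method.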
Directory of Open Access Journals (Sweden)
P. Shanmugasundaram
2014-01-01
In this paper a revised Intuitionistic Fuzzy Max-Min Average Composition Method is proposed to construct a decision method for the selection of professional students, based on their skills as rated by recruiters, using the operations of Intuitionistic Fuzzy Soft Matrices. In Shanmugasundaram et al. (2014), the Intuitionistic Fuzzy Max-Min Average Composition Method was introduced and applied to a medical diagnosis problem. Sanchez's approach to decision making (Sanchez, 1979) is studied and the concept is modified for application to Intuitionistic fuzzy soft set theory. Through a survey, the opportunities and selection of students are discussed with the help of Intuitionistic fuzzy soft matrix operations together with the Intuitionistic fuzzy max-min average composition method.
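The building blocks of such methods, max-min and max-average composition of fuzzy matrices, can be sketched as below. The student/skill grades are made up, and only the membership part of an intuitionistic fuzzy computation is shown (the non-membership matrices would be composed dually):

```python
import numpy as np

def max_min(A, B):
    """Max-min composition: (A o B)[i, j] = max_k min(A[i, k], B[k, j])."""
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def max_avg(A, B):
    """Max-average composition: max_k of the average (A[i, k] + B[k, j]) / 2."""
    return np.max((A[:, :, None] + B[None, :, :]) / 2.0, axis=1)

# Hypothetical membership grades: students vs. skills, skills vs. job profiles
students_skills = np.array([[0.8, 0.3],
                            [0.5, 0.9]])
skills_jobs = np.array([[0.6, 0.2],
                        [0.4, 0.7]])

suitability = max_min(students_skills, skills_jobs)
preference = max_avg(students_skills, skills_jobs)
```

Each entry of `suitability` grades how well a student matches a job profile through the intermediate skill dimension.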
Topology optimization using the finite volume method
DEFF Research Database (Denmark)
Computational procedures for topology optimization of continuum problems using a material distribution method are typically based on the application of the finite element method (FEM) (see, e.g. [1]). In the present work we study a computational framework based on the finite volume method (FVM; see, e.g. [2]). The work described in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design derivative of the system matrix K and in how one computes the discretized version of certain objective functions. Thus for a cost function for minimum dissipated energy (like minimum compliance for an elastic structure) one obtains an expression $c = u^T \tilde{K} u$, where $\tilde{K}$ is different from K...
Fatigue strength of Al7075 notched plates based on the local SED averaged over a control volume
Berto, Filippo; Lazzarin, Paolo
2014-01-01
When pointed V-notches weaken structural components, local stresses are singular and their intensities are expressed in terms of the notch stress intensity factors (NSIFs). These parameters have been widely used for fatigue assessment of welded structures under high cycle fatigue and of sharp notches in plates made of brittle materials subjected to static loading. Fine meshes are required to capture the asymptotic stress distributions ahead of the notch tip and to evaluate the relevant NSIFs. On the other hand, when the aim is to determine the local Strain Energy Density (SED) averaged in a control volume embracing the point of stress singularity, refined meshes are not at all necessary. The SED can be evaluated from nodal displacements, and regular coarse meshes provide accurate values for the averaged local SED. In the present contribution, the link between the SED and the NSIFs is discussed by considering some typical welded joints and sharp V-notches. The procedure based on the SED has also proven useful for determining theoretical stress concentration factors of blunt notches and holes. In the second part of this work, an application of the strain energy density to the fatigue assessment of Al7075 notched plates is presented. The experimental data are taken from the recent literature and refer to notched specimens subjected to different shot peening treatments aimed at increasing the notch fatigue strength with respect to the parent material.
On simulating flow with multiple time scales using a method of averages
Energy Technology Data Exchange (ETDEWEB)
Margolin, L.G. [Los Alamos National Lab., NM (United States)
1997-12-31
The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
Performance of Velicer's Minimum Average Partial Factor Retention Method with Categorical Variables
Garrido, Luis E.; Abad, Francisco J.; Ponsoda, Vicente
2011-01-01
Despite strong evidence supporting the use of Velicer's minimum average partial (MAP) method to establish the dimensionality of continuous variables, little is known about its performance with categorical data. Seeking to fill this void, the current study takes an in-depth look at the performance of the MAP procedure in the presence of…
On Strong Convergence of Halpern’s Method Using Averaged Type Mappings
Directory of Open Access Journals (Sweden)
F. Cianciaruso
2014-01-01
Under suitable hypotheses on the control coefficients, we study Halpern's method to strongly approximate common fixed points of a nonexpansive mapping and a nonspreading mapping, or a fixed point of one of them. A crucial tool in our results is regularization with averaged type mappings.
Application of the Value Averaging Investment Method on the US Stock Market
Directory of Open Access Journals (Sweden)
Martin Širůček
2015-01-01
The paper focuses on empirical testing and use of regular investment, particularly the value averaging investment method, on real data from the US stock market in the years 1990–2013. The 23-year period was chosen because of a consistently interesting situation in the market, so this regular investment method could be tested in both bull (expansion) and bear (recession) periods. The analysis focuses on the results obtained by using this investment method from the viewpoint of return and risk on selected investment horizons (short-term 1 year, medium-term 5 years and long-term 10 years). The selected aim is reached by using the ratio between profit and risk. The revenue-risk profile is the ratio of the average annual profit rate, measured for each investment by the internal rate of return, to the average annual risk expressed by the selective standard deviation. The obtained results show that regular investment is suitable for a long investment horizon; the longer the investment horizon, the better the revenue-risk ratio (Sharpe ratio). According to the results obtained, specific investment recommendations are presented in the conclusion, e.g. whether this investment method is suitable for a long investment period, and whether it is better to use value averaging in a growing, sinking or sluggish market.
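The mechanics of value averaging can be sketched directly: the target portfolio value grows by a fixed step each period, and each period the investor buys (or sells) whatever brings the holding back onto that target path. The monthly prices below are hypothetical:

```python
def value_averaging(prices, target_step=100.0):
    """Value averaging sketch: after period t the portfolio should be worth
    (t + 1) * target_step; each period we trade enough shares to return
    the holding to the target path."""
    shares = 0.0
    invested = 0.0            # net cash contributed over the whole horizon
    for t, p in enumerate(prices):
        target = (t + 1) * target_step
        cash_flow = target - shares * p   # buy if positive, sell if negative
        shares += cash_flow / p
        invested += cash_flow
    return shares, invested

# Hypothetical monthly closing prices
shares, invested = value_averaging([10.0, 8.0, 12.5], target_step=100.0)
```

By construction the final portfolio value lands exactly on the target path (300 here), while the net cash invested depends on the price path; more is bought when prices are low, which is the source of the method's favorable revenue-risk profile.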
Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor
2016-10-01
Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
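The Granger-Ramanathan idea can be sketched compactly: combination weights are the ordinary least-squares fit of member simulations to observations, so on the training period the weighted ensemble can never do worse than the simple arithmetic mean (whose equal weights are one feasible choice). The synthetic "members" below are illustrative, not hydrological model output:

```python
import numpy as np

def gra_weights(simulations, observed):
    """Granger-Ramanathan averaging (sketch): solve simulations @ w ~= observed
    by ordinary least squares, so the weighted ensemble minimizes squared
    error on the training period."""
    w, *_ = np.linalg.lstsq(simulations, observed, rcond=None)
    return w

rng = np.random.default_rng(2)
obs = rng.normal(size=200)
# Three hypothetical ensemble members: biased/noisy copies of the observations
sims = np.column_stack([
    0.8 * obs + rng.normal(scale=0.3, size=200),
    1.2 * obs + rng.normal(scale=0.5, size=200),
    rng.normal(size=200),                  # an uninformative member
])

w = gra_weights(sims, obs)
gra_pred = sims @ w
sam_pred = sims.mean(axis=1)               # simple arithmetic mean (SAM)
```

On independent verification data the ranking is not guaranteed, which is why the paper evaluates transferability with Differential Split Sample Testing.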
Inverse methods for estimating primary input signals from time-averaged isotope profiles
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
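A toy version of the linear system Am = d and its minimum-length solution can be written down directly. The averaging matrix below is an illustrative stand-in (each measurement averaging two adjacent input values), not the amelogenesis model from the paper:

```python
import numpy as np

# A maps a finer input signal m (12 samples) onto the measured profile d
# (6 samples); each row averages two adjacent input values, a toy stand-in
# for the temporal/spatial averaging of enamel formation plus sampling.
A = np.kron(np.eye(6), [0.5, 0.5])

# A known synthetic input and its time-averaged "measurement"
m_true = np.sin(np.linspace(0, np.pi, 12))
d = A @ m_true

# Minimum-length (minimum-norm) solution of A m = d via the pseudoinverse
m_hat = np.linalg.pinv(A) @ d
```

Because the system is underdetermined, many inputs reproduce d exactly; the pseudoinverse selects the one of smallest norm, which is the regularization choice the abstract describes.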
Evolution of statistical averages: An interdisciplinary proposal using the Chapman-Enskog method
Mariscal-Sanchez, A.; Sandoval-Villalbazo, A.
2017-08-01
This work examines the idea of applying the Chapman-Enskog (CE) method for approximating the solution of the Boltzmann equation beyond the realm of physics, using an information theory approach. Equations describing the evolution of averages and their fluctuations in a generalized phase space are established up to first-order in the Knudsen parameter which is defined as the ratio of the time between interactions (mean free time) and a characteristic macroscopic time. Although the general equations here obtained may be applied in a wide range of disciplines, in this paper, only a particular case related to the evolution of averages in speculative markets is examined.
Directory of Open Access Journals (Sweden)
Vladimir V. Lyubimov
2007-01-01
The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method, recently presented by us, is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory. The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that the reconstructed images are blurred due to averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for the least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations among the parameters of the moving average method in order to enhance the event detectability of a phase sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, the control parameters of the moving average method should be optimized to detect these events efficiently. A phase sensitive OTDR was implemented with a pulsed light source composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light receiving part comprising a photo-detector and a high speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M; number of averaged traces, N; and step size of moving, n. The raw traces are obtained by the phase sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. As a result, if the event signal has one frequency, optimal values of N and n exist for detecting the event efficiently.
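The three control parameters can be made concrete with a short sketch: from M raw traces, windows of N consecutive traces are averaged, and the window slides by n traces per step. The trace data here are synthetic, not OTDR measurements:

```python
import numpy as np

def moving_average_traces(raw, N, n):
    """Moving average over traces: average N consecutive traces out of the
    M raw traces, sliding the window by n traces each step.  Returns an
    array of averaged traces (one row per window position)."""
    M = raw.shape[0]
    starts = range(0, M - N + 1, n)
    return np.array([raw[s:s + N].mean(axis=0) for s in starts])

rng = np.random.default_rng(3)
M, L = 100, 64                           # M raw traces, L samples per trace
raw = 0.2 * np.ones((M, L)) + rng.normal(scale=1.0, size=(M, L))

avg = moving_average_traces(raw, N=20, n=10)
```

Larger N suppresses more noise but smears fast events across windows, while smaller n gives finer temporal sampling of the averaged output; this trade-off is what the paper optimizes against the event's vibration frequency.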
An adaptive method of averaging the space-vectors location in DSP controlled drives
Energy Technology Data Exchange (ETDEWEB)
Debowski, A.; Chudzik, P. [Technical University of Lodz, Institute of Automatic Control, Lodz (Poland)
2000-08-01
In the paper a practical method of averaging the space-vector location for electrical drives controlled with digital signal processors (DSP) is demonstrated. This method makes it possible to approximate the step movement of a given real space-vector with a smooth rotation of a conventional one in given time subintervals at any field rotation speed. The method is suitable for many practical applications in vector controlled electrical drives. In the paper some experimental examples of estimating the space-vectors of stator current and rotor flux in an inverter-fed induction motor drive are shown. (orig.)
Position error correction in absolute surface measurement based on a multi-angle averaging method
Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin
2017-04-01
We present a method for position error correction in absolute surface measurement based on a multi-angle averaging method. Differences between shear rotation measurements at overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solution of the estimation algorithm are discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions are minimized to determine the true values of the unknowns, namely the Zernike polynomial coefficients and the rotation angle. Experimental results show the validity of the proposed method.
Direct volume rendering methods for cell structures.
Martišek, Dalibor; Martišek, Karel
2012-01-01
The study of the complicated architecture of cell space structures is an important problem in biology and medical research. Optical cuts of cells produced by confocal microscopes enable two-dimensional (2D) and three-dimensional (3D) reconstructions of the observed cells. This paper discusses new possibilities for direct volume rendering of these data. We often encounter 16-bit or deeper images in confocal microscopy of cells. Most of the information contained in these images is insubstantial for human vision. Therefore, it is necessary to use mathematical algorithms for the visualization of such images. Existing software tools such as OpenGL or DirectX run quickly on graphics workstations with special graphics cards, but run very unsatisfactorily on PCs without these cards, and their outputs are usually poor for real data. These tools are black boxes for a common user and make it impossible to correct and improve them. With the proposed method, more parameters of the environment can be set, making it possible to apply 3D filters and to set the output image sharpness in relation to the noise. The quality of the output is incomparable to the earlier described methods and is worth the increased computing time. We would like to offer mathematical methods of 3D scalar data visualization, describing new algorithms that run very well on standard PCs.
Institute of Scientific and Technical Information of China (English)
ZHANG Jianyu; SHAN Meijuan; ZHAO Libin; FEI Binjun
2015-01-01
An average failure index method based on accurate FEA was proposed for the tensile strength prediction of composite out-of-plane adhesive-bonded π joints. Based on the simple and independent maximum stress failure criterion, the failure index was introduced to characterize the degree to which stress components approach their corresponding material strengths. With a brief load transfer analysis, the weak fillers were identified and a further detailed discussion was performed. The maximum value among the average failure indices related to the different stress components was selected to represent the failure strength of the critical surface, which is either the two curved upside surfaces or the bottom plane of the fillers for composite π joints. The tensile strength of three kinds of π joints with different material systems, configurations and lay-ups was predicted by the proposed method, and corresponding experiments were conducted. Good agreement between the numerical and experimental results gives evidence of the effectiveness of the proposed method. In contrast to existing time-consuming strength prediction methods, the proposed method provides the capability of quickly assessing the failure of complex out-of-plane joints and is easy and convenient to utilize widely in engineering.
Weighted Average Finite Difference Methods for Fractional Reaction-Subdiffusion Equation
Directory of Open Access Journals (Sweden)
Nasser Hassen SWEILAM
2014-04-01
In this article, a numerical study of fractional reaction-subdiffusion equations is introduced using a class of finite difference methods. These methods are extensions of the weighted average methods for ordinary (non-fractional) reaction-subdiffusion equations. A stability analysis of the proposed methods is given by a recently proposed procedure similar to the standard John von Neumann stability analysis. Simple and accurate stability criteria, valid for different discretization schemes of the fractional derivative, arbitrary weight factor, and arbitrary order of the fractional derivative, are given and checked numerically. Numerical test examples, figures, and comparisons are presented for clarity. doi:10.14456/WJST.2014.50
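For the ordinary (non-fractional) limiting case, a weighted average (theta) finite-difference scheme can be sketched as follows; theta = 0.5 recovers Crank-Nicolson. This is a generic sketch of the scheme family on the heat equation, not the fractional discretization of the paper:

```python
import numpy as np

def theta_step(u, r, theta):
    """One weighted-average (theta) step for u_t = D u_xx with r = D*dt/dx^2:
    solve (I - theta*r*L) u_new = (I + (1 - theta)*r*L) u, where L is the
    standard second-difference matrix with Dirichlet ends."""
    n = len(u)
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A = np.eye(n) - theta * r * L
    B = np.eye(n) + (1 - theta) * r * L
    return np.linalg.solve(A, B @ u)

x = np.linspace(0, 1, 21)[1:-1]            # interior grid points, dx = 0.05
u = np.sin(np.pi * x)                      # decays like exp(-pi^2 * D * t)
for _ in range(50):
    u = theta_step(u, r=0.4, theta=0.5)    # theta = 0.5: Crank-Nicolson
```

theta = 0 gives the explicit scheme (conditionally stable), theta = 1 the implicit one; the paper's contribution is the analogous stability criterion when the time derivative is fractional.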
A primal sub-gradient method for structured classification with the averaged sum loss
Directory of Open Access Journals (Sweden)
Mančev Dejan
2014-12-01
We present a primal sub-gradient method for structured SVM optimization defined with the averaged sum of hinge losses inside each example. Compared with the mini-batch version of the Pegasos algorithm for the structured case, which deals with a single structure from each of multiple examples, our algorithm considers multiple structures from a single example in one update. This approach should increase the amount of information learned from each example. We show that the proposed version with the averaged sum loss has at least the same guarantees in terms of prediction loss as the stochastic version. Experiments are conducted on two sequence labeling problems, shallow parsing and part-of-speech tagging, and include a comparison with other popular sequential structured learning algorithms.
Averaging methods for extracting representative waveforms from motor unit action potential trains.
Malanda, Armando; Navallas, Javier; Rodriguez-Falces, Javier; Rodriguez-Carreño, Ignacio; Gila, Luis
2015-08-01
In the context of quantitative electromyography (EMG), it is of major interest to obtain a waveform that faithfully represents the set of potentials that constitute a motor unit action potential (MUAP) train. From this waveform, various parameters can be determined in order to characterize the MUAP for diagnostic analysis. The aim of this work was to conduct a thorough, in-depth review, evaluation and comparison of state-of-the-art methods for composing waveforms representative of MUAP trains. We evaluated nine averaging methods: Ensemble (EA), Median (MA), Weighted (WA), Five-closest (FCA), MultiMUP (MMA), Split-sweep median (SSMA), Sorted (SA), Trimmed (TA) and Robust (RA) in terms of three general-purpose signal processing figures of merit (SPMF) and seven clinically-used MUAP waveform parameters (MWP). The convergence rate of the methods was assessed as the number of potentials per MUAP train (NPM) required to reach a level of performance that was not significantly improved by increasing this number. Test material comprised 78 MUAP trains obtained from the tibialis anterioris of seven healthy subjects. Error measurements related to all SPMF and MWP parameters except MUAP amplitude descended asymptotically with increasing NPM for all methods. MUAP amplitude showed a consistent bias (around 4% for EA and SA and 1-2% for the rest). MA, TA and SSMA had the lowest SPMF and MWP error figures. Therefore, these methods most accurately preserve and represent MUAP physiological information of utility in clinical medical practice. The other methods, particularly WA, performed noticeably worse. Convergence rate was similar for all methods, with NPM values averaged among the nine methods, which ranged from 10 to 40, depending on the waveform parameter evaluated.
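Two of the simplest methods in the comparison, ensemble (point-wise mean) and median averaging, can be contrasted on a synthetic train with one contaminated sweep, where the median's robustness shows up directly. The Gaussian template and noise levels are illustrative assumptions:

```python
import numpy as np

def ensemble_average(potentials):
    """Ensemble averaging (EA): point-wise mean across the aligned train."""
    return potentials.mean(axis=0)

def median_average(potentials):
    """Median averaging (MA): point-wise median, robust to outlier sweeps."""
    return np.median(potentials, axis=0)

rng = np.random.default_rng(4)
template = np.exp(-0.5 * ((np.arange(100) - 50) / 5.0) ** 2)  # toy MUAP shape
train = template + rng.normal(scale=0.05, size=(30, 100))     # 30 aligned sweeps
train[0] += 2.0                          # one badly contaminated sweep

ea = ensemble_average(train)
ma = median_average(train)
```

The contaminated sweep shifts the mean of every sample by a constant offset, while the point-wise median is essentially unaffected, consistent with the paper's finding that MA-type methods give the lowest error figures.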
A method for the estimation of p-mode parameters from averaged solar oscillation power spectra
Reiter, J; Kosovichev, A G; Schou, J; Scherrer, P H; Larson, T P
2015-01-01
A new fitting methodology is presented which is equally well suited for the estimation of low-, medium-, and high-degree mode parameters from $m$-averaged solar oscillation power spectra of widely differing spectral resolution. This method, which we call the "Windowed, MuLTiple-Peak, averaged spectrum", or WMLTP Method, constructs a theoretical profile by convolving the weighted sum of the profiles of the modes appearing in the fitting box with the power spectrum of the window function of the observing run using weights from a leakage matrix that takes into account both observational and physical effects, such as the distortion of modes by solar latitudinal differential rotation. We demonstrate that the WMLTP Method makes substantial improvements in the inferences of the properties of the solar oscillations in comparison with a previous method that employed a single profile to represent each spectral peak. We also present an inversion for the internal solar structure which is based upon 6,366 modes that we ha...
Searching-and-averaging method of underdetermined blind speech signal separation in time domain
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Underdetermined blind signal separation (BSS) (with fewer observed mixtures than sources) is discussed. A novel searching-and-averaging method in the time domain (SAMTD) is proposed. It can solve a class of problems that are very hard to solve using sparse representation in the frequency domain. Bypassing the disadvantages of traditional clustering (e.g., K-means or potential-function clustering), the durative sparsity of a speech signal in the time domain is used. To recover the mixing matrix, our method deletes those samples which are not in the same or the inverse direction of the basis vectors. To recover the sources, an improved geometric approach to overcomplete ICA (Independent Component Analysis) is presented. Several speech signal experiments demonstrate the good performance of the proposed method.
Institute of Scientific and Technical Information of China (English)
XIA Rui; ZHANG Yuan; ZHANG Meng-heng; LIU Ke-xin; WU Jie-yun; ZHENG Zhi-rong; GONG Yao
2015-01-01
Increasing numbers of indoor air quality (IAQ) related complaints point to the fact that IAQ has become a significant occupational health and environmental issue. However, how to effectively evaluate IAQ across multiple indicators of different scales is still a challenge. The traditional single-indicator method is subject to uncertainties in assessing IAQ due to differing subjective judgments of good or bad quality and scalar differences among data sets. In this study, a multilevel integrated weighted average IAQ method, including an initial walk-through assessment (IWA) and a two-layer weighted average method, is developed and applied to evaluate the IAQ of the laboratory building at the University of Regina in Canada. Important chemical parameters related to IAQ, namely volatile organic compounds (VOCs), formaldehyde (HCHO), carbon dioxide (CO2), and carbon monoxide (CO), are evaluated based on 5 months of continuous monitoring data. The new integrated assessment result not only indicates the risk of an individual parameter but is also able to quantify the overall IAQ risk at the sampling site. Finally, some recommendations based on the results are proposed to promote sustainable IAQ practices in the sampling area.
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Directory of Open Access Journals (Sweden)
Deming Yuan
2014-01-01
This paper considers the problem of solving a saddle-point problem over a network consisting of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is available to only one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by stochastic noise. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at an appropriate rate and the noises are zero-mean with bounded variance.
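Plain dual averaging (exact projection, noise-free, single agent) can be sketched in a few lines for the unconstrained case: keep the running sum of all subgradients and map it back to an iterate through a slowly growing regularizer. The step rule below is one standard choice and not the approximate multiagent version analyzed in the paper:

```python
import numpy as np

def dual_averaging(subgrad, x0, steps=500, gamma=1.0):
    """Dual averaging sketch: accumulate the sum z of all subgradients seen
    so far and set x_{t+1} = x0 - z / (gamma * sqrt(t + 1)), the closed-form
    prox step for a squared-Euclidean regularizer in the unconstrained case."""
    x = np.asarray(x0, dtype=float)
    z = np.zeros_like(x)
    for t in range(steps):
        z += subgrad(x)
        x = x0 - z / (gamma * np.sqrt(t + 1))
    return x

# Minimize f(x) = |x - 3| using its subgradient sign(x - 3)
x_star = dual_averaging(lambda x: np.sign(x - 3.0), x0=np.zeros(1))
```

The paper's contribution is showing this scheme tolerates approximate projections and zero-mean stochastic subgradient noise without losing the standard convergence rate.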
Hozman, J.; Tichý, T.
2017-07-01
The paper is based on the results of our recent research on path-dependent multi-asset options. Here we focus on options whose payoff depends on the difference between the spread of two underlying assets at expiry and their average spread during the life of the option. The main idea uses dimensional reduction to obtain a PDE model with only two spatial variables describing this option pricing problem. A numerical option pricing scheme based on the discontinuous Galerkin method is then developed. Finally, a simple numerical result on real market data is presented.
Gruber, Matthew; Fochesatto, Gilberto J.
2013-07-01
Scintillometer measurements of the turbulence inner-scale length l_o and refractive index structure function C_n^2 allow for the retrieval of large-scale area-averaged turbulent fluxes in the atmospheric surface layer. This retrieval involves the solution of the non-linear set of equations defined by the Monin-Obukhov similarity hypothesis. A new method that uses an analytic solution to the set of equations is presented, which leads to a stable and efficient numerical method of computation that has the potential of eliminating computational error. Mathematical expressions are derived that map out the sensitivity of the turbulent flux measurements to uncertainties in source measurements such as l_o. These sensitivity functions differ from results in the previous literature; the reasons for the differences are explored.
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange among the nodes; for these, we apply an average consensus algorithm to perform the global computations efficiently. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
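The core iteration being decentralized is the classical power method. The sketch below runs it centrally on a small symmetric test matrix; in the paper's setting, the matrix-vector product and the normalization (the "global tasks") would instead be carried out via average consensus or CoMAC. The matrix is an arbitrary illustrative example.

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Estimate the dominant eigenvalue/eigenvector of a symmetric matrix A.
    The two commented lines are the 'global tasks' that the paper distributes
    across nodes; here they are computed centrally for illustration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        w = A @ v                      # global task 1: matrix-vector product
        v = w / np.linalg.norm(w)      # global task 2: normalization
    lam = v @ A @ v                    # Rayleigh quotient estimate
    return lam, v

A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
print(round(lam, 3))
```

For this matrix the dominant eigenvalue is (7 + sqrt(5)) / 2, which the iteration recovers quickly because the eigenvalue gap is large.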
Holmqvist, Fredrik; Platonov, Pyotr G; Havmöller, Rasmus; Carlson, Jonas
2007-10-09
The study was designed to investigate the effect of different measurement methodologies on the estimation of P wave duration. The recording length required to ensure reproducibility in unfiltered, signal-averaged P wave analysis was also investigated. An algorithm for automated classification was designed, and the reproducibility of manual P wave morphology classification was investigated. Twelve-lead ECG recordings (1 kHz sampling frequency, 0.625 microV resolution) from 131 healthy subjects were used. Orthogonal leads were derived using the inverse Dower transform. Magnification (100 times), baseline filtering (0.5 Hz high-pass and 50 Hz bandstop filters), signal averaging (10 seconds) and bandpass filtering (40-250 Hz) were used to investigate the effect of methodology on the estimated P wave duration. Unfiltered, signal-averaged P wave analysis was performed to determine the required recording length (from 6 minutes down to 10 s) and the reproducibility of the P wave morphology classification procedure. Manual classification was carried out by two experts on two separate occasions each. The performance of the automated classification algorithm was evaluated against the joint decision of the two experts (i.e., their consensus). The estimate of the P wave duration increased in each step as a result of magnification, baseline filtering and averaging (100 +/- 18 vs. 131 +/- 12 ms; P manual classification in 90% of the cases. The methodology used has profound effects on the estimation of P wave duration, and the method used must therefore be validated before any inferences can be made about P wave duration. This has implications for the interpretation of studies in which P wave duration is assessed and conclusions about normal values are drawn. P wave morphology and duration assessed using unfiltered, signal-averaged P wave analysis have high reproducibility, which is unaffected by the length of the recording. In the present study, the performance of
Improved method for measuring the ensemble average of strand breaks in genomic DNA.
Bespalov, V A; Conconi, A; Zhang, X; Fahy, D; Smerdon, M J
2001-01-01
The cis-syn cyclobutane pyrimidine dimer (CPD) is the major photoproduct induced in DNA by low wavelength ultraviolet radiation. An improved method was developed to detect CPD formation and removal in genomic DNA that avoids the problems encountered with the standard method of endonuclease detection of these photoproducts. Since CPD-specific endonucleases make single-strand cuts at CPD sites, quantification of the frequency of CPDs in DNA is usually done by denaturing gel electrophoresis. The standard method of ethidium bromide staining and gel photography requires more than 10 microg of DNA per gel lane, and correction of the photographic signal for the nonlinear film response. To simplify this procedure, a standard Southern blot protocol, coupled with phosphorimage analysis, was developed. This method uses random hybridization probes to detect genomic sequences with minimal sequence bias. Because of the vast linearity range of phosphorimage detection, scans of the signal profiles for the heterogeneous population of DNA fragments can be integrated directly to determine the number-average size of the population.
Design of a micro-irrigation system based on the control volume method
Directory of Open Access Journals (Sweden)
Chasseriaux G.
2006-01-01
A micro-irrigation system design based on the control volume method using the back-step procedure is presented in this study. The proposed numerical method is simple and consists of delimiting an elementary volume of the lateral equipped with an emitter, called the "control volume", to which the conservation equations of fluid dynamics are applied. The control volume method is an iterative method that calculates velocity and pressure step by step throughout the micro-irrigation network, based on an assumed pressure at the end of the line. A simple microcomputer program was used for the calculation, and convergence was very fast. Once the average water requirement of plants is estimated, it is easy to choose the sum of the average emitter discharges as the total average flow rate of the network. The design consists of finding an economical and efficient network that delivers the input flow rate uniformly to all emitters. This program permitted the design of a large complex network of thousands of emitters very quickly. Three subroutines calculate velocity and pressure in the lateral and submain pipes. The control volume method had already been tested for lateral design, with results validated by other methods such as the finite element method, so it permits determination of the optimal design for such a micro-irrigation network.
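The back-step procedure can be sketched as follows: assume a pressure head at the far (downstream) end of a lateral, then march upstream one control volume at a time, adding each emitter's discharge to the pipe flow and each segment's friction loss to the head. The emitter law q = k * h**x and all constants below are illustrative assumptions, not values from the paper.

```python
import math

def backstep_lateral(n_emitters=20, spacing=1.0, h_end=10.0,
                     k=0.8, x=0.5, d=0.016, f=0.02):
    """Back-step sketch for one drip lateral: return the required inlet
    head (m) and inlet flow (L/h). Emitter law and friction factor are
    assumed; the real method iterates until the inlet pressure matches."""
    g = 9.81
    area = math.pi * d ** 2 / 4.0
    h, q = h_end, 0.0                              # head (m), cumulative flow (m^3/s)
    for _ in range(n_emitters):
        q += k * h ** x / 3.6e6                    # emitter discharge, L/h -> m^3/s
        v = q / area                               # mean velocity in this segment
        h += f * (spacing / d) * v ** 2 / (2 * g)  # Darcy-Weisbach segment head loss
    return h, q * 3.6e6

h_in, q_in = backstep_lateral()
print(h_in, q_in)
```

Because the head grows slightly at each upstream step, emitters near the inlet discharge marginally more, which is exactly the non-uniformity the design procedure seeks to bound.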
Daud, Shahidah Md; Ramli, Razamin; Kasim, Maznah Mat; Kayat, Kalsom; Razak, Rafidah Abd
2015-12-01
The Malaysian homestay is unique; it is classified as community-based tourism (CBT). A Homestay Programme is a community event in which a tourist stays with a host family for a period of time, enjoying cultural exchange and new experiences. The Homestay Programme has boosted the tourism industry, with over 100 programmes currently registered with the Ministry of Culture and Tourism Malaysia. However, only a few enjoy the benefits of a successful Homestay Programme. Hence, this article seeks to identify the critical success factors for a Homestay Programme in Malaysia. An arithmetic average method is utilized to evaluate the identified success factors in a meaningful way. The findings will help the Homestay Programme function as a community development tool that manages tourism resources, thus helping the community improve the local economy and create job opportunities.
Plyasunov, S
2005-01-01
This paper is concerned with classes of models of stochastic reaction dynamics with time-scale separation. We demonstrate that the existence of the time-scale separation naturally leads to the application of the averaging principle and to the elimination of degrees of freedom via the renormalization of transition rates of slow reactions. The method suggested in this work is more general than other approaches presented previously: it is not limited to a particular type of stochastic process, can be applied to different types of processes describing the fast dynamics, and also provides a crossover to the case when the separation of time scales is not well pronounced. We derive a family of exact fluctuation-dissipation relations which establish the connection between effective rates and the statistics of the reaction events in fast reaction channels. An illustration of the technique is provided. Examples show that renormalized transition rates exhibit in general non-exponential relaxation behavior with a broad range of pos...
Directory of Open Access Journals (Sweden)
Konings Maurits K
2012-08-01
Background: In this paper a new non-invasive, operator-free, continuous ventricular stroke volume monitoring device (Hemodynamic Cardiac Profiler, HCP) is presented, which measures the average stroke volume (SV) for each period of 20 seconds, as well as ventricular volume-time curves for each cardiac cycle, using a new electric method (Ventricular Field Recognition) with six independent electrode pairs distributed over the frontal thoracic skin. In contrast to existing non-invasive electric methods, our method does not use the algorithms of impedance or bioreactance cardiography. Instead, it is based on specific 2D spatial patterns on the thoracic skin, representing the distribution over the thorax of changes in the applied current field caused by cardiac volume changes during the cardiac cycle. Since total heart volume variation during the cardiac cycle is a poor indicator of ventricular stroke volume, the HCP separates atrial filling effects from ventricular filling effects and retrieves the volume changes of the ventricles only. Methods: Ex-vivo experiments on a post-mortem human heart were performed to measure the effects of increasing the blood volume inside the ventricles in isolation, leaving the atrial volume invariant (which cannot be done in-vivo). These effects were measured as a specific 2D pattern of voltage changes on the thoracic skin. Furthermore, a working prototype of the HCP was developed that uses these ex-vivo results in an algorithm to decompose voltage changes, measured in-vivo by the HCP on the thoracic skin of a human volunteer, into an atrial component and a ventricular component, in almost real time (with a delay of at most 39 seconds). The HCP prototype was tested in-vivo on 7 human volunteers, using G-suit inflation and deflation to provoke stroke volume changes, and LVOT Doppler as a reference technique. Results: The ex-vivo measurements showed that ventricular filling
Directory of Open Access Journals (Sweden)
Ahmed K. Hassan
2008-01-01
One of the serious problems in any wireless communication system using a multicarrier modulation technique such as Orthogonal Frequency Division Multiplexing (OFDM) is its peak-to-average power ratio (PAPR). It limits the transmission power due to the limited dynamic range of the analog-to-digital and digital-to-analog converters (ADC/DAC) and of the power amplifiers at the transmitter, which in turn sets the limit on the maximum achievable rate. This issue is especially important for mobile terminals in sustaining longer battery lifetime. Reducing the PAPR is therefore an important step toward efficient and affordable mobile communication services. This paper presents an efficient PAPR reduction method for OFDM signals. The method is based on clipping and iterative processing: iterative processing limits the PAPR in the time domain, but the subtraction of the peaks that exceed the PAPR threshold from the original signal is done in the frequency domain, unlike in the usual clipping technique. The results show that this method is capable of reducing the PAPR significantly with minimal bit error rate (BER) degradation.
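The general clip-and-iterate idea can be sketched as below: measure PAPR, clip time-domain magnitude peaks to a threshold, and return to the frequency domain where the correction effectively subtracts the peak components from the subcarriers. This is only a loose sketch of the family of methods the abstract describes, not the authors' exact algorithm; a full implementation would also filter out-of-band components and restore the constellation between iterations, which is what makes iterating worthwhile.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_papr(symbols, threshold_db=4.0, iters=4):
    """Iterative magnitude clipping sketch. Real methods insert filtering /
    constellation restoration between iterations; that step is omitted here."""
    X = symbols.copy()
    for _ in range(iters):
        x = np.fft.ifft(X)
        a = np.sqrt(np.mean(np.abs(x) ** 2) * 10 ** (threshold_db / 10))
        mag = np.abs(x)
        scale = np.minimum(1.0, a / np.maximum(mag, 1e-12))  # cap |x| at a
        X = np.fft.fft(x * scale)        # peak subtraction seen in freq domain
    return np.fft.ifft(X)

rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
before = papr_db(np.fft.ifft(qpsk))
after = papr_db(clip_papr(qpsk))
print(round(before, 2), round(after, 2))
```

A random 256-subcarrier QPSK symbol typically starts near 10 dB PAPR; after clipping, the PAPR sits close to the chosen threshold.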
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hae Sun; Jeong, Hyo Joon; Kim, Eun Han; Han, Moon Hee; Hwang, Won Tae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-09-15
This study analyzes the differences in the annual averaged atmospheric dispersion factor and ground deposition factor produced using two classification methods of atmospheric stability, based on the vertical temperature difference and on the standard deviation of horizontal wind direction fluctuation. The Daedeok and Wolsong nuclear sites were chosen for the assessment, and meteorological data at 10 m were applied to the evaluation of atmospheric stability. The XOQDOQ software program was used to calculate atmospheric dispersion factors and ground deposition factors at distances of 400 m, 800 m, 1,200 m, 1,600 m, 2,400 m, and 3,200 m from the radioactive material release points. All of the atmospheric dispersion factors generated using the stability classification based on the vertical temperature difference were higher than those from the standard deviation of horizontal wind direction fluctuation. The ground deposition factors, on the other hand, were the same regardless of the classification method, as they are based on the graph of empirical data presented in the Nuclear Regulatory Commission's Regulatory Guide 1.111, which is unrelated to atmospheric stability for ground-level release. These results are based on meteorological data collected over the course of one year at the specified sites; nevertheless, the classification method based on the vertical temperature difference is expected to be more conservative.
MORTAR FINITE VOLUME METHOD WITH ADINI ELEMENT FOR BIHARMONIC PROBLEM
Institute of Scientific and Technical Information of China (English)
Chun-jia Bi; Li-kang Li
2004-01-01
In this paper, we construct and analyse a mortar finite volume method for the discretization of the biharmonic problem in R2. This method is based on the mortar-type Adini nonconforming finite element spaces. The optimal-order H2-seminorm error estimate between the exact solution and the mortar Adini finite volume solution of the biharmonic equation is established.
A Running Average Method for Predicting the Size and Length of a Solar Cycle
Institute of Scientific and Technical Information of China (English)
Zhan-Le Du; Hua-Ning Wang; Li-Yun Zhang
2008-01-01
The running correlation coefficient between solar cycle amplitudes and max-max cycle lengths at a given cycle lag is found to vary roughly in a cyclical wave with cycle number, based on the smoothed monthly mean Group sunspot numbers available since 1610. A running average method is proposed to predict the size and length of a solar cycle from the varying trend of the coefficients. It is found that when a certain condition is satisfied (namely, that the correlation becomes stronger), the mean prediction error (16.1) is much smaller than when the condition is not satisfied (38.7). This result can be explained by the fact that the prediction must fall on the regression line and increase the strength of the correlation. The method itself can also indicate whether a prediction is reasonable or not. To obtain a reasonable prediction, it is more important to search for a running correlation coefficient whose varying trend satisfies the proposed condition; the result does not depend so much on the size of the correlation coefficient. As an application, the peak sunspot number of cycle 24 is estimated as 140.4 ± 15.7, with the peak occurring in May 2012 ± 11 months.
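A running correlation coefficient of the kind described is simply a Pearson correlation evaluated over a sliding window of consecutive cycles. The sketch below uses synthetic data with a built-in amplitude-length anticorrelation; the window size and the data are assumptions for illustration, not the paper's choices or the real sunspot record.

```python
import numpy as np

def running_correlation(amplitudes, lengths, window=12):
    """Pearson correlation between cycle amplitudes and max-max cycle
    lengths over a sliding window of consecutive cycles (window size is
    an assumption; the paper's exact choice may differ)."""
    out = []
    for i in range(len(amplitudes) - window + 1):
        out.append(np.corrcoef(amplitudes[i:i + window],
                               lengths[i:i + window])[0, 1])
    return np.array(out)

# Synthetic demonstration data (not real Group sunspot numbers).
rng = np.random.default_rng(0)
amp = rng.uniform(60, 200, 30)
length = 140 - 0.3 * amp + rng.normal(0, 5, 30)   # built-in anticorrelation
r = running_correlation(amp, length)
print(len(r), round(float(r.mean()), 2))
```

The prediction step in the paper then tracks how this sequence of coefficients trends from one window to the next, rather than relying on any single coefficient's magnitude.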
Cellwise conservative unsplit advection for the volume of fluid method
DEFF Research Database (Denmark)
Comminal, Raphaël; Spangenberg, Jon; Hattel, Jesper Henri
2015-01-01
We present a cellwise conservative unsplit (CCU) advection scheme for the volume of fluid method (VOF) in 2D. Contrary to other schemes based on explicit calculations of the flux balances, the CCU advection adopts a cellwise approach where the pre-images of the control volumes are traced backward.
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
...the proposition of a weight for averaging CDMA codes. This weighting function is referred to in this discussion as the probability of the code matrix... Given a likelihood function of a multivariate Gaussian stochastic process (12), one can assume the values L and U and try to estimate the parameters... such that averages of the exponential functions can be formulated. Averaging over a weight that depends on the TSC behaves as a filtering process where
Energy Technology Data Exchange (ETDEWEB)
Alexoff, David L., E-mail: alexoff@bnl.gov; Dewey, Stephen L.; Vaska, Paul; Krishnamoorthy, Srilalan; Ferrieri, Richard; Schueller, Michael; Schlyer, David J.; Fowler, Joanna S.
2011-02-15
Introduction: PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape, while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Methods: Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured by dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Results: Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean ± S.D.) escaping the leaf parenchyma were measured to be 59 ± 1.1%, 64 ± 4.4% and 67 ± 1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas such as the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. Conclusions: The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
Device overlay method for high volume manufacturing
Lee, Honggoo; Han, Sangjun; Kim, Youngsik; Kim, Myoungsoo; Heo, Hoyoung; Jeon, Sanghuck; Choi, DongSub; Nabeth, Jeremy; Brinster, Irina; Pierson, Bill; Robinson, John C.
2016-03-01
Advancing technology nodes with smaller process margins require improved photolithography overlay control. Overlay control at develop inspection (DI) based on optical metrology targets is well established in semiconductor manufacturing. Advances in target design and metrology technology have enabled significant improvements in overlay precision and accuracy. One approach to represent in-die on-device as-etched overlay is to measure at final inspection (FI) with a scanning electron microscope (SEM). Disadvantages to this approach include inability to rework, limited layer coverage due to lack of transparency, and higher cost of ownership (CoO). A hybrid approach is investigated in this report whereby infrequent DI/FI bias is characterized and the results are used to compensate the frequent DI overlay results. The bias characterization is done on an infrequent basis, either based on time or triggered from change points. On a per-device and per-layer basis, the optical target overlay at DI is compared with SEM on-device overlay at FI. The bias characterization results are validated and tracked for use in compensating the DI APC controller. Results of the DI/FI bias characterization and sources of variation are presented, as well as the impact on the DI correctables feeding the APC system. Implementation details in a high volume manufacturing (HVM) wafer fab will be reviewed. Finally future directions of the investigation will be discussed.
SET OPERATOR-BASED METHOD OF DENOISING MEDICAL VOLUME DATA
Institute of Scientific and Technical Information of China (English)
程兵; 郑南宁; 袁泽剑
2002-01-01
Objective To investigate impulsive noise suppression of medical volume data. Methods The volume data is represented as level sets and a special set operator is defined and applied to filtering it. The small connected components, which are likely to be produced by impulsive noise, are eliminated after the filtering process. A fast algorithm that uses a heap data structure is also designed. Results Compared with traditional linear filters such as a Gaussian filter, this method preserves the fine structure features of the medical volume data while removing noise, and the fast algorithm developed by us reduces memory consumption and improves computing efficiency. The experimental results given illustrate the efficiency of the method and the fast algorithm. Conclusion The set operator-based method shows outstanding denoising properties in our experiment, especially for impulsive noise. The method has a wide variety of applications in the areas of volume visualization and high dimensional data processing.
[A hybrid volume rendering method using general hardware].
Li, Bin; Tian, Lianfang; Chen, Ping; Mao, Zongyuan
2008-06-01
In order to improve the effect and efficiency of the reconstructed image after hybrid volume rendering of different kinds of volume data from medical sequential slices or polygonal models, we propose a hybrid volume rendering method based on Shear-Warp with economical hardware. First, the hybrid volume data are pre-processed by Z-Buffer method and RLE (Run-Length Encoded) data structure. Then, during the process of compositing intermediate image, a resampling method based on the dual-interpolation and the intermediate slice interpolation methods is used to improve the efficiency and the effect. Finally, the reconstructed image is rendered by the texture-mapping technology of OpenGL. Experiments demonstrate the good performance of the proposed method.
Simulating hydroplaning of submarine landslides by quasi 3D depth averaged finite element method
De Blasio, Fabio; Battista Crosta, Giovanni
2014-05-01
Subaqueous debris flows/submarine landslides, both in the open ocean and in fresh waters, exhibit extremely high mobility, quantified by a ratio of vertical to horizontal displacement of the order of 0.01 or much less. It is possible to simulate subaqueous debris flows with small-scale experiments along a flume or a pool using a cohesive mixture of clay and sand. The results have shown a strong enhancement of runout and velocity compared to the case in which the same debris flow travels without water, and have indicated hydroplaning as a possible explanation (Mohrig et al. 1998). Hydroplaning starts when the snout of the debris flow travels sufficiently fast. This generates lift forces on the front of the debris flow exceeding the self-weight of the sediment, which thus begins to travel detached from the bed, literally hovering instead of flowing. Clearly, the resistance to flow plummets because the drag stress against water is much smaller than the shear strength of the material. The consequence is a dramatic increase of the debris flow speed and runout. Does the process also occur for subaqueous landslides and debris flows in the ocean, some twelve orders of magnitude larger than the experimental ones? Obviously, no experiment will ever be capable of replicating this size; one needs to rely on numerical simulations. Results extending a depth-integrated numerical model for debris flows (Imran et al., 2001) indicate that hydroplaning is possible (De Blasio et al., 2004), but more should be done, especially with alternative numerical methodologies. In this work, finite element methods are used to simulate hydroplaning using the code MADflow (Chen, 2014) adopting a depth-averaged solution. We ran some simulations on the small scale of the laboratory experiments, and secondly
Research on Canal System Operation Based on Controlled Volume Method
Directory of Open Access Journals (Sweden)
Zhiliang Ding
2009-10-01
An operating simulation model based on the storage volume control method for a multi-reach canal system in series was established. In view of the deficiency of the existing controlled-volume algorithm, an improved algorithm was proposed, namely the controlled-volume algorithm over whole canal pools; the simulation results indicate that the storage volume and water level of each canal pool can be accurately controlled after the improved algorithm is adopted. However, for some typical discharge-demand-change operating conditions of the canal, if the controlled-volume algorithm over whole canal pools is still adopted, it will cause some unnecessary regulation and consequently increase the number of disturbed canal reaches. Therefore, the idea of a controlled-volume operation method over continuous canal pools was proposed, and its algorithm was designed. Simulation of a practical project indicates that, for some typical discharge-demand-change operating conditions, the new controlled-volume algorithm can obviously reduce the number of regulated check gates and disturbed canal pools, thus improving the control efficiency of the canal system. The controlled-volume method of operation is especially suitable for large-scale water delivery canal systems with complex operation requirements.
Palatine tonsil volume estimation using different methods after tonsillectomy.
Sağıroğlu, Ayşe; Acer, Niyazi; Okuducu, Hacı; Ertekin, Tolga; Erkan, Mustafa; Durmaz, Esra; Aydın, Mesut; Yılmaz, Seher; Zararsız, Gökmen
2016-06-15
This study was carried out to measure the volume of the palatine tonsils of otorhinolaryngology outpatients with complaints of adenotonsillar hypertrophy and chronic tonsillitis who had undergone tonsillectomy. To date, no study in the literature has investigated palatine tonsil volume using different methods and compared the results with subjective tonsil size. For this purpose, we used three different methods to measure palatine tonsil volume, and the correlation of each parameter with tonsil size was assessed. After tonsillectomy, palatine tonsil volume was measured by the Archimedes, Cavalieri and Ellipsoid methods. Mean right and left palatine tonsil volumes were 2.63 ± 1.34 cm(3) and 2.72 ± 1.51 cm(3) by the Archimedes method, 3.51 ± 1.48 cm(3) and 3.37 ± 1.36 cm(3) by the Cavalieri method, and 2.22 ± 1.22 cm(3) and 2.29 ± 1.42 cm(3) by the Ellipsoid method, respectively. Excellent agreement was found among the three volumetric techniques according to Bland-Altman plots. In addition, tonsil grade correlated significantly with tonsil volume.
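Two of the three estimators have simple closed forms as they are typically defined (the study may differ in detail): the Ellipsoid method multiplies three orthogonal diameters by pi/6, and the Cavalieri method sums section areas times section spacing. The sample measurements below are invented for illustration.

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    """Ellipsoid approximation V = (pi/6) * l * w * h from three
    orthogonal diameters (the usual form of the Ellipsoid method)."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

def cavalieri_volume(slice_areas_cm2, thickness_cm):
    """Cavalieri estimate: sum of parallel section areas times spacing."""
    return sum(slice_areas_cm2) * thickness_cm

# Invented example: a 3.0 x 2.0 x 1.5 cm tonsil, and three 0.5 cm sections.
print(round(ellipsoid_volume(3.0, 2.0, 1.5), 2))
print(round(cavalieri_volume([1.2, 1.5, 1.1], 0.5), 2))
```

The Archimedes method, by contrast, is purely experimental (fluid displacement), which is one reason comparing the three estimators against each other is informative.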
FINITE VOLUME METHOD OF MODELLING TRANSIENT GROUNDWATER FLOW
Directory of Open Access Journals (Sweden)
N. Muyinda
2014-01-01
In the field of computational fluid dynamics, the finite volume method is dominant over other numerical techniques such as the finite difference and finite element methods because the underlying physical quantities are conserved at the discrete level. In the present study, the finite volume method is used to solve an isotropic transient groundwater flow model to obtain hydraulic heads and flow through an aquifer. The objective is to discuss the theory of the finite volume method and its applications in groundwater flow modelling. To achieve this, an orthogonal grid with quadrilateral control volumes has been used to simulate the model using mixed boundary conditions from Bwaise III, a Kampala suburb. Results show that flow occurs from regions of high hydraulic head to regions of low hydraulic head until a steady head value is achieved.
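The conservation property can be seen in a minimal sketch: a 1D transient groundwater model (the study uses a 2D quadrilateral grid) where the head update in each control volume is exactly the difference of the fluxes through its two faces, so whatever leaves one cell enters its neighbour. All parameter values are illustrative assumptions.

```python
import numpy as np

def fvm_groundwater_1d(n=50, L=100.0, T=5.0, S=0.01, h_left=10.0, h_right=5.0,
                       dt=0.002, steps=20000):
    """Explicit finite-volume solution of S dh/dt = T d2h/dx2 on a 1D aquifer
    with fixed-head (Dirichlet) boundaries, imposed via ghost values. Cell
    updates are flux differences, so mass is conserved by construction."""
    dx = L / n
    assert T * dt / (S * dx * dx) < 0.5, "explicit stability limit"
    h = np.full(n, 7.5)                              # initial head everywhere
    for _ in range(steps):
        # heads at ghost cells enforce the boundary heads at the outer faces
        hb = np.concatenate(([2 * h_left - h[0]], h, [2 * h_right - h[-1]]))
        flux = -T * np.diff(hb) / dx                 # Darcy flux at n+1 faces
        h = h - dt / (S * dx) * np.diff(flux)        # conservative cell update
    return h

h = fvm_groundwater_1d()
print(round(float(h[0]), 2), round(float(h[-1]), 2))
```

Run to steady state, the head profile becomes linear between the two boundary heads, reproducing the qualitative result in the abstract: flow from high to low hydraulic head until a steady head is achieved.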
Comparison of different preconditioners for nonsymmetric finite volume element methods
Energy Technology Data Exchange (ETDEWEB)
Mishev, I.D.
1996-12-31
We consider a few different preconditioners for the linear systems arising from the discretization of 3-D convection-diffusion problems with the finite volume element method. Their theoretical and computational convergence rates are compared and discussed.
Yao, Dezhong
2017-02-14
Currently, the average reference is one of the most widely adopted references in EEG and ERP studies. The theoretical assumption is that the surface potential integral of a volume conductor is zero, so that the average of scalp potential recordings might approximate the theoretically desired zero reference. However, this zero-integral assumption has been proved only for a spherical surface. In this short communication, three counter-examples are given to show that the potential integral over the surface of a dipole in a volume conductor may not be zero; it depends on the shape of the conductor and the orientation of the dipole. On the one hand, this fact means that the average reference is not a theoretical 'gold standard' reference; on the other hand, it reminds us that the practical accuracy of the average reference is determined not only by the well-known electrode array density and coverage but also, intrinsically, by the head shape. Reference selection thus remains a fundamental problem to be addressed in EEG and ERP studies.
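In practice, average referencing is a one-line operation: subtract the instantaneous mean across channels from every channel. The sketch below shows it on toy data; note that the re-referenced channels sum to zero across the recorded electrodes by construction, which is precisely the discrete surrogate for the surface-integral assumption the communication questions.

```python
import numpy as np

def average_reference(eeg):
    """Re-reference EEG data (channels x samples) to the average reference
    by subtracting the mean across channels at each sample."""
    return eeg - eeg.mean(axis=0, keepdims=True)

# Three channels, four samples of toy data:
eeg = np.array([[1.0, 2.0, 3.0, 4.0],
                [0.0, 1.0, 0.0, 1.0],
                [2.0, 0.0, 3.0, 1.0]])
rereferenced = average_reference(eeg)
print(rereferenced.sum(axis=0))   # zero in every column by construction
```

The zero column sums hold for any electrode set, but they only approximate the ideal zero reference to the extent that the sampled electrodes represent the whole head surface, which is where electrode density, coverage, and head shape enter.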
Directory of Open Access Journals (Sweden)
Okuda Miyuki
2012-09-01
Full Text Available Abstract Introduction We were able to treat a patient with acute exacerbation of chronic obstructive pulmonary disease who also suffered from sleep-disordered breathing by using the average volume-assured pressure support mode of a Respironics V60 Ventilator (Philips Respironics: United States. This allows a target tidal volume to be set based on automatic changes in inspiratory positive airway pressure. This removed the need to change the noninvasive positive pressure ventilation settings during the day and during sleep. The Respironics V60 Ventilator, in the average volume-assured pressure support mode, was attached to our patient and improved and stabilized his sleep-related hypoventilation by automatically adjusting force to within an acceptable range. Case presentation Our patient was a 74-year-old Japanese man who was hospitalized for treatment due to worsening of dyspnea and hypoxemia. He was diagnosed with acute exacerbation of chronic obstructive pulmonary disease and full-time biphasic positive airway pressure support ventilation was initiated. Our patient was temporarily provided with portable noninvasive positive pressure ventilation at night-time following an improvement in his condition, but his chronic obstructive pulmonary disease again worsened due to the recurrence of a respiratory infection. During the initial exacerbation, his tidal volume was significantly lower during sleep (378.9 ± 72.9mL than while awake (446.5 ± 63.3mL. A ventilator that allows ventilation to be maintained by automatically adjusting the inspiratory force to within an acceptable range was attached in average volume-assured pressure support mode, improving his sleep-related hypoventilation, which is often associated with the use of the Respironics V60 Ventilator. Polysomnography performed while our patient was on noninvasive positive pressure ventilation revealed obstructive sleep apnea syndrome (apnea-hypopnea index = 14, suggesting that his chronic
Limit cycles from a cubic reversible system via the third-order averaging method
Directory of Open Access Journals (Sweden)
Linping Peng
2015-04-01
Full Text Available This article concerns the bifurcation of limit cycles from a cubic integrable and non-Hamiltonian system. By using the averaging theory of the first and second orders, we show that under any small cubic homogeneous perturbation, at most two limit cycles bifurcate from the period annulus of the unperturbed system, and this upper bound is sharp. By using the averaging theory of the third order, we show that two is also the maximal number of limit cycles emerging from the period annulus of the unperturbed system.
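The first-order averaging setup behind such results can be stated compactly (a standard textbook formulation, not quoted from this article): for a system that is T-periodic in t,

```latex
\dot{x} = \varepsilon F(t,x) + \varepsilon^{2} R(t,x,\varepsilon),
\qquad
f(y) = \frac{1}{T}\int_{0}^{T} F(t,y)\,dt ,
```

each simple zero of the averaged function f corresponds, for sufficiently small ε > 0, to a limit cycle of the full system; second- and third-order averaging refine f when the lower-order averages vanish identically, which is how the third-order analysis above sharpens the bound on the number of bifurcating limit cycles.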
PARTIALLY AVERAGED NAVIER-STOKES METHOD FOR TIME DEPENDENT TURBULENT CAVITATING FLOWS
Institute of Scientific and Technical Information of China (English)
HUANG Biao; WANG Guo-yu
2011-01-01
Cavitation typically occurs when the fluid pressure is lower than the vapor pressure in a local thermodynamic state, and the flow is frequently unsteady and turbulent. The Reynolds-Averaged Navier-Stokes (RANS) approach has been popular for turbulent flow computations. The most widely used closures, such as the standard k-ε model, have well-recognized deficiencies when treating time dependent flow fields. To identify ways to improve the predictive capability of the current RANS-based engineering turbulence closures, conditional averaging is adopted for the Navier-Stokes equation, and one more parameter, based on the filter size, is introduced into the k-ε model. In the Partially Averaged Navier-Stokes (PANS) model, the filter width is mainly controlled by the ratio of unresolved-to-total kinetic energy f_k. This model is assessed in unsteady cavitating flows over a Clark-Y hydrofoil. From the experimental validations regarding the forces, frequencies, cavity visualizations and velocity distributions, the PANS model is shown to improve the predictive capability considerably in comparison to the standard k-ε model, and it is also observed that the value of f_k in the PANS model has a substantial influence on the predicted results. As the filter parameter f_k is decreased, the PANS model effectively reduces the eddy viscosity near the closure region, which significantly influences the capture of the detached cavity, and the model reproduces the time-averaged velocity around the hydrofoil quantitatively.
Institute of Scientific and Technical Information of China (English)
Ding Ruiqiang; Li Jianping
2011-01-01
In this paper, taking the Lorenz system as an example, we compare the influences of the arithmetic mean and the geometric mean on measuring the global and local average error growth. The results show that the geometric mean error (GME) has a smoother growth than the arithmetic mean error (AME) for the global average error growth, and the GME is directly related to the maximal Lyapunov exponent, but the AME is not, as already noted by Krishnamurthy in 1993. Beyond this, the GME is shown to be more appropriate than the AME in measuring the mean error growth in terms of the probability distribution of errors. The physical meanings of the saturation levels of the AME and the GME are also shown to be different. However, there is no obvious difference between the local average error growth with the arithmetic mean and the geometric mean, indicating that the choice of the AME or the GME has no influence on the measure of local average predictability.
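The distinction between the two means is easy to see on a toy ensemble (hypothetical error magnitudes, not Lorenz-system output): a single large member dominates the arithmetic mean, while the geometric mean stays near the typical magnitude, which is why the GME tracks the typical exponential growth rate and hence the maximal Lyapunov exponent.

```python
import numpy as np

# Sketch: arithmetic vs geometric mean of an ensemble of error magnitudes.
errors = np.array([1e-6, 1e-5, 1e-4, 1e-1])     # hypothetical |errors|
ame = errors.mean()                             # arithmetic mean error
gme = np.exp(np.log(errors).mean())             # geometric mean error
# The mean of log10(errors) is -4, so gme = 1e-4, while ame ~ 2.5e-2
# is dominated by the single largest member.
```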
Comparison of point forecast accuracy of model averaging methods in hydrologic applications
Diks, C.G.H.; Vrugt, J.A.
2010-01-01
Multi-model averaging is currently receiving a surge of attention in the atmospheric, hydrologic, and statistical literature to explicitly handle conceptual model uncertainty in the analysis of environmental systems and derive predictive distributions of model output. Such density forecasts are nece
Computational Methods in Stochastic Dynamics Volume 2
Stefanou, George; Papadopoulos, Vissarion
2013-01-01
The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology. This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...
Vassiliou, Vassilios S; Wassilew, Katharina; Cameron, Donnie; Heng, Ee Ling; Nyktari, Evangelia; Asimakopoulos, George; de Souza, Anthony; Giri, Shivraman; Pierce, Iain; Jabbour, Andrew; Firmin, David; Frenneaux, Michael; Gatehouse, Peter; Pennell, Dudley J; Prasad, Sanjay K
2017-06-12
Our objectives involved identifying whether repeated averaging at basal and mid left ventricular myocardial levels improves precision and correlation with collagen volume fraction for 11-heartbeat MOLLI T1 mapping versus assessment at a single ventricular level. For assessment of T1 mapping precision, a cohort of 15 healthy volunteers underwent two CMR scans on separate days using an 11-heartbeat MOLLI with a 5(3)3 beat scheme to measure native T1 and a 4(1)3(1)2 beat post-contrast scheme to measure post-contrast T1, allowing calculation of partition coefficient and ECV. To assess correlation of T1 mapping with collagen volume fraction, a separate cohort of ten aortic stenosis patients scheduled to undergo surgery underwent one CMR scan with this 11-heartbeat MOLLI scheme, followed by intraoperative tru-cut myocardial biopsy. Six models of myocardial diffuse fibrosis assessment were established with incremental inclusion of imaging by averaging of the basal and mid-myocardial left ventricular levels, and each model was assessed for precision and correlation with collagen volume fraction. A model using 11-heartbeat MOLLI imaging of two basal and two mid ventricular level averaged T1 maps provided improved precision (intraclass correlation 0.93 vs 0.84) and correlation with histology (R² = 0.83 vs 0.36) for diffuse fibrosis compared to a single mid-ventricular level alone. ECV was more precise and correlated better than native T1 mapping. T1 mapping sequences with repeated averaging could be considered for applications of 11-heartbeat MOLLI, especially when small changes in native T1/ECV might affect clinical management.
Hydrothermal analysis in engineering using control volume finite element method
Sheikholeslami, Mohsen
2015-01-01
Control volume finite element methods (CVFEM) bridge the gap between finite difference and finite element methods, combining the advantages of both for the simulation of multi-physics problems in complex geometries. In Hydrothermal Analysis in Engineering Using Control Volume Finite Element Method, CVFEM is covered in detail and applied to key areas of thermal engineering. Examples, exercises, and extensive references are used to show the use of the technique to model key engineering problems such as heat transfer in nanofluids (to enhance performance and compactness of energy systems),
[Weighted-averaging multi-planar reconstruction method for multi-detector row computed tomography].
Aizawa, Mitsuhiro; Nishikawa, Keiichi; Sasaki, Keita; Kobayashi, Norio; Yama, Mitsuru; Sano, Tsukasa; Murakami, Shin-ichi
2012-01-01
Development of multi-detector row computed tomography (MDCT) has enabled three-dimensional (3D) scanning with minute voxels. Minute voxels improve the spatial resolution of CT images; at the same time, however, they increase image noise. Multi-planar reconstruction (MPR) is an effective 3D image-processing technique. The conventional MPR technique can adjust the slice thickness of MPR images. When a thick slice is used, the image noise is decreased, but spatial resolution deteriorates. To deal with this trade-off, we have developed the weighted-averaging multi-planar reconstruction (W-MPR) technique to control the balance between spatial resolution and noise. The weighted average is determined by a Gaussian-type weighting function. In this study, we compared the performance of W-MPR with that of conventional simple-addition-averaging MPR. As a result, we confirmed that W-MPR can decrease image noise without significant deterioration of spatial resolution. W-MPR can freely adjust the weight for each slice by changing the shape of the weighting function; it therefore allows a proper balance of spatial resolution and noise to be selected, producing MPR images suitable for observation of the targeted anatomical structures.
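A minimal sketch of the weighted-averaging step (assumed Gaussian weights and synthetic slice data, not the authors' implementation): neighbouring slices are combined with normalized Gaussian weights instead of a simple boxcar average, so nearby slices contribute more than distant ones.

```python
import numpy as np

# Sketch: Gaussian-weighted averaging of a stack of reconstruction planes.
def gaussian_weights(n_slices, sigma):
    """Normalized Gaussian weights centred on the middle slice."""
    z = np.arange(n_slices) - (n_slices - 1) / 2.0
    w = np.exp(-0.5 * (z / sigma) ** 2)
    return w / w.sum()

def weighted_mpr(slices, sigma=1.0):
    """slices: (n_slices, H, W) stack; returns one weighted-average plane."""
    w = gaussian_weights(slices.shape[0], sigma)
    return np.tensordot(w, slices, axes=1)

stack = np.random.default_rng(0).normal(100.0, 10.0, size=(5, 8, 8))
plane = weighted_mpr(stack, sigma=1.0)
```

Narrowing sigma approaches a single thin slice (full resolution, full noise); widening it approaches the simple-addition average (low noise, blurred through-plane detail).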
On-line Measuring Method for Shell Chamber Volume
Institute of Scientific and Technical Information of China (English)
ZHANG Li-zhong; WANG De-min; JIANG Tao; CAO Guo-hua; WANG Qi
2005-01-01
Using the ideal gas state equation, an on-line method for measuring shell chamber volume is studied in this paper. After analyzing how the various measurement parameters affect the measurement accuracy, the system parameters of the method are optimized. Because the shapes and volumes of the tested items are similar, a "tamping" procedure is put forward to raise the accuracy and speed of the measurement. On this basis, a prototype testing instrument for shell chamber volume, with automatic testing and control, was developed. Compared with the "water weight" method, this method is more accurate, quicker and more automated, so it is suitable for on-line detection.
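The underlying gas-law relation can be sketched as follows (hypothetical reference volume and pressures; the sketch assumes an isothermal expansion of a charged reference volume into an initially evacuated chamber, which is one common pycnometer arrangement, not necessarily the instrument described above).

```python
# Sketch: chamber volume from an isothermal gas expansion.
# p1 * v_ref = p2 * (v_ref + v_chamber)  =>  v_chamber = v_ref * (p1/p2 - 1)
def chamber_volume(v_ref, p1, p2):
    """v_ref: reference volume; p1: pressure before, p2: after expansion."""
    return v_ref * (p1 / p2 - 1.0)

v = chamber_volume(v_ref=100.0, p1=200.0, p2=150.0)   # arbitrary units
# v = 100 * (200/150 - 1) = 33.33... volume units
```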
Acker, James G.; Uz, Stephanie Schollaert; Shen, Suhung; Leptoukh, Gregory G.
2010-01-01
Application of appropriate spatial averaging techniques is crucial to correct evaluation of ocean color radiometric data, due to the common log-normal or mixed log-normal distribution of these data. The averaging method is particularly crucial for data acquired in coastal regions. The effect of averaging method was markedly demonstrated for a precipitation-driven event on the U.S. Northeast coast in October-November 2005, which resulted in export of high concentrations of riverine colored dissolved organic matter (CDOM) to New York and New Jersey coastal waters over a period of several days. Use of the arithmetic mean averaging method created an inaccurate representation of the magnitude of this event in SeaWiFS global mapped chl a data, causing it to be visualized as a very large chl a anomaly. The apparent chl a anomaly was enhanced by the known incomplete discrimination of CDOM and phytoplankton chlorophyll in SeaWiFS data; other data sources enable an improved characterization. Analysis using the geometric mean averaging method did not indicate this event to be statistically anomalous. Our results demonstrate the necessity of providing the geometric mean averaging method for ocean color radiometric data in the Goddard Earth Sciences DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni).
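The effect is easy to reproduce on synthetic log-normal data (illustrative parameters, not SeaWiFS values): for a log-normal sample, the arithmetic mean is inflated by the long right tail, which is how a localized high-concentration event can appear as a large spurious anomaly under arithmetic averaging while geometric averaging stays near the distribution's median.

```python
import numpy as np

# Sketch: arithmetic vs geometric mean of a log-normal "chl a"-like sample.
rng = np.random.default_rng(42)
chl = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

arith = chl.mean()                       # tends to exp(0 + 0.5*1^2) ~ 1.65
geo = np.exp(np.log(chl).mean())         # tends to exp(0) = 1 (the median)
# The arithmetic mean sits well above the typical value of the sample.
```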
Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks
Directory of Open Access Journals (Sweden)
Shen-Chun Wu
2003-01-01
Full Text Available This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior was observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.
Comparison of methods to quantify volume during resistance exercise.
McBride, Jeffrey M; McCaulley, Grant O; Cormie, Prue; Nuzzo, James L; Cavill, Michael J; Triplett, N Travis
2009-01-01
The purpose of this investigation was to compare 4 different methods of calculating volume when comparing resistance exercise protocols of varying intensities. Ten Appalachian State University students experienced in resistance exercise completed 3 different resistance exercise protocols on different days using a randomized, crossover design, with 1 week of rest between each protocol. The protocols included 1) hypertrophy: 4 sets of 10 repetitions in the squat at 75% of a 1-repetition maximum (1RM) (90-second rest periods); 2) strength: 11 sets of 3 repetitions at 90% 1RM (5-minute rest periods); and 3) power: 8 sets of 6 repetitions of jump squats at 0% 1RM (3-minute rest periods). The volume of resistance exercise completed during each protocol was determined with 4 different methods: 1) volume load (VL) (repetitions [no.] x external load [kg]); 2) maximum dynamic strength volume load (MDSVL) (repetitions [no.] x [body mass - shank mass (kg) + external load (kg)]); 3) time under tension (TUT) (eccentric time [milliseconds] + concentric time [milliseconds]); and 4) total work (TW) (force [N] x displacement [m]). The volumes differed significantly (p < 0.05) between hypertrophy and strength in comparison with the power protocol when VL and MDSVL were used to determine the volume of resistance exercise completed. Furthermore, significant differences in TUT existed between all 3 resistance exercise protocols. The TW calculated was not significantly different between the 3 protocols. These data imply that each method examined results in substantially different values when comparing various resistance exercise protocols involving different levels of intensity.
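The four definitions translate directly into formulas (the inputs below are illustrative, not the study's measurements):

```python
# Sketch of the four volume definitions compared in the study.
def volume_load(reps, load_kg):
    """VL: repetitions x external load (kg)."""
    return reps * load_kg

def mdsvl(reps, body_mass_kg, shank_mass_kg, load_kg):
    """MDSVL: repetitions x (body mass - shank mass + external load)."""
    return reps * (body_mass_kg - shank_mass_kg + load_kg)

def time_under_tension(ecc_ms, con_ms):
    """TUT: eccentric time + concentric time (ms)."""
    return ecc_ms + con_ms

def total_work(force_n, displacement_m):
    """TW: force (N) x displacement (m)."""
    return force_n * displacement_m

vl = volume_load(reps=40, load_kg=100.0)   # e.g. 4 sets x 10 reps at 100 kg
```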
Volume Sculpting Using the Level-Set Method
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Christensen, Niels Jørgen
2002-01-01
In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.
Estimation of Extreme Values by the Average Conditional Exceedance Rate Method
Directory of Open Access Journals (Sweden)
A. Naess
2013-01-01
Full Text Available This paper details a method for extreme value prediction on the basis of a sampled time series. The method is specifically designed to account for statistical dependence between the sampled data points in a precise manner. In fact, if properly used, the new method will provide statistical estimates of the exact extreme value distribution provided by the data in most cases of practical interest. It avoids the problem of having to decluster the data to ensure independence, which is a requisite component in the application of, for example, the standard peaks-over-threshold method. The proposed method also targets the use of subasymptotic data to improve prediction accuracy. The method will be demonstrated by application to both synthetic and real data. From a practical point of view, it seems to perform better than the POT and block extremes methods, and, with an appropriate modification, it is directly applicable to nonstationary time series.
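A first-order empirical sketch of the ACER idea follows (toy data; the paper's method adds higher-order conditioning and subasymptotic extrapolation, which are not reproduced here). The first-order rate is simply the fraction of points exceeding a threshold; the second-order rate conditions on the preceding point not exceeding it, which is where statistical dependence between samples enters.

```python
import numpy as np

# Sketch: empirical average conditional exceedance rates.
def acer1(x, eta):
    """First order: unconditional fraction of points exceeding eta."""
    return np.mean(np.asarray(x) > eta)

def acer2(x, eta):
    """Second order: exceedance rate given the previous point was <= eta."""
    x = np.asarray(x)
    prev_below = x[:-1] <= eta
    exceed = x[1:] > eta
    return np.mean(exceed[prev_below])

x = np.array([0.1, 0.5, 1.2, 0.3, 2.0, 0.7])
rate1 = acer1(x, eta=1.0)    # 2 of 6 points exceed 1.0
rate2 = acer2(x, eta=1.0)
```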
High order finite volume methods for singular perturbation problems
Institute of Scientific and Technical Information of China (English)
CHEN ZhongYing; HE ChongNan; WU Bin
2008-01-01
In this paper we establish a high order finite volume method for fourth-order singular perturbation problems. In conjunction with optimal meshes, the numerical solutions resulting from the method have optimal convergence order. Numerical experiments are presented to verify our theoretical estimates.
The volume of fluid method in spherical coordinates
Janse, A.M.C.; Dijk, P.E.; Kuipers, J.A.M.
2000-01-01
The volume of fluid (VOF) method is a numerical technique to track the developing free surfaces of liquids in motion. This method can, for example, be applied to compute the liquid flow patterns in a rotating cone reactor. For this application a spherical coordinate system is most suited. The novel
Institute of Scientific and Technical Information of China (English)
ZHU HaiPing; HOU QinFu; ZHOU ZongYan; YU AiBing
2009-01-01
A particulate system can be described through the discrete approach at the microscopic level or through the continuum approach at the macroscopic level. It is very significant to develop methods linking the two approaches, allowing models to be built that give a better understanding of the fundamentals of particulate systems. Several averaging methods have been proposed for this purpose in the past, but they mainly focused on cohesionless particle systems. In this work, a more general averaging method is proposed by extending it to cohesive particle systems. The application of the method to the particle-fluid flow in a gas fluidized bed is studied. The density, velocity and stress of this flow are examined. A detailed discussion has been conducted to understand the dependence of the averaged variables on sample size.
Xu, Peng; Yao, Dezhong; Luo, Fen
2005-08-01
The registration method based on mutual information is currently a popular technique for medical image registration, but the computation of the mutual information is complex and the registration speed is slow. In engineering practice, a subsampling technique is used to accelerate registration at the cost of registration accuracy. In this paper a new method based on statistical sampling theory is developed, which has both higher speed and higher accuracy compared with the normal subsampling method; the simulation results confirm the validity of the new method.
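A joint-histogram estimate of mutual information, with and without subsampling, can be sketched as follows (synthetic images; the bin count and subsampling factor are assumptions, and this plug-in estimator is the standard baseline, not the paper's improved sampling scheme):

```python
import numpy as np

# Sketch: mutual information from a 2-D joint histogram.
def mutual_information(a, b, bins=16):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = img + rng.normal(0.0, 5.0, size=img.shape)    # "registered" pair

mi_full = mutual_information(img, noisy)              # all pixels
mi_sub = mutual_information(img[::4, ::4], noisy[::4, ::4])  # subsampled
```

Subsampling shrinks the joint histogram's sample count, which speeds up the computation but increases the variance of the estimate; that speed/accuracy trade-off is what the abstract's sampling-theory approach targets.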
Halassi, A.; Ouazar, D.; Taik, A.
2015-10-01
A vertical 2Dxz laterally averaged hydrodynamic model is presented in this paper to study the aeration process in lakes. The system exhibits highly nonlinear behaviour due to the phenomena involved such as stratification, air concentration, and convective terms. The suggested model is used to simulate mechanical aeration to overcome and prevent the eutrophication in lakes. The multiquadric radial basis functions are used to solve numerically the governing partial differential equations. Because of the difficulty and the complexity when choosing a suitable shape parameter in radial basis functions, an alternative way is introduced in this work to overcome these difficulties. A validation study is carried out using several test examples, including Poisson, Navier-Stokes and transport equations. Finally, the proposed model is first applied to simulate a squared domain aeration problem and then a real test case has been considered. The obtained results are in good agreement with the results reported in the literature.
Xue, Ya-juan; Cao, Jun-xing; Du, Hao-kun; Zhang, Gu-lan; Yao, Yao
2016-09-01
Empirical mode decomposition (EMD)-based spectral decomposition methods have been successfully used for hydrocarbon detection. However, mode mixing that occurs during the sifting process of EMD causes the 'true' intrinsic mode function (IMF) to be extracted incorrectly and blurs the physical meaning of the IMF. We address the issue of how the mode mixing influences the EMD-based methods for hydrocarbon detection by introducing mode-mixing elimination methods, specifically ensemble EMD (EEMD) and complete ensemble EMD (CEEMD)-based highlight volumes, as feasible tools that can identify the peak amplitude above average volume and the peak frequency volume. Three schemes, that is, using all IMFs, selected IMFs or weighted IMFs, are employed in the EMD-, EEMD- and CEEMD-based highlight volume methods. When these methods were applied to seismic data from a tight sandstone gas field in Central Sichuan, China, the results demonstrated that the amplitude anomaly in the peak amplitude above average volume captured by EMD, EEMD and CEEMD combined with Hilbert transforms, whether using all IMFs, selected IMFs or weighted IMFs, are almost identical to each other. However, clear distinctions can be found in the peak frequency volume when comparing results generated using all IMFs, selected IMFs, or weighted IMFs. If all IMFs are used, the influence of mode mixing on the peak frequency volume is not readily discernable. However, using selected IMFs or a weighted IMFs' scheme affects the peak frequency in relation to the reservoir thickness in the EMD-based method. Significant improvement in the peak frequency volume can be achieved in EEMD-based highlight volumes using selected IMFs. However, if the weighted IMFs' scheme is adopted (i.e., if the undesired IMFs are included with reduced weights rather than excluded from the analysis entirely), the CEEMD-based peak frequency volume provides a more accurate reservoir thickness estimate compared with the other two methods. This
DEFF Research Database (Denmark)
Iversen, Theis Faber Quist; Hanson, Steen Grüner; Kirkegaard, Peter
2009-01-01
Micro-optical elements are of great importance in areas of optoelectronics and information processing. Establishing fast, reliable methods for characterization and quality control of these elements is important in order to maintain the optical performance in a high volume production process. We i...
Preserving energy resp. dissipation in numerical PDEs using the "Average Vector Field" method
Celledoni, E; McLachlan, R I; McLaren, D I; O'Neale, D; Owren, B; Quispel, G R W
2012-01-01
We give a systematic method for discretizing Hamiltonian partial differential equations (PDEs) with constant symplectic structure, while preserving their energy exactly. The same method, applied to PDEs with constant dissipative structure, also preserves the correct monotonic decrease of energy. The method is illustrated by many examples. In the Hamiltonian case these include: the sine-Gordon, Korteweg-de Vries, nonlinear Schrodinger, (linear) time-dependent Schrodinger, and Maxwell equations. In the dissipative case the examples are: the Allen-Cahn, Cahn-Hilliard, Ginzburg-Landau, and heat equations.
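A minimal ODE illustration of the AVF discretization (a toy harmonic oscillator, not one of the paper's PDE examples): the AVF step replaces the vector field by its average along the chord from x_n to x_{n+1}. For a linear vector field that average collapses to the implicit midpoint evaluation, and the quadratic energy is preserved to the tolerance of the fixed-point solve.

```python
import numpy as np

# Sketch: average vector field (AVF) step
#   (x1 - x0)/h = \int_0^1 f((1-s) x0 + s x1) ds,
# which for linear f equals f((x0 + x1)/2).
def avf_step(x, h, f, iters=50):
    x1 = x.copy()
    for _ in range(iters):                 # fixed-point iteration
        x1 = x + h * f((x + x1) / 2.0)
    return x1

f = lambda x: np.array([x[1], -x[0]])      # q' = p, p' = -q (H = (q^2+p^2)/2)
x = np.array([1.0, 0.0])
for _ in range(1000):
    x = avf_step(x, h=0.1, f=f)
energy = 0.5 * (x ** 2).sum()              # stays at the initial value 0.5
```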
Huang, Lei
2015-09-30
To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state variables. Unknown time-varying estimators of the observation noise are used to obtain its estimated mean and variance. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required.
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
Study of runaway electrons using the conditional average sampling method in the Damavand tokamak
Energy Technology Data Exchange (ETDEWEB)
Pourshahab, B., E-mail: bpourshahab@gmail.com [University of Isfahan, Department of Nuclear Engineering, Faculty of Advance Sciences and Technologies (Iran, Islamic Republic of); Sadighzadeh, A. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of); Abdi, M. R., E-mail: r.abdi@phys.ui.ac.ir [University of Isfahan, Department of Physics, Faculty of Science (Iran, Islamic Republic of); Rasouli, C. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of)
2017-03-15
Some experiments for studying the runaway electron (RE) effects have been performed using the poloidal magnetic probes system installed around the plasma column in the Damavand tokamak. In these experiments, the so-called runaway-dominated discharges were considered in which the main part of the plasma current is carried by REs. The induced magnetic effects on the poloidal pickup coils signals are observed simultaneously with the Parail–Pogutse instability moments for REs and hard X-ray bursts. The output signals of all diagnostic systems enter the data acquisition system with 2 Msample/(s channel) sampling rate. The temporal evolution of the diagnostic signals is analyzed by the conditional average sampling (CAS) technique. The CASed profiles indicate RE collisions with the high-field-side plasma facing components at the instability moments. The investigation has been carried out for two discharge modes—low-toroidal-field (LTF) and high-toroidal-field (HTF) ones—related to both up and down limits of the toroidal magnetic field in the Damavand tokamak and their comparison has shown that the RE confinement is better in HTF discharges.
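The CAS technique itself reduces to event-triggered ensemble averaging, sketched here on a synthetic signal (window width, trigger placement and spike amplitude are assumptions): windows of the diagnostic signal centred on trigger events are averaged so that the coherent response survives while uncorrelated fluctuations average out.

```python
import numpy as np

# Sketch: conditional average sampling of windows around trigger events.
def cas(signal, triggers, half_width):
    windows = [signal[t - half_width : t + half_width + 1]
               for t in triggers
               if t - half_width >= 0 and t + half_width < len(signal)]
    return np.mean(windows, axis=0)

rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, size=2000)          # background fluctuations
triggers = np.arange(100, 1900, 100)           # synthetic event times
for t in triggers:
    sig[t] += 5.0                              # coherent spike at each event

profile = cas(sig, triggers, half_width=10)    # conditionally averaged shape
```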
Phase-rectified signal averaging as a new method for surveillance of growth restricted fetuses.
Lobmaier, S M; Huhn, E A; Pildner von Steinburg, S; Müller, A; Schuster, T; Ortiz, J U; Schmidt, G; Schneider, K T
2012-12-01
This study aims to compare average acceleration capacity (AAC), a new parameter to assess the dynamic capacity of the fetal autonomous nervous system, and short term variation (STV) in fetuses affected by intrauterine growth restriction (IUGR) and healthy fetuses. A prospective observational study was performed, including 39 women with IUGR singleton pregnancies (estimated fetal weight 95th percentile) and 43 healthy control pregnancies matched according to gestational age at recording. Ultrasound biometry and Doppler examination were performed for identification of IUGR and control fetuses, with subsequent analysis of fetal heart rate yielding STV and AAC. Follow-up for IUGR and control pregnancies was done, with perinatal outcome variables recorded. AAC [IUGR mean value 2.0 bpm (interquartile range = 1.6-2.1), control 2.7 bpm (2.6-3.0)] differentiates better than STV [IUGR 7.4 ms (5.3-8.9), control 10.9 ms (9.2-12.7)] between IUGR and control. The area under the curve for AAC was 0.97 (95% CI 0.95-1.0), and for STV 0.85 (95% CI 0.76-0.93; p < 0.01). The positive predictive value for STV is 77% and the negative predictive value is 81%; for AAC, both positive and negative predictive values are 90%. AAC shows an improvement in discriminating between normal and compromised fetuses at a single moment in time, in comparison with STV.
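The core PRSA step can be sketched on a synthetic beat-to-beat series (the anchor rule and window width are assumptions; the study's AAC is derived from the resulting averaged curve): anchor points are samples where the series changes in a chosen direction, and windows around all anchors are averaged, rectifying the phase so the characteristic response around such events emerges.

```python
import numpy as np

# Sketch: phase-rectified signal averaging with "increase" anchors
# (decrease anchors would be used for acceleration-type analysis).
def prsa(x, half_width=5):
    anchors = [i for i in range(1, len(x)) if x[i] > x[i - 1]]
    windows = [x[i - half_width : i + half_width + 1]
               for i in anchors
               if i - half_width >= 0 and i + half_width < len(x)]
    return np.mean(windows, axis=0)

rng = np.random.default_rng(3)
rr = 800 + np.cumsum(rng.normal(0.0, 5.0, size=1000))  # synthetic RR series
curve = prsa(rr)
# The averaged curve shows a positive step at the anchor position, since
# every anchor is by construction preceded by an increase.
```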
A new method for the measurement of meteorite bulk volume via ideal gas pycnometry
Li, Shijie; Wang, Shijie; Li, Xiongyao; Li, Yang; Liu, Shen; Coulson, Ian M.
2012-10-01
To date, of the many techniques used to measure the bulk volume of meteorites, only three methods (Archimedean bead method, 3-D laser imaging and X-ray microtomography) can be considered as nondestructive or noncontaminating. The bead method can show large, random errors for sample sizes of smaller than 5 cm3. In contrast, 3-D laser imaging is a high-accuracy method even when measuring the bulk volumes of small meteorites. This method is both costly and time consuming, however, and meteorites of a certain shape may lead to some uncertainties in the analysis. The method of X-ray microtomography suffers from the same problems as 3-D laser imaging. This study outlines a new method of high-accuracy, nondestructive and noncontaminating measurement of the bulk volume of meteorite samples. In order to measure the bulk volume of a meteorite, one must measure the total volume of the balloon vacuum packaged meteorite and the volume of balloon that had been used to enclose the meteorite using ideal gas pycnometry. The difference between the two determined volumes is the bulk volume of the meteorite. Through the measurement of zero porosity metal spheres and tempered glass fragments, our results indicate that for a sample which has a volume of between 0.5 and 2 cm3, the relative error of the measurement is less than ±0.6%. Furthermore, this error will be even smaller (less than ±0.1%) if the determined sample size is larger than 2 cm3. The precision of this method shows some volume dependence. For samples smaller than 1 cm3, the standard deviations are less than ±0.328%, and these values will fall to less than ±0.052% for samples larger than 2 cm3. The porosities of nine fragments of Jilin, GaoGuenie, Zaoyang and Zhaodong meteorites have been measured using our vacuum packaging-pycnometry method, with determined average porosities of Jilin, GaoGuenie, Zaoyang and Zhaodong of 9.0307%, 2.9277%, 17.5437% and 5.9748%, respectively. These values agree well with the porosities
Khamatnurova, M. Y.; Gribanov, K. G.
2015-11-01
Levenberg-Marquardt (LM) parameter selection for methane vertical profile retrieval from IASI/METOP spectra is presented. A modified technique for iterative calculation of averaging kernels and a posteriori errors for every spectrum is suggested. A method known from the literature is extended to the case where a priori statistics for methane vertical profiles are absent. Software for mass processing of IASI spectra was developed. The effect of the LM parameter selection on the averaging kernel norm and the a posteriori errors is illustrated. NCEP reanalysis data provided by ESRL (NOAA, Boulder, USA) were taken as the initial guess. Surface temperature and the temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval.
Institute of Scientific and Technical Information of China (English)
Yongjun Wu; Wang Fang
2008-01-01
The first-passage statistics of a Duffing-Rayleigh-Mathieu system under wide-band colored noise excitations are studied by using the stochastic averaging method. The motion equation of the original system is transformed into two time-homogeneous diffusion Markovian processes of amplitude and phase after stochastic averaging. The diffusion process method for the first-passage problem is used, and the corresponding backward Kolmogorov equation and Pontryagin equation are constructed and solved to yield the conditional reliability function and mean first-passage time with suitable initial and boundary conditions. The analytical results are confirmed by Monte Carlo simulation.
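The kind of Monte Carlo check mentioned at the end can be sketched for a generic averaged amplitude equation. The linear drift, noise level and barrier below are arbitrary stand-ins, not the Duffing-Rayleigh-Mathieu coefficients:

```python
import math
import random

def mean_first_passage(drift=1.0, sigma=0.5, a0=0.2, barrier=0.8,
                       dt=1e-2, n_paths=200, t_max=20.0, seed=1):
    """Euler-Maruyama estimate of the mean first-passage time of a
    linear (Ornstein-Uhlenbeck-like) amplitude process over a barrier."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_paths):
        x, t = a0, 0.0
        while t < t_max:
            x += -drift * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
            if abs(x) >= barrier:
                times.append(t)
                break
    return sum(times) / len(times) if times else math.inf

print(0.0 < mean_first_passage() < 20.0)   # a finite, positive estimate
```

In the paper this role is played by the Pontryagin equation; the simulation only serves as an independent cross-check of such analytical results.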
Method for measuring anterior chamber volume by image analysis
Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli
2007-12-01
Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients with ocular diseases such as glaucoma, yet it is always difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files converted from medical images produced by the anterior-chamber optical coherence tomographer (AC-OCT) and its image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation, although the manual preprocessing of the images should be further simplified.
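The volume step itself reduces to integrating segmented cross-sectional areas over the slice spacing. A minimal sketch follows; the areas and spacing are hypothetical, and the paper's VC++ image-analysis pipeline is not reproduced:

```python
def volume_from_slices(areas_mm2, spacing_mm):
    """Approximate a chamber volume by summing segmented cross-sectional
    areas times the slice spacing (a simple Riemann-sum sketch)."""
    return sum(areas_mm2) * spacing_mm

areas = [10.2, 12.8, 14.1, 13.5, 11.0, 8.4]   # hypothetical segmented areas
v = volume_from_slices(areas, 0.25)           # hypothetical 0.25 mm spacing
print(round(v, 2))                            # -> 17.5 (mm^3)
```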
The efficiency of the centroid method compared to a simple average
DEFF Research Database (Denmark)
Eskildsen, Jacob Kjær; Kristensen, Kai; Nielsen, Rikke
Based on empirical data as well as a simulation study, this paper gives recommendations with respect to situations where a simple average of the manifest indicators can be used as a close proxy for the centroid method and when it cannot.
Energy Technology Data Exchange (ETDEWEB)
Silva, Cleomacio Miguel da; Amaral, Romilton dos Santos; Santos Junior, Jose Araujo dos; Vieira, Jose Wilson; Leoterio, Dilmo Marques da Silva [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear. Grupo de Radioecologia (RAE)], E-mail: cleomaciomiguel@yahoo.com.br; Amaral, Ademir [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear. Grupo de Estudos em Radioprotecao e Radioecologia
2007-07-01
The distribution of natural radionuclides in samples from typically anomalous environments generally shows significant asymmetry as a result of outliers. To diminish statistical fluctuation, researchers in radioecology commonly use the geometric mean or the median, since the arithmetic average is not stable under the effect of outliers. As the median is not affected by anomalous values, this parameter of central tendency is the one most frequently employed for evaluating a set of data containing discrepant values. On the other hand, Efron presented a non-parametric method, the so-called bootstrap, that can be used to decrease the dispersion around the central-tendency value. Generally, in radioecology, statistical procedures are used in order to reduce the effect of anomalous values on averages. In this context, the present study had as its objective to evaluate the application of the non-parametric bootstrap method (BM) for determining the average concentration of ²²⁶Ra in forage palms (Opuntia spp.) cultivated in soils with a uranium anomaly on dairy farms located in the cities of Pedra and Venturosa, Pernambuco, Brazil, as well as to discuss the utilization of this method in radioecology. The ²²⁶Ra results in forage palm samples varied from 1,300 to 25,000 mBq.kg⁻¹ (dry matter), with an arithmetic average of 5,965.86 ± 5,903.05 mBq.kg⁻¹. The average obtained using the BM was 5,963.82 ± 1,202.96 mBq.kg⁻¹ (dry matter). The BM allowed an automatic filtration of the experimental data, without the elimination of outliers, reducing the dispersion around the average. As a result, the BM yielded an arithmetic average that is stable against the effects of the outliers. (author)
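The bootstrap step can be sketched in a few lines. The activity values below are invented for illustration and are not the paper's raw data:

```python
import random
import statistics

def bootstrap_mean(data, n_boot=10000, seed=42):
    """Non-parametric bootstrap of the arithmetic mean: resample with
    replacement, average each resample, then summarize the resulting
    distribution of means (its spread replaces the raw-data dispersion)."""
    rng = random.Random(seed)
    boot_means = [statistics.mean(rng.choices(data, k=len(data)))
                  for _ in range(n_boot)]
    return statistics.mean(boot_means), statistics.stdev(boot_means)

ra226 = [1300, 2100, 3500, 4200, 5100, 6900, 8800, 25000]  # mBq/kg, made up
center, spread = bootstrap_mean(ra226)
print(round(center), round(spread))
```

Note that the bootstrap spread (the standard error of the mean) is much smaller than the raw standard deviation inflated by the outlier, which mirrors the reduction reported in the abstract.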
Institute of Scientific and Technical Information of China (English)
Fan-wen Meng; Hui-fu Xu
2006-01-01
In this paper, we propose a Sample Average Approximation (SAA) method for a class of Stochastic Mathematical Programs with Complementarity Constraints (SMPCC) recently studied in the literature, and investigate the convergence of SAA estimators. In particular, we show that under moderate conditions a sequence of weak stationary points of the SAA programs converges to a weak stationary point of the true problem with probability approaching one at an exponential rate as the sample size tends to infinity. To implement the SAA method more efficiently, we combine it with techniques such as Scholtes' regularization method and the well-known smoothing NCP method. Some preliminary numerical results are reported.
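The basic SAA idea, stripped of the complementarity constraints, can be illustrated on a toy stochastic program min_x E[(x - xi)^2], whose SAA solution is simply the sample mean. Everything below is an illustrative assumption, not the paper's SMPCC algorithm:

```python
import random

def saa_minimizer(sample):
    """The SAA objective (1/N) * sum((x - xi_k)**2) is minimized in
    closed form by the sample mean of the xi_k."""
    return sum(sample) / len(sample)

rng = random.Random(0)
for n in (10, 100, 10000):
    xs = [rng.gauss(3.0, 1.0) for _ in range(n)]
    print(n, round(saa_minimizer(xs), 3))  # approaches the true optimum 3.0
```

The exponential-rate convergence claimed in the abstract is exactly this phenomenon: the SAA solution concentrates around the true solution as the sample size N grows.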
Directory of Open Access Journals (Sweden)
Yao-Ching Wang
Respiratory motion causes uncertainties in tumor edges on both computed tomography (CT) and positron emission tomography (PET) images and causes misalignment when registering PET and CT images. This phenomenon may lead radiation oncologists to delineate tumor volume inaccurately in radiotherapy treatment planning. The purpose of this study was to analyze radiology applications using interpolated average CT (IACT) for attenuation correction (AC) to diminish the occurrence of this scenario. Thirteen non-small cell lung cancer patients were recruited for the present comparison study. Each patient had full-inspiration and full-expiration CT images and free-breathing PET images from an integrated PET/CT scan. IACT for AC in PET(IACT) was used to reduce the PET/CT misalignment. The standardized uptake value (SUV) correction with a low radiation dose was applied, and its tumor volume delineation was compared to those from HCT/PET(HCT). The misalignment between PET(IACT) and IACT was reduced compared to the difference between PET(HCT) and HCT. The range of tumor motion was from 4 to 17 mm in the patient cohort. For HCT and PET(HCT), correction was from 72% to 91%, while for IACT and PET(IACT), correction was from 73% to 93% (*p<0.0001). The maximum and minimum differences in SUVmax were 0.18% and 27.27% for PET(HCT) and PET(IACT), respectively. The largest percentage differences in the tumor volumes between HCT/PET and IACT/PET were observed in tumors located in the lowest lobe of the lung. Internal tumor volume defined by functional information using IACT/PET(IACT) fusion images for lung cancer would reduce the inaccuracy of tumor delineation in radiation therapy planning.
A FINITE VOLUME ELEMENT METHOD FOR THERMAL CONVECTION PROBLEMS
Institute of Scientific and Technical Information of China (English)
芮洪兴
2004-01-01
Consider the finite volume element method for the thermal convection problem with infinite Prandtl number. The author uses a conforming piecewise linear function on a fine triangulation for velocity and temperature, and a piecewise constant function on a coarse triangulation for pressure. For a general triangulation, optimal-order H1-norm error estimates are given.
Different partial volume correction methods lead to different conclusions
DEFF Research Database (Denmark)
Greve, Douglas N; Salat, David H; Bowen, Spencer L
2016-01-01
A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) usin...
Patouillard, Edith; Kleinschmidt, Immo; Hanson, Kara; Pok, Sochea; Palafox, Benjamin; Tougher, Sarah; O'Connell, Kate; Goodman, Catherine
2013-09-05
There is increased interest in using commercial providers for improving access to quality malaria treatment. Understanding their current role is an essential first step, notably in terms of the volume of diagnostics and anti-malarials they sell. Sales volume data can be used to measure the importance of different provider and product types, the frequency of parasitological diagnosis and the impact of interventions. Several methods for measuring sales volumes are available, yet all have methodological challenges and evidence is lacking on the comparability of different methods. Using sales volume data on anti-malarials and rapid diagnostic tests (RDTs) for malaria collected through provider recall (RC) and retail audits (RA), this study measures the degree of agreement between the two methods at wholesale and retail commercial providers in Cambodia following the Bland-Altman approach. Relative strengths and weaknesses of the methods were also investigated through qualitative research with fieldworkers. A total of 67 wholesalers and 107 retailers were sampled. Wholesale sales volumes were estimated through both methods for 62 anti-malarials and 23 RDTs, and retail volumes for 113 anti-malarials and 33 RDTs. At wholesale outlets, RA estimates for anti-malarial sales were on average higher than RC estimates (mean difference of four adult equivalent treatment doses, 95% CI 0.6 to 7.2), equivalent to 30% of mean sales volumes. For RDTs at wholesalers, the between-method mean difference was not statistically significant (one test, 95% CI -6.0 to 4.0). At retail outlets, between-method differences for both anti-malarials and RDTs increased with larger volumes being measured, so mean differences were not a meaningful measure of agreement between the methods. Qualitative research revealed that in Cambodia where sales volumes are small, RC had key advantages: providers were perceived to remember more easily their sales volumes and find RC less invasive; fieldworkers found it more
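The Bland-Altman computation itself is small: the bias is the mean between-method difference and the 95% limits of agreement are bias ± 1.96 SD. The paired recall/audit volumes below are invented for illustration:

```python
import statistics

def bland_altman(recall, audit):
    """Bland-Altman agreement between two measurement methods: mean
    difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diffs = [a - r for r, a in zip(recall, audit)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rc = [10, 14, 7, 22, 5, 18, 9, 12]   # hypothetical recall volumes
ra = [12, 15, 9, 25, 4, 21, 10, 13]  # hypothetical audit volumes
bias, lo, hi = bland_altman(rc, ra)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```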
Modeling of composite piezoelectric structures with the finite volume method.
Bolborici, Valentin; Dawson, Francis P; Pugh, Mary C
2012-01-01
Piezoelectric devices, such as piezoelectric traveling-wave rotary ultrasonic motors, have composite piezoelectric structures. A composite piezoelectric structure consists of a combination of two or more bonded materials, at least one of which is a piezoelectric transducer. Piezoelectric structures have mainly been numerically modeled using the finite element method. An alternative approach based on the finite volume method offers the following advantages: 1) the ordinary differential equations resulting from the discretization process can be interpreted directly as corresponding circuits; and 2) phenomena occurring at boundaries can be treated exactly. This paper presents a method for implementing the boundary conditions between the bonded materials in composite piezoelectric structures modeled with the finite volume method. The paper concludes with a modeling example of a unimorph structure.
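The exact interface treatment that motivates the finite volume choice can be illustrated in a thermal-conduction analogue: the flux between control volumes of dissimilar bonded materials uses the harmonic mean of the two conductivities, which enforces flux continuity at the bond without any averaging assumption. This is a hedged 1D sketch, not the paper's piezoelectric model:

```python
def solve_two_material_rod(k_left, k_right, n=40, T0=0.0, T1=1.0):
    """Steady 1D conduction across a bonded bar of two materials.
    Interface faces use harmonic-mean conductivity (exact flux matching);
    boundary faces sit half a cell from the boundary values T0, T1."""
    k = [k_left if i < n // 2 else k_right for i in range(n)]
    T = [0.0] * n
    for _ in range(20000):                        # simple Gauss-Seidel sweeps
        for i in range(n):
            kw = 2*k[i-1]*k[i]/(k[i-1]+k[i]) if i > 0 else 2*k[i]
            ke = 2*k[i]*k[i+1]/(k[i]+k[i+1]) if i < n-1 else 2*k[i]
            Tw = T[i-1] if i > 0 else T0
            Te = T[i+1] if i < n-1 else T1
            T[i] = (kw*Tw + ke*Te) / (kw + ke)
    return T

T = solve_two_material_rod(1.0, 5.0)
print(round(T[19], 4))   # last cell of the soft material, near the exact 0.8125
```

With k_left = 1 and k_right = 5 the converged profile reproduces the exact piecewise-linear solution at the cell centres, because the material interface coincides with a control-volume face.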
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Mattucci, Stephen F E; Cronin, Duane S
2015-01-01
Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments have not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piecewise curve fit with first-derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s⁻¹, 20 s⁻¹, and 150-250 s⁻¹, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fits. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons; the approach could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates.
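The continuity condition described above amounts to matching the slope of adjacent regions at the transition point. A minimal sketch with a quadratic toe and a linear region (all coefficients invented, not the paper's fitted values, and the traumatic region crudely simplified to a plateau):

```python
def ligament_force(d, a=100.0, d0=1.0, f_max=500.0, d_fail=4.0):
    """Piecewise toe/linear/traumatic force-displacement curve with
    first-derivative continuity at the toe-to-linear transition d0.
    Toe: F = a*d^2; linear slope k = 2*a*d0 equals the toe slope at d0."""
    k = 2.0 * a * d0                 # C1 continuity condition
    if d <= d0:
        return a * d * d
    if d > d_fail:
        return 0.0                   # post-failure
    return min(a * d0 ** 2 + k * (d - d0), f_max)

# Value agrees at the transition; the slope check is done numerically below.
print(ligament_force(1.0), ligament_force(1.5))
```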
Yu, Yang; Wei, Wei; Chen, Li-ding; Yang, Lei; Zhang, Han-dan
2015-04-01
Based on 57 years (1957-2013) of daily precipitation datasets from 85 meteorological stations in the Loess Plateau region, different spatial interpolation methods, including ordinary kriging (OK), inverse distance weighting (IDW) and radial basis functions (RBF), were applied to analyze the regional spatial variation of annual average precipitation. Meanwhile, the mean absolute error (MAE), the root mean square error (RMSE), the accuracy (AC) and the Pearson correlation coefficient (R) were compared among the interpolation results in order to quantify the effects of the different interpolation methods on the spatial variation of the annual average precipitation. The results showed that the Moran's I index was 0.67 for the 57-year annual average precipitation in the Loess Plateau region; the meteorological stations exhibited strong spatial correlation. The validation results for the 63 training stations and 22 test stations indicated significant correlations between the training and test values for all interpolation methods. However, the RMSE (IDW = 51.49, RBF = 43.79) and MAE (IDW = 38.98, RBF = 34.61) of IDW and RBF were higher than those of OK. In addition, the comparison of the four semivariogram models (circular, spherical, exponential and Gaussian) for OK indicated that the circular model had the lowest MAE (32.34) and the highest accuracy (0.976), while the MAE of the exponential model was the highest (33.24). In conclusion, comparing the validation between the training data and test results of the different spatial interpolation methods, the circular model of the OK method was the best for obtaining accurate spatial interpolation of annual average precipitation in the Loess Plateau region.
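Of the interpolators compared, IDW is the simplest to state: the estimate is a weighted mean of station values with weights proportional to an inverse power of distance. A hedged sketch with made-up station coordinates and precipitation values (not the Loess Plateau data):

```python
def idw(x, y, stations, power=2.0):
    """Inverse distance weighting: estimate at (x, y) as a weighted mean
    of station values with weights 1/d**power."""
    num = den = 0.0
    for (sx, sy, value) in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value                  # exactly at a station
        w = d2 ** (-power / 2.0)
        num += w * value
        den += w
    return num / den

# Hypothetical stations: (x, y, annual precipitation in mm)
obs = [(0, 0, 400.0), (1, 0, 520.0), (0, 1, 460.0), (1, 1, 610.0)]
print(round(idw(0.5, 0.5, obs), 1))      # -> 497.5, the mean of the four
```

At the centre all four distances are equal, so the estimate degenerates to the simple mean, which makes the example easy to verify by hand.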
The element-based finite volume method applied to petroleum reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Cordazzo, Jonas; Maliska, Clovis R.; Silva, Antonio F.C. da; Hurtado, Fernando S.V. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica
2004-07-01
In this work a numerical model for simulating petroleum reservoirs using the Element-based Finite Volume Method (EbFVM) is presented. The method employs unstructured grids using triangular and/or quadrilateral elements, such that complex reservoir geometries can be easily represented. Due to the control-volume approach, local mass conservation is enforced, permitting a direct physical interpretation of the resulting discrete equations. It is demonstrated that this method can deal with permeability maps without averaging procedures, since the scheme assumes uniform properties inside elements instead of inside control volumes, avoiding the need to weight the permeability values at the control-volume interfaces. Moreover, it is easy to include the full permeability tensor in this method, which is an important issue in simulating heterogeneous and anisotropic reservoirs. Finally, a comparison is presented between the results obtained using the scheme proposed in this work in the EbFVM framework and those obtained employing the scheme commonly used in petroleum reservoir simulation. It is also shown that the proposed scheme is less susceptible to the grid orientation effect as the mobility ratio increases. (author)
Energy Technology Data Exchange (ETDEWEB)
Kim, Jin Sub; An, Seok Chan; Ko, Tae Kuk [Yonsei University, Seoul (Korea, Republic of); Chu, Yong [National Fusion Research Institute(NFRI), Daejeon (Korea, Republic of)
2016-09-15
A quench detection system for the KSTAR Poloidal Field (PF) coils is indispensable for stable operation, because the normal zone overheats when a quench occurs. Recently, a new voltage-based quench detection method, a combination of Central Difference Averaging (CDA) and Mutual Inductance Compensation (MIK), which compensates mutual inductive voltage more effectively than the conventional voltage detection method, has been suggested and studied. For better cancellation of the mutual induction from adjacent coils by the CDA+MIK method in the KSTAR coil system, the balance coefficients of CDA must first be estimated and adjusted. In this paper, the balance coefficients of CDA for the KSTAR PF coils were numerically estimated; the estimated coefficients were then adopted and tested in simulation. The CDA method with these balance coefficients effectively eliminated the mutual inductive voltage, and it is expected to improve the performance of the CDA+MIK method for quench detection of the KSTAR PF coils.
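The CDA idea can be sketched as a balanced difference of tap voltages: inductive pickup common to adjacent taps cancels, while a resistive (quench) voltage on the centre coil survives. The balance coefficients and voltages below are illustrative, not the estimated KSTAR values:

```python
def cda_voltage(v_prev, v_coil, v_next, b_prev=0.5, b_next=0.5):
    """Central Difference Averaging sketch: subtract a weighted average
    of the neighbouring tap voltages; b_prev and b_next are the balance
    coefficients that would be tuned per coil."""
    return v_coil - (b_prev * v_prev + b_next * v_next)

# Mutual-inductive voltage appearing equally on all three taps cancels:
print(cda_voltage(2.0, 2.0, 2.0))              # -> 0.0
# A resistive quench voltage on the centre coil survives:
print(round(cda_voltage(2.0, 2.6, 2.0), 6))    # -> 0.6
```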
Non Destructive Method for Biomass Prediction Combining TLS Derived Tree Volume and Wood Density
Directory of Open Access Journals (Sweden)
Jan Hackenberg
2015-04-01
This paper presents a method for predicting the above-ground leafless biomass of trees in a non-destructive way. We utilize terrestrial laser scan data to predict the volume of the trees; combining volume estimates with density measurements leads to biomass predictions. Thirty-six trees of three different species are analyzed: the evergreen conifer Pinus massoniana, the evergreen broadleaved Erythrophleum fordii and the leafless deciduous Quercus petraea. All scans include a large number of noise points; denoising procedures are presented in detail. Density values are considered to be a minor source of error in the method if applied to stem segments, as comparison with ground-truth data reveals that prediction errors for the tree volumes are in accordance with the biomass prediction errors. While tree compartments with a diameter larger than 10 cm can be modeled accurately, smaller ones, especially twigs with a diameter smaller than 4 cm, are often largely overestimated. Better prediction results could be achieved by applying a biomass expansion factor to the biomass of compartments with a diameter larger than 10 cm. With this second method the average prediction error for Q. petraea could be reduced from 33.84% overestimation to 3.56%. E. fordii results could also be improved, reducing the average prediction error from
Directory of Open Access Journals (Sweden)
Jiraporn Janwised
2014-01-01
We introduce a new technique, a three-level average linear-implicit finite difference method, for solving the Rosenau-Burgers equation. A numerical solution of the Rosenau-Burgers equation with second-order accuracy in both space and time is obtained using a five-point stencil. We prove the existence and uniqueness of the numerical solution. Moreover, the convergence and stability of the numerical solution are also shown. The numerical results show that our method improves the accuracy of the solution significantly.
Finite volume method for investigating anisotropic conductivity in EEG
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
A novel finite volume method is presented to investigate the effect of anisotropic conductivity on the potential distribution on the scalp. Second-order interpolation on the tetrahedral mesh is used to avoid employing a secondary element, and the calculation precision is enhanced by Gaussian integration. To avoid geometric singularities, triangular prisms are employed in place of the conventional hexahedral mesh. With this method, spherical models as well as realistic head models are simulated. The calculation results indicate that the anisotropy ratio and the position of the dipole sources have a great influence on the potential distribution in the electroencephalogram.
Finite Volume Evolution Galerkin Methods for Nonlinear Hyperbolic Systems
Lukáčová-Medvid'ová, M.; Saibertová, J.; Warnecke, G.
2002-12-01
We present new truly multidimensional schemes of higher order within the framework of finite volume evolution Galerkin (FVEG) methods for systems of nonlinear hyperbolic conservation laws. These methods couple a finite volume formulation with approximate evolution operators. The latter are constructed using the bicharacteristics of the multidimensional hyperbolic system, such that all of the infinitely many directions of wave propagation are taken into account. Following our previous results for the wave equation system, we derive approximate evolution operators for the linearized Euler equations. The integrals along the Mach cone and along the cell interfaces are evaluated exactly, as well as by means of numerical quadratures, and the influence of these quadratures is discussed. Second-order resolution is obtained using a conservative piecewise bilinear recovery and the midpoint rule approximation for time integration. We prove error estimates for the finite volume evolution Galerkin scheme for linear systems with constant coefficients. Several numerical experiments for the nonlinear Euler equations, which confirm the accuracy and good multidimensional behavior of the FVEG schemes, are presented as well.
Energy Technology Data Exchange (ETDEWEB)
Alexoff, D.L.; Alexoff, D.L.; Dewey, S.L.; Vaska, P.; Krishnamoorthy, S.; Ferrieri, R.; Schueller, M.; Schlyer, D.; Fowler, J.S.
2011-03-01
PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape, while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean ± S.D.) escaping the leaf parenchyma were measured to be 59 ± 1.1%, 64 ± 4.4% and 67 ± 1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
Kurihara, Yosuke; Watanabe, Kajiro; Kobayashi, Kazuyuki; Tanaka, Tanaka
Sleep disorders disturb recovery from mental and physical fatigue, one of the functions of sleep. The majority of those with such disorders suffer from Sleep Apnea Syndrome (SAS). Continuous hypoxia during sleep due to SAS causes circulatory disturbances, such as hypertension and ischemic heart disease, malfunction of the autonomic nervous system, and other severe complications, often bringing the sufferers to death. To prevent this, it is important to detect SAS at an early stage by monitoring daily respiration during sleep, and to provide appropriate treatment at medical institutions. In this paper, a pneumatic method to detect apnea periods during sleep is proposed. The pneumatic method can measure heartbeat and respiration signals. The respiration signal can be considered noise against the heartbeat signal, and the decrease in the respiration signal due to apnea increases the average mutual information of the heartbeat. The result of a scaling analysis of the average mutual information is used as the threshold to detect apnea periods. The root mean square error between the lengths of apnea measured by a strain gauge, used as reference, and those measured by the proposed method was 3.1 seconds, and the error in the number of apnea episodes judged by a doctor versus the proposed method in OSAS patients was 3.3.
Energy Technology Data Exchange (ETDEWEB)
Delcourte, S
2007-09-15
We aim to develop a finite volume method which applies to a wider class of meshes than other finite volume methods, which are restricted by orthogonality constraints. We build discrete differential operators over the three staggered tessellations needed for the construction of the method; these operators satisfy properties analogous to those of the continuous operators. At first, the method is applied to the Div-Curl problem, which can be viewed as a building block of the Stokes problem. Then the Stokes problem is treated with various boundary conditions. It is well known that when the computational domain is polygonal and non-convex, the order of convergence of numerical methods deteriorates. Consequently, we have studied how an appropriate local refinement is able to restore the optimal order of convergence for the Laplacian problem. Finally, we have discretized the non-linear Navier-Stokes problem, using the rotational formulation of the convection term associated with the Bernoulli pressure. With an iterative algorithm, we are led to solve a saddle-point problem at each iteration. We pay particular attention to this linear problem, testing preconditioners originating from finite elements and adapting them to our method. Each problem is illustrated by numerical results on arbitrary meshes, such as strongly non-conforming meshes. (author)
Directory of Open Access Journals (Sweden)
A. D. Culf
2000-01-01
Three hours of high-frequency vertical wind speed and carbon dioxide concentration data recorded over tropical forest in Brazil are presented and discussed in relation to various detrending techniques used in eddy correlation analysis. Running means with time constants of 100, 1000 and 1875 s and a 30-minute linear detrend, as commonly used to determine fluxes, have been calculated for each case study and are presented. It is shown that, for different trends in the background concentration of carbon dioxide, the different methods can lead to the calculation of radically different fluxes over an hourly period. The examples emphasise the need for caution when interpreting eddy-correlation-derived fluxes, especially for short-term process studies. Keywords: Eddy covariance; detrending; running mean; carbon dioxide; tropical forest
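A recursive running mean of the kind compared here can be sketched as follows. The sampling interval, time constant and synthetic wind/concentration series are invented, not the Brazilian tower data:

```python
import random

def detrended_flux(w, c, dt=0.1, tau=1000.0):
    """Eddy-covariance flux estimate using a recursive (exponential)
    running mean to detrend the scalar series; w is assumed to have
    negligible mean in this sketch, so only c is detrended."""
    alpha = dt / (tau + dt)
    c_bar, cov = c[0], 0.0
    for wi, ci in zip(w, c):
        c_bar += alpha * (ci - c_bar)      # slowly tracking background
        cov += wi * (ci - c_bar)           # w times detrended c'
    return cov / len(w)

rng = random.Random(3)
w = [rng.gauss(0.0, 0.3) for _ in range(5000)]           # vertical wind (m/s)
c = [380.0 + 0.002 * i + 5.0 * wi                         # CO2 with slow trend
     for i, wi in enumerate(w)]
print(detrended_flux(w, c) > 0.0)   # upward flux recovered despite the trend
```

The choice of tau is exactly the sensitivity the abstract warns about: a short time constant removes part of the real flux signal, while a long one leaves residual background trend in the fluctuations.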
Keshvari, Jafar; Heikkilä, Teemu
2011-12-01
Previous studies comparing SAR differences in the heads of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR differences in the heads of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies of 900 MHz and 1747 MHz for the Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for the Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models depends strongly on the structure of the antenna and the phone model, which suggests that the type of exposure source is the main parameter to focus on in EMF exposure studies. Although the previous findings regarding the significant roles of head anatomy, phone position, frequency, local tissue inhomogeneity and tissue composition in the exposed area were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to real device exposures. The general conclusion is that, from a volume-averaged SAR point of view, no systematic differences between child and adult heads were found.
Karpushkin, T. Yu.
2012-12-01
A technique to calculate the burnup of materials of cells and fuel assemblies using the matrices of first-flight neutron collision probabilities rebuilt at a given burnup step is presented. A method to rebuild and correct first collision probability matrices using average chords prior to the first neutron collision, which are calculated with the help of geometric modules of constructed stochastic neutron trajectories, is described. Results of calculation of the infinite multiplication factor for elementary cells with a modified material composition compared to the reference one as well as calculation of material burnup in the cells and fuel assemblies of a VVER-1000 are presented.
Directory of Open Access Journals (Sweden)
Havmöller Rasmus
2007-10-01
Abstract Background: The study was designed to investigate the effect of different measuring methodologies on the estimation of P wave duration. The recording length required to ensure reproducibility in unfiltered, signal-averaged P wave analysis was also investigated, and an algorithm for automated classification was designed and its reproducibility against manual P wave morphology classification assessed. Methods: Twelve-lead ECG recordings (1 kHz sampling frequency, 0.625 μV resolution) from 131 healthy subjects were used. Orthogonal leads were derived using the inverse Dower transform. Magnification (100 times), baseline filtering (0.5 Hz high-pass and 50 Hz bandstop filters), signal averaging (10 seconds) and bandpass filtering (40-250 Hz) were used to investigate the effect of methodology on the estimated P wave duration. Unfiltered, signal-averaged P wave analysis was performed to determine the required recording length (6 minutes to 10 s) and the reproducibility of the P wave morphology classification procedure. Manual classification was carried out by two experts on two separate occasions each. The performance of the automated classification algorithm was evaluated against the joint decision (consensus) of the two experts. Results: The estimate of the P wave duration increased in each step as a result of magnification, baseline filtering and averaging (100 ± 18 vs. 131 ± 12 ms), reaching 138 ± 13 ms (0.1 μV) and 143 ± 18 ms (0.05 μV) (P = 0.01 for all comparisons). The mean errors associated with the P wave morphology parameters were comparable in all segments analysed regardless of recording length (95% limits of agreement within 0 ± 20%, mean ± SD). The results of the 6-min analyses were comparable to those obtained at the other recording lengths (6 min to 10 s). The intra-rater classification reproducibility was 96%, while the inter-rater reproducibility was 94%. The automated classification algorithm agreed with the
Application of vector finite volume method for electromagnetic flow simulation
Energy Technology Data Exchange (ETDEWEB)
Takata, T.; Murashige, R.; Matsumoto, T.; Yamaguchi, A. [Osaka Univ., Suita, Osaka (Japan)
2011-07-01
A vector finite volume method (VFVM) has been developed for an electromagnetic flow analysis. In the VFVM, the governing equations of magnetic flux density and electric field intensity are solved separately so as to reduce the computational cost caused by an iterative procedure that is required to satisfy the solenoidal condition. In the present paper, a suppression of temperature fluctuation of liquid sodium after a T-junction has also been investigated with a simplified two dimensional numerical analysis by adding an obstacle (turbulence promoter) or a magnetic field after the junction. (author)
New Approach for Error Reduction in the Volume Penalization Method
Iwakami-Nakano, Wakana; Hatakeyama, Nozomu; Hattori, Yuji
2012-01-01
The volume penalization method offers an efficient way to numerically simulate flows around complex-shaped bodies which move and/or deform in general. In this method a penalization term, which has permeability eta and a mask function, is added to a governing equation as a forcing term in order to impose different dynamics in solid and fluid regions. In this paper we investigate the accuracy of the volume penalization method in detail. We choose the one-dimensional Burgers' equation as the governing equation since it permits extensive study and has a nonlinear term similar to that of the Navier-Stokes equations. It is confirmed that the error, which consists of the discretization/truncation error, the penalization error, the round-off error, and others, has the same features as in previous results when the standard definition of the mask function is used. As the number of grid points increases, the error converges to a non-zero constant which is equal to the penalization error. We propose a new approach for reduc...
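The penalization term described above can be sketched concretely for the 1D Burgers' equation: a forcing −χ(x)(u − u_s)/η drives the solution toward the solid value u_s = 0 wherever the mask χ = 1. The grid, viscosity, permeability and mask geometry below are our illustrative choices, not the paper's actual test cases:

```python
import numpy as np

def burgers_penalized(n=200, nu=0.05, eta=1e-5, t_end=0.5):
    """1D Burgers' equation with a volume-penalization forcing term.

    chi = 1 marks the 'solid' region [0.4, 0.6], where the term
    -chi*(u - 0)/eta drives u toward the solid velocity 0; chi = 0 in
    the fluid. Upwind convection and central diffusion are explicit;
    the stiff penalization term is treated implicitly so the time step
    is not limited by eta.
    """
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    u = np.sin(2 * np.pi * x)                      # initial condition
    chi = ((x > 0.4) & (x < 0.6)).astype(float)    # mask function
    dt = 0.2 * min(dx / (np.abs(u).max() + 1e-12), dx**2 / (2 * nu))
    t = 0.0
    while t < t_end:
        step = min(dt, t_end - t)
        up, um = np.roll(u, -1), np.roll(u, 1)     # periodic neighbors
        conv = np.where(u > 0, u * (u - um) / dx, u * (up - u) / dx)
        diff = nu * (up - 2 * u + um) / dx**2
        rhs = u + step * (diff - conv)
        u = rhs / (1.0 + step * chi / eta)         # implicit penalization
        t += step
    return x, u, chi

x, u, chi = burgers_penalized()
```

Refining the grid reduces the discretization error, but the total error saturates at the penalization error, which shrinks only as η → 0; that saturation is the behavior the abstract's new approach targets.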
A volume-based method for denoising on curved surfaces
Biddle, Harry
2013-09-01
We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
Thielens, Arno; Vermeeren, Günter; Joseph, Wout; Martens, Luc
2013-10-01
The organ-specific averaged specific absorption rate (SARosa) in a heterogeneous human body phantom, the Virtual Family Boy, is determined for the first time in five realistic electromagnetic environments at the Global System for Mobile Communications downlink frequency of 950 MHz. We propose two methods based upon a fixed set of finite-difference time-domain (FDTD) simulations for generating cumulative distribution functions for the SARosa in a certain environment: an accurate vectorial cell-wise spline interpolation with an average error lower than 1.8%, and a faster scalar linear interpolation with a maximal average error of 14.3%. These errors depend on the angular steps chosen for the FDTD simulations. However, it is demonstrated that both methods provide the same shape of the cumulative distribution function for the studied organs in the considered environments. The SARosa depends on the considered organ and the environment. Two factors influencing the SARosa are investigated for the first time: the organ's conductivity-to-mass-density ratio, and the distance from the organ's center of gravity to the surface and exterior of the phantom. A non-linear regression with our model provides a correlation of 0.80. The SARosa due to single plane-wave exposure is also investigated; a worst-case single plane-wave exposure is determined for all studied organs and has been compared with realistic SARosa values. There is no fixed worst-case polarization for all organs, and a single plane-wave exposure condition that exceeds 91% of the SARosa values in a certain environment can always be found for the studied organs. © 2013 Wiley Periodicals, Inc.
Liu, Bin; Tang, Jingshi; Hou, Xiyun; Liu, Lin
2016-07-01
The eccentricity and the inclination of a satellite in geosynchronous orbit are both small. Under these conditions, perturbations from the Earth's non-spherical gravitational field give rise to orbital resonances with small denominators arising from the commensurability of the orbit; that is, the problems of small eccentricity, small inclination and commensurability-induced small denominators exist simultaneously. An orbit is usually described with the classical Keplerian elements. However, in the case of small eccentricities and small inclinations, the geometric meaning of the perigee and ascending node of a GEO is no longer clear, and the equations of motion contain small denominators, which causes the usual mean-element perturbation solution to fail. This singularity is caused by an inappropriate choice of independent variables and has nothing to do with the dynamics; it can be avoided by choosing appropriate variables (non-singular orbital elements). The commensurability singularity appears in the process of solving the perturbation equations by the mean-element method. The quasi-average element method retains the main advantages of the mean-element method while reasonably revising its definition: the quasi-average orbit, free of short-periodic terms but retaining the long-term variations, is taken as the reference orbit. Because the reference orbit carries long-term variations, which resemble the long-periodic terms within a short time span, the failure of the perturbation solution caused by the periodic terms in the classical perturbation method or the mean-element method is avoided. From the perspective of mechanics, this eliminates the commensurability singularity, and the perturbation solution remains valid. This paper introduces the calculation method that eliminates both the singularity at e = 0, i = 0 and the commensurability singularity by using the quasi-average element method.
Energy Technology Data Exchange (ETDEWEB)
Starkov, A. S. [St. Petersburg National Research University of Information Technologies, Mechanics and Optics, Institute of Refrigeration and Biotechnology (Russian Federation); Starkov, I. A., E-mail: ferroelectrics@ya.ru [Brno University of Technology, SIX Research Centre (Czech Republic)
2014-11-15
It is proposed to use a generalized matrix averaging (GMA) method for calculating the parameters of an effective medium with physical properties equivalent to those of a set of thin multiferroic layers. This approach obviates the need to solve a complex system of magnetoelectroelasticity equations. The required effective characteristics of a system of multiferroic layers are obtained using only operations with matrices, which significantly simplifies calculations and allows multilayer systems to be described. The proposed approach is applicable to thin-layer systems, in which the total thickness is much less than the system length, radius of curvature, and wavelengths of waves that can propagate in the system (long-wave approximation). Using the GMA method, it is also possible to obtain the effective characteristics of a periodic structure with each period comprising a number of thin multiferroic layers.
Teaching Thermal Hydraulics & Numerical Methods: An Introductory Control Volume Primer
Energy Technology Data Exchange (ETDEWEB)
D. S. Lucas
2004-10-01
A graduate level course for Thermal Hydraulics (T/H) was taught through Idaho State University in the spring of 2004. A numerical approach was taken for the content of this course since the students were employed at the Idaho National Laboratory and had been users of T/H codes. The majority of the students had expressed an interest in learning about the Courant Limit, mass error, and semi-implicit and implicit numerical integration schemes in the context of a computer code. Since no introductory text was found, the author developed notes drawing on his own research and on courses taught for Westinghouse on the subject. The course started with a primer on control volume methods and the construction of a Homogeneous Equilibrium Model (HEM) T/H code. The primer was valuable for giving the students the basics behind such codes and their evolution into more complex codes for Thermal Hydraulics and Computational Fluid Dynamics (CFD). The course covered additional material including the Finite Element Method and non-equilibrium T/H. The control volume primer and the construction of a three-equation (mass, momentum and energy) HEM code are the subject of this paper. The Fortran version of the code covered in this paper is elementary compared to its descendants. The steam tables used are less accurate than the available commercial version, which is written in C and coupled to a Graphical User Interface (GUI). The Fortran version and input files can be downloaded at www.microfusionlab.com.
The calculation method of mixing volume in a products pipeline
Energy Technology Data Exchange (ETDEWEB)
Gong, Jing; Wang, Qim [China University of Petroleum, Beijing, (China); Wang, Weidongn [Sinopec South China Sales Company, (China); Guo, Yi [CNPC Oil and Gas pipeline control center, (China)
2010-07-01
This paper investigated methods of calculating the mixing volume in a products pipeline. A simulation method was developed by combining the Austin-Palfrey empirical formula with field data. The field data were introduced to improve the accuracy of the Austin-Palfrey formula by including other factors such as the terrain, the structure of the pipeline, the characteristics of mixed oil products in pumping stations and the distribution of products along the pipeline. These factors were extracted from field data and analyzed statistically to deduce correction coefficients. Comparison with field results showed that the modified contamination formula provided accurate values, and that accuracy improved further when the characteristics of the specific field pipeline were used. The formula is suitable for field application.
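For orientation, the base formula being corrected here has the Austin-Palfrey form: a mixing-zone length that grows with the square root of travelled distance and pipe diameter and decays weakly with Reynolds number, with the mixed volume following from the pipe cross-section. The coefficients K, a and b below are placeholders for illustration only; the paper's point is precisely that such coefficients should be recalibrated from field data (terrain, pipeline structure, pumping stations, product distribution):

```python
import math

def mixing_volume(d, L, Re, K=11.75, a=0.5, b=-0.1):
    """Estimate the contaminated (mixed) volume for a batch interface
    travelling a distance L in a pipe of diameter d (both in metres).

    Uses the general Austin-Palfrey-type form S = K * (d*L)**a * Re**b,
    where S is the mixing-zone length. K, a, b are illustrative
    placeholders, not the correlation's published or field-calibrated
    coefficients.
    """
    s = K * (d * L) ** a * Re ** b      # mixing-zone length [m]
    area = math.pi * d ** 2 / 4.0       # pipe cross-section [m^2]
    return s * area                     # mixed volume [m^3]
```

The qualitative behavior matches the correlation family: a longer transfer distance lengthens the mixing zone, while a higher Reynolds number (stronger turbulence) shortens it.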
Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu
2017-04-01
The measurement of sediment concentration in water is of great importance in soil erosion research and soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all the other measuring methods and of instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially in field measurements. A new method is advanced in this study for accurately measuring the sediment concentration based on the accurate measurement of the mass of the sediment-water mixture in a confined constant volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, from which the sediment concentration of the sample is calculated. The influence of water temperature was corrected for by measuring the water temperature to determine the water density before measurements were conducted. The CVC was used to eliminate the surface tension effect so as to obtain the accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m‑3. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and averaged relative errors less than 0.2%. All of this indicates that the new method is capable of measuring a full range of sediment concentrations above 0.5 kg m‑3 and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
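The mass balance behind the CVC measurement can be made explicit: topping the container up with water means any sediment present adds its own mass while displacing an equal volume of water. A minimal sketch, with the variable names and default densities being our assumptions rather than the paper's values:

```python
def sediment_concentration(m_sample, m_total, v_cvc,
                           rho_s=2650.0, rho_w=998.2):
    """Sediment concentration (kg/m^3) from two weighings of a confined
    constant-volume container (CVC).

    m_sample : mass of the sediment-laden sample put in the CVC [kg]
    m_total  : mass of contents after topping the CVC up with water [kg]
    v_cvc    : calibrated CVC volume [m^3]
    rho_s    : sediment particle density [kg/m^3]
    rho_w    : water density at the measured temperature [kg/m^3]

    The excess of m_total over a water-filled CVC is due to sediment
    occupying (and so displacing) an equal volume of water:
        m_total - rho_w * v_cvc = m_sed * (1 - rho_w / rho_s)
    """
    excess = m_total - rho_w * v_cvc
    m_sed = excess / (1.0 - rho_w / rho_s)
    # volume of the original sample: sediment volume + its water volume
    v_sample = m_sed / rho_s + (m_sample - m_sed) / rho_w
    return m_sed / v_sample
```

Using the temperature-corrected water density for `rho_w`, as the paper describes, is what keeps the displaced-water term accurate.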
Directory of Open Access Journals (Sweden)
I. Morino
2010-12-01
Full Text Available Column-averaged volume mixing ratios of carbon dioxide and methane retrieved from the Greenhouse gases Observing SATellite (GOSAT) Short-Wavelength InfraRed observations (GOSAT SWIR X_{CO2} and X_{CH4}) were compared with the reference data obtained by ground-based high-resolution Fourier Transform Spectrometers (g-b FTSs) participating in the Total Carbon Column Observing Network (TCCON).
Through calibrations of g-b FTSs with airborne in-situ measurements, the uncertainties of X_{CO2} and X_{CH4} associated with the g-b FTS were determined to be 0.8 ppm (~0.2%) and 4 ppb (~0.2%), respectively. The GOSAT products are validated with these calibrated g-b FTS data. Preliminary results are as follows: the GOSAT SWIR X_{CO2} and X_{CH4} (Version 01.xx) are biased low by 8.85 ± 4.75 ppm (2.3 ± 1.2%) and 20.4 ± 18.9 ppb (1.2 ± 1.1%), respectively. The precision of the GOSAT SWIR X_{CO2} and X_{CH4} is considered to be about 1%. The latitudinal distributions of zonal means of the GOSAT SWIR X_{CO2} and X_{CH4} show features similar to those of the g-b FTS data.
An implicit δf particle-in-cell method with sub-cycling and orbit averaging for Lorentz ions
Sturdevant, Benjamin J.; Parker, Scott E.; Chen, Yang; Hause, Benjamin B.
2016-07-01
A second order implicit δf Lorentz ion hybrid model with sub-cycling and orbit averaging has been developed to study low-frequency, quasi-neutral plasmas. Models using the full Lorentz force equations of motion for ions may be useful for verifying gyrokinetic ion simulation models in applications where higher order terms may be important. In the presence of a strong external magnetic field, previous Lorentz ion models are limited to simulating very short time scales due to the small time step required for resolving the ion gyromotion. Here, we use a simplified model for Landau-damped ion acoustic waves in a uniform magnetic field as a test bed for developing efficient time stepping methods to be used with the Lorentz ion hybrid model. A detailed linear analysis of the model is derived to validate simulations and to examine the significance of ion Bernstein waves in the Lorentz ion model. Linear analysis of a gyrokinetic ion model is also performed, and excellent agreement with the dispersion results from the Lorentz ion model is demonstrated for the ion acoustic wave. The sub-cycling/orbit-averaging algorithm is shown to produce accurate finite-Larmor-radius effects using large macro-time step sizes, and numerical damping of high frequency fluctuations can be achieved by formulating the field model in terms of the perturbed flux density. Furthermore, a CPU-GPU implementation of the sub-cycling/orbit averaging is presented and is shown to achieve a significant speedup over an equivalent serial code.
Directory of Open Access Journals (Sweden)
Don-Roger Parkinson
2016-02-01
Full Text Available Water samples were collected and analyzed for conductivity, pH, temperature and trihalomethanes (THMs) during the fall of 2014 at two monitored municipal drinking water source ponds. Both spot (or grab) and time weighted average (TWA) sampling methods were assessed over the same two-day sampling period. For spot sampling, replicate samples were taken at each site and analyzed within 12 h of sampling by both headspace (HS-) and direct immersion (DI-) solid phase microextraction (SPME) sampling/extraction methods followed by Gas Chromatography/Mass Spectrometry (GC/MS). For TWA, a two-day passive on-site TWA sampling was carried out at the same sampling points in the ponds. All SPME sampling methods used a 65-µm PDMS/DVB SPME fiber, which was found optimal for THM sampling. Sampling conditions were optimized in the laboratory using calibration standards of chloroform, bromoform, bromodichloromethane, dibromochloromethane, 1,2-dibromoethane and 1,2-dichloroethane, prepared in aqueous solutions from analytical grade samples. Calibration curves for all methods with R2 values ranging from 0.985–0.998 (N = 5) over the quantitation linear range of 3–800 ppb were achieved. The different sampling methods were compared for quantification of the water samples, and results showed that the DI- and TWA-sampling methods gave better data and analytical metrics. Addition of 10% wt./vol. of (NH4)2SO4 salt to the sampling vial was found to aid extraction of THMs by increasing GC peak areas by about 10%, which resulted in lower detection limits for all techniques studied. However, for on-site TWA analysis of THMs in natural waters, the calibration standards' ionic strength conditions must be carefully matched to natural water conditions to properly quantitate THM concentrations. The data obtained from the TWA method may better reflect actual natural water conditions.
Xie, Bin; Xiao, Feng
2016-12-01
We propose a multi-moment constrained finite volume method which can simulate incompressible flows of high Reynolds number in complex geometries. Following the underlying idea of the volume-average/point-value multi-moment (VPM) method (Xie et al. (2014) [71]), this formulation is developed on arbitrary unstructured hybrid grids by employing the point values (PV) at both cell vertex and barycenter as the prognostic variables. The cell center value is updated via an evolution equation derived from a constraint condition of finite volume form, which ensures rigorous numerical conservation. Novel numerical formulations based on the local PVs over a compact stencil are proposed to enhance the accuracy, robustness and efficiency of computations on unstructured meshes of hybrid and arbitrary elements. Numerical experiments demonstrate that the present numerical model has a nearly third-order convergence rate with numerical errors much smaller than those of the VPM method. The numerical dissipation has been significantly suppressed, which facilitates numerical simulations of high Reynolds number flows in complex geometries.
Accuracy of a new bedside method for estimation of circulating blood volume
DEFF Research Database (Denmark)
Christensen, P; Waever Rasmussen, J; Winther Henneberg, S
1993-01-01
To evaluate the accuracy of a modification of the carbon monoxide method of estimating the circulating blood volume.
A finite volume method for fluctuating hydrodynamics of simple fluids
Narayanan, Kiran; Samtaney, Ravi; Moran, Brian
2015-11-01
Fluctuating hydrodynamics accounts for stochastic effects that arise at mesoscopic and macroscopic scales. We present a finite volume method for numerical solutions of the fluctuating compressible Navier-Stokes equations. Case studies for simple fluids are demonstrated via the use of two different equations of state (EOS): a perfect gas EOS, and a Lennard-Jones EOS for liquid argon developed by Johnson et al. (Mol. Phys. 1993). We extend the fourth order conservative finite volume scheme originally developed by McCorquodale and Colella (Comm. in App. Math. & Comput. Sci. 2011) to evaluate the deterministic and stochastic fluxes. The expressions for the cell-centered discretizations of the stochastic shear stress and stochastic heat flux are adopted from Espanol, P. (Physica A, 1998), where the discretizations were shown to satisfy the fluctuation-dissipation theorem. A third order Runge-Kutta scheme with weights proposed by Delong et al. (Phys. Rev. E, 2013) is used for the numerical time integration. Accuracy of the proposed scheme will be demonstrated. Comparisons of the numerical solution against theory for a perfect gas as well as liquid argon will be presented. Regularizations of the stochastic fluxes in the limit of zero mesh size will be discussed. Supported by KAUST Baseline Research Funds.
Lens array fabrication method with volume expansion property of PDMS
Jang, WonJae; Kim, Junoh; Lee, Muyoung; Lee, Jooho; Bang, Yousung; Won, Yong Hyub
2016-03-01
Conventionally, a poly(dimethylsiloxane) (PDMS) lens array is fabricated by replica molding. In this paper, we describe a simple method for fabricating a lens array using the volume-expansion property of PDMS. The PDMS substrate is prepared by spin coating on cleaned glass. After spin coating, the substrate is treated with O2 plasma to promote adhesion between the PDMS substrate and the photoresist pattern on it. Positive photoresist AZ 4330 and AZ 430K developer are used for patterning on the PDMS, following a general photolithography process. The patterned PDMS substrate is then dipped into a 1-bromododecane bath. During this process the patterned photoresist acts as a barrier, preventing the covered PDMS from reacting with the 1-bromododecane. The uncovered PDMS reacts directly with the 1-bromododecane, and its volume expands. The expansion of the PDMS depends on the absorbed 1-bromododecane volume, the dipping time, and the ratio of blocked to open area; the focal length of the lens array is controlled through these expansion factors. The scale of the patterned photoresist determines the diameter of each lens. The expansion occurs symmetrically about the center of the unblocked PDMS/1-bromododecane interface. As a result, a PDMS lens array is achieved by this process.
PERTURBATION FINITE VOLUME METHOD FOR CONVECTIVE-DIFFUSION INTEGRAL EQUATION
Institute of Scientific and Technical Information of China (English)
GAO Zhi; YANG Guowei
2004-01-01
A perturbation finite volume (PFV) method for the convective-diffusion integral equation is developed in this paper. The PFV scheme is an upwind, mixed scheme using higher-order interpolation and second-order integration approximations, and it requires the fewest nodes, similar to the standard three-point schemes: the number of nodes needed equals one plus the number of faces of the control volume. For instance, in the two-dimensional (2-D) case, only four nodes are needed for triangular grids and five nodes for Cartesian grids, respectively. The PFV scheme is applied to a number of 1-D linear and nonlinear problems, and to 2-D and 3-D flow model equations. Compared with other standard three-point schemes, the PFV scheme has much smaller numerical diffusion than the first-order upwind scheme (UDS), and its numerical accuracy is also higher than that of the second-order central scheme (CDS), the power-law scheme (PLS) and the QUICK scheme.
Jia, Hongwei; Zhao, Jun
2016-08-01
The output regulation problem of switched linear multi-agent systems with stabilisable and unstabilisable subsystems is investigated in this paper. A sufficient condition for the solvability of the problem is given. Owing to the characteristics of switched multi-agent systems, even if each agent has its own dwell time, the multi-agent systems, if viewed as an overall switched system, may not have a dwell time. To overcome this difficulty, we present a new approach, called an agent-dependent average dwell time method. Due to the limited information exchange between agents, a distributed dynamic observer network for agents is provided. Further, a distributed dynamic controller based on observer is designed. Finally, simulation results show the effectiveness of the proposed solutions.
Indian Academy of Sciences (India)
V P S Naidu; M R S Reddy
2003-12-01
Frequency domain representation of a short-term heart-rate time series (HRTS) signal is a popular method for evaluating the cardiovascular control system. The spectral parameters, viz. percentage power in the low frequency band (%PLF), percentage power in the high frequency band (%PHF), power ratio of low frequency to high frequency (PRLH), peak power ratio of low frequency to high frequency (PPRLH) and total power (TP), are extracted from the averaged power spectrum of twenty-five healthy subjects, and 16 acute anterior-wall and nine acute inferior-wall myocardial infarction (MI) patients. It is observed that parasympathetic activity predominates in healthy subjects. From this observation we conclude that during acute myocardial infarction, anterior wall MI stimulates sympathetic activity, while acute inferior wall MI stimulates parasympathetic activity. Results obtained from ARMA-based analysis of heart-rate time series signals are capable of complementing clinical examination results.
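The band-power parameters listed above can be illustrated with a plain FFT periodogram. The paper's analysis is ARMA-based and its exact band edges are not restated here, so this sketch assumes the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) limits:

```python
import numpy as np

def hrv_band_powers(rr_ms, fs=4.0, lf=(0.04, 0.15), hf=(0.15, 0.40)):
    """Spectral parameters of an evenly resampled heart-rate time series.

    rr_ms : RR-interval (or heart-rate) series, resampled at fs Hz.
    Returns (%PLF, %PHF, PRLH, TP), where percentages are taken
    relative to the total power in the LF+HF range.
    """
    x = np.asarray(rr_ms, dtype=float)
    x = x - x.mean()                          # remove the DC component
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)

    def band_power(lo, hi):
        return psd[(f >= lo) & (f < hi)].sum()

    p_lf, p_hf = band_power(*lf), band_power(*hf)
    tp = band_power(lf[0], hf[1])             # total power, 0.04-0.40 Hz
    return 100 * p_lf / tp, 100 * p_hf / tp, p_lf / p_hf, tp
```

A dominant LF oscillation (sympathetic marker) raises %PLF and PRLH; a dominant HF (respiratory/parasympathetic) oscillation does the opposite, which is the contrast the study draws between anterior- and inferior-wall MI.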
Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang
2016-01-01
Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increases, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
A finite volume method for numerical grid generation
Beale, S. B.
1999-07-01
A novel method to generate body-fitted grids based on the direct solution for three scalar functions is derived. The solution for the three scalar variables is obtained with a conventional finite volume method based on a physical space formulation. The grid is adapted or re-zoned to eliminate the residual error between the current solution and the desired solution by means of an implicit grid-correction procedure. The scalar variables are re-mapped and the process is reiterated until convergence is obtained. Calculations are performed for a variety of problems by assuming combined Dirichlet-Neumann and pure Dirichlet boundary conditions involving the use of transcendental control functions, as well as functions designed to effect grid control automatically on the basis of boundary values. The use of dimensional analysis to build stable exponential functions and other control functions is demonstrated. Automatic procedures are implemented: one based on a finite difference approximation to the Christoffel terms assuming local boundary orthogonality, and another designed to procure boundary orthogonality. The performance of the new scheme is shown to be comparable with that of conventional inverse methods when calculations are performed on benchmark problems through the application of point-by-point and whole-field solution schemes. Advantages and disadvantages of the present method are critically appraised.
Simon, M.; Bobskill, M. R.; Wilhite, A.
2012-11-01
Habitable volume is an important spacecraft design figure of merit necessary to determine the required size of crewed space vehicles, or habitats. In order to design habitats for future missions and properly compare the habitable volumes of future habitat designs with historical spacecraft, consistent methods are required both for defining the required amount of habitable volume and for estimating the habitable volume of a given layout. This paper provides a brief summary of historical habitable volume requirements and describes the appropriate application of requirements to various types of missions, particularly highlighting the appropriate application for various gravity environments. The proposed "Marching Grid Method", a structured, automatic numerical method to calculate the habitable volume of a given habitat design, is then described in detail. This method uses a set of geometric Boolean tests applied to a discrete set of points within the pressurized volume to numerically estimate the functionally usable and accessible space that comprises the habitable volume. The application of this method to zero gravity and nonzero gravity environments is also discussed. The method is then demonstrated by calculating habitable volumes for two conceptual-level habitat layouts, one for each type of gravity environment: the US Laboratory Module on ISS and the Scenario 12.0 Pressurized Core Module from the recent NASA Lunar Surface Systems studies. Results of this study include a description of the effectiveness of the method at various grid resolutions and commentary on its use for automatically evaluating interior layouts.
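A minimal version of such a grid-marching count can be sketched as follows. The caller-supplied Boolean predicate stands in for the paper's set of geometric tests (pressure-shell containment, equipment exclusion, clearance, accessibility), which are not reproduced here:

```python
import itertools

def habitable_volume(is_habitable, bounds, step):
    """Marching-grid estimate of habitable volume.

    Discretise the bounding box of the pressurized volume with spacing
    `step`, test each grid point with the Boolean predicate
    `is_habitable(x, y, z)`, and credit each passing point with one
    cell volume, step**3.

    bounds : ((x0, x1), (y0, y1), (z0, z1)) bounding box [m]
    step   : grid spacing [m]
    """
    (x0, x1), (y0, y1), (z0, z1) = bounds

    def ticks(a, b):
        # cell-centered sample points along one axis
        n = max(1, int(round((b - a) / step)))
        return [a + (i + 0.5) * step for i in range(n)]

    count = sum(1 for x, y, z in itertools.product(
        ticks(x0, x1), ticks(y0, y1), ticks(z0, z1))
        if is_habitable(x, y, z))
    return count * step ** 3
```

As the abstract notes, the estimate depends on grid resolution: halving `step` refines the boundary detail at eight times the point count.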
Fast method for dynamic thresholding in volume holographic memories
Porter, Michael S.; Mitkas, Pericles A.
1998-11-01
It is essential for parallel optical memory interfaces to incorporate processing that dynamically differentiates between data-bit values. These thresholding points will vary as a result of system noise -- due to contrast fluctuations, variations in data page composition, reference beam misalignment, etc. To maintain reasonable data integrity it is necessary to select the threshold close to its optimal level. In this paper, a neural network (NN) approach is proposed as a fast method of determining the threshold to meet the required transfer rate. The multi-layered perceptron network can be incorporated as part of a smart photodetector array (SPA). Other methods have suggested performing the operation by means of a histogram or statistical information. These approaches fail in that they unnecessarily switch to a 1-D paradigm; in this serial domain, global thresholding is pointless since sequence detection could be applied. The discussed approach is a parallel solution with less overhead than multi-rail encoding. As part of this method, a small set of values is designated as threshold-determination data bits; these are interleaved with the information data bits and are used as inputs to the NN. The approach has been tested using both simulated data and data obtained from a volume holographic memory system. Results show convergence of the training and an ability to generalize on untrained data for binary and multi-level gray-scale datapage images. Methodologies are discussed for improving the performance by proper training set selection.
Directory of Open Access Journals (Sweden)
Česenek Jan
2016-01-01
Full Text Available In this article we deal with numerical simulation of non-stationary compressible turbulent flow. Compressible turbulent flow is described by the Reynolds-Averaged Navier-Stokes (RANS) equations. This RANS system is equipped with a two-equation k-omega turbulence model. These two systems of equations are solved separately. Discretization of the RANS system is carried out by the space-time discontinuous Galerkin method, which is based on piecewise polynomial discontinuous approximation of the sought solution in space and in time. Discretization of the two-equation k-omega turbulence model is carried out by the implicit finite volume method, which is based on piecewise constant approximation of the sought solution. We present some numerical experiments to demonstrate the applicability of the method using our own code.
Directory of Open Access Journals (Sweden)
Péter Przemyslaw Ujma
2015-02-01
Sleep spindles are frequently studied for their relationship with state and trait cognitive variables, and they are thought to play an important role in sleep-related memory consolidation. Because of their frequent occurrence in NREM sleep, the detection of sleep spindles is only feasible using automatic algorithms, of which a large number are available. We compared subject averages of the spindle parameters computed by a fixed-frequency automatic detection algorithm (11-13 Hz for slow spindles, 13-15 Hz for fast spindles) and by the individual adjustment method (IAM), which uses individual frequency bands for sleep spindle detection. Fast spindle duration and amplitude are strongly correlated between the two algorithms, but there is little overlap in fast spindle density and in slow spindle parameters in general. The agreement between fixed and individually determined sleep spindle frequencies is limited, especially in the case of slow spindles; this is the most likely reason for the poor agreement between the two detection methods for slow spindle parameters. Our results suggest that while various algorithms may reliably detect fast spindles, a more sophisticated algorithm primed to individual spindle frequencies is necessary for the detection of slow spindles, as well as of individual variations in the number of spindles in general.
A method for determining average beach slope and beach slope variability for U.S. sandy coastlines
Doran, Kara S.; Long, Joseph W.; Overbeck, Jacquelyn R.
2015-01-01
The U.S. Geological Survey (USGS) National Assessment of Hurricane-Induced Coastal Erosion Hazards compares measurements of beach morphology with storm-induced total water levels to produce forecasts of coastal change for storms impacting the Gulf of Mexico and Atlantic coastlines of the United States. The wave-induced water level component (wave setup and swash) is estimated by using modeled offshore wave height and period and measured beach slope (from dune toe to shoreline) through the empirical parameterization of Stockdon and others (2006). Spatial and temporal variability in beach slope leads to corresponding variability in predicted wave setup and swash. For instance, seasonal and storm-induced changes in beach slope can lead to differences on the order of 1 meter (m) in wave-induced water level elevation, making accurate specification of this parameter and its associated uncertainty essential to skillful forecasts of coastal change. A method for calculating spatially and temporally averaged beach slopes is presented here along with a method for determining total uncertainty for each 200-m alongshore section of coastline.
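The per-section averaging described above can be sketched in a few lines; the slope values, the sectioning, and the use of the sample standard deviation as the variability measure are illustrative assumptions, not the USGS implementation:

```python
import math

def section_slope(slopes):
    """Average beach slope for one alongshore section, with a simple
    variability estimate (sample standard deviation of the measurements)."""
    n = len(slopes)
    mean = sum(slopes) / n
    var = sum((s - mean) ** 2 for s in slopes) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical dune-toe-to-shoreline slopes sampled over several surveys
# within one 200-m alongshore section
mean_slope, sigma = section_slope([0.08, 0.10, 0.09, 0.12, 0.11])
print(f"mean slope {mean_slope:.3f} +/- {sigma:.3f}")
```

In a full implementation the total uncertainty would also fold in measurement error, not just the spatial and temporal spread shown here.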
Kudryavtseva, Elena A
2012-01-01
We study the particular case of the planar $N+1$ body problem, $N\ge 2$, of the type of a planetary system with satellites. We assume that one of the bodies (the Sun) is much heavier than the other bodies ("planets" and "satellites"), that the planets are much heavier than the satellites, and that the "years" are much longer than the "months". We prove that, under a nondegeneracy condition which holds in general, there exist at least $2^{N-2}$ smooth 2-parameter families of symmetric periodic solutions in a rotating coordinate system such that the distances between each planet and its satellites are much shorter than the distances between the Sun and the planets. We describe generating symmetric periodic solutions and prove that the nondegeneracy condition is necessary. We give sufficient conditions for some periodic solutions to be orbitally stable in linear approximation. Via the averaging method, the results are extended to a class of Hamiltonian systems with fast and slow variables close to the systems of semi-d...
ACARP Project C10059. ACARP manual of modern coal testing methods. Volume 2: Appendices
Energy Technology Data Exchange (ETDEWEB)
Sakurovs, R.; Creelman, R.; Pohl, J.; Juniper, L. [CSIRO Energy Technology, Sydney, NSW (Australia)
2002-07-01
The Manual summarises the purpose, applicability, and limitations of a range of standard and modern coal testing methods that have potential to assist the coal company technologist to better evaluate coal performance. It is presented in two volumes. This second volume provides more detailed information regarding the methods discussed in Volume 1.
Directory of Open Access Journals (Sweden)
Raftery Adrian E
2009-02-01
Background: Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results: We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
Finite volume methods for submarine debris flows and generated waves
Kim, Jihwan; Løvholt, Finn; Issler, Dieter
2016-04-01
Submarine landslides can pose great danger to underwater structures and generate destructive tsunamis. Submarine debris flows often behave like visco-plastic materials, and the Herschel-Bulkley rheological model is known to be appropriate for describing their motion. In this work, we develop numerical schemes for visco-plastic debris flows using finite volume methods in Eulerian coordinates with two horizontal dimensions. We provide a parameter sensitivity analysis and demonstrate how common ad hoc assumptions, such as including a minimum shear layer depth, influence the modeling of the landslide dynamics. Hydrodynamic resistance forces, hydroplaning, and remolding are all crucial terms for underwater landslides and are hence added to the numerical formulation. The landslide deformation is coupled to the water column and simulated in the Clawpack framework. For the propagation of the tsunamis, the shallow water equations and Boussinesq-type equations are employed to assess the importance of wave dispersion. Finally, two cases in central Norway, i.e. the subaerial quick clay landslide at Byneset in 2012 and the submerged tsunamigenic Statland landslide in 2014, are presented for validation. The research leading to these results has received funding from the Research Council of Norway under grant number 231252 (Project TsunamiLand) and the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 603839 (Project ASTARTE).
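As a minimal illustration of the finite volume idea underlying such schemes (not the authors' Clawpack implementation), here is a first-order upwind update for 1-D linear advection; the grid size, CFL number, and initial bump are arbitrary choices:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind finite volume update for u_t + a u_x = 0 (a > 0),
    with periodic boundaries; cell averages are updated from interface fluxes."""
    flux_in = a * np.roll(u, 1)          # flux entering each cell from its left neighbor
    return u - (dt / dx) * (a * u - flux_in)

nx, a = 100, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a                        # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-100 * (x - 0.5) ** 2)        # smooth initial bump
total0 = u.sum()
for _ in range(50):
    u = upwind_step(u, a, dx, dt)
print(f"mass conserved: {np.isclose(u.sum(), total0)}")
```

Because each cell update only exchanges fluxes with its neighbors, the total "mass" is conserved to rounding error, which is the defining property finite volume schemes carry over to the debris-flow equations.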
Energy Technology Data Exchange (ETDEWEB)
1986-09-01
This manual provides test procedures that may be used to evaluate those properties of a solid waste that determine whether the waste is a hazardous waste within the definition of Section 3001 of the Resource Conservation and Recovery Act (PL 94-580). These methods are approved for obtaining data to satisfy the requirement of 40 CFR Part 261, Identification and Listing of Hazardous Waste. Volume IA deals with quality control, selection of appropriate test methods, and analytical methods for metallic species. Volume IB consists of methods for organic analytes. Volume IC includes a variety of test methods for miscellaneous analytes and properties for use in evaluating the waste characteristics. Volume II deals with sample acquisition and includes quality control, sampling-plan design and implementation, and field-sampling methods.
Directory of Open Access Journals (Sweden)
Jiaming Liu
2016-01-01
Many downscaling techniques have been developed in the past few years for the projection of station-scale hydrological variables from large-scale atmospheric variables, in order to assess the hydrological impacts of climate change. To improve the simulation accuracy of downscaling methods, the Bayesian Model Averaging (BMA) method combined with three statistical downscaling methods, namely the support vector machine (SVM), BCC/RCG-Weather Generators (BCC/RCG-WG), and the Statistical Downscaling Model (SDSM), is proposed in this study, based on the statistical relationship between large-scale climate predictors and observed precipitation in the upper Hanjiang River Basin (HRB). The statistical analysis of three performance criteria (the Nash-Sutcliffe coefficient of efficiency, the coefficient of correlation, and the relative error) shows that the ensemble downscaling method based on BMA performs better for rainfall than each single statistical downscaling method. Moreover, the performance for runoff modelled by the SWAT rainfall-runoff model using the daily rainfall downscaled by the four methods is also compared, and the ensemble downscaling method again has better simulation accuracy. The ensemble downscaling technology based on BMA can provide a scientific basis for the study of runoff response to climate change.
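In its simplest form, the BMA combination step reduces to a posterior-weighted average of the member-model predictions. The sketch below is illustrative only: the predictions and weights are invented, and a real BMA would derive the weights from each model's posterior probability given the training data:

```python
import numpy as np

def bma_combine(predictions, weights):
    """Weighted average of member-model predictions (weights are normalized to sum to 1)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(predictions, dtype=float), axes=1)

# Hypothetical daily-rainfall predictions (mm) from three downscaling models
svm_pred  = [2.0, 0.0, 5.5, 1.2]
wg_pred   = [1.5, 0.3, 6.0, 0.8]
sdsm_pred = [2.5, 0.1, 4.8, 1.5]

# Illustrative weights standing in for the models' posterior probabilities
ensemble = bma_combine([svm_pred, wg_pred, sdsm_pred], [0.5, 0.3, 0.2])
print(ensemble)
```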
Finite volume evolution Galerkin (FVEG) methods for hyperbolic systems
Lukácová-Medvid'ová, Maria; Morton, K.W.; Warnecke, Gerald
2003-01-01
The subject of the paper is the derivation and analysis of new multidimensional, high-resolution, finite volume evolution Galerkin (FVEG) schemes for systems of nonlinear hyperbolic conservation laws. Our approach couples a finite volume formulation with approximate evolution operators. The latter are constructed using the bicharacteristics of the multidimensional hyperbolic system, such that all of the infinitely many directions of wave propagation are taken into account. In particular, we p...
Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T M; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid
2016-01-01
Background Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival af
Finite volume methods for submarine debris flow with Herschel-Bulkley rheology
Kim, Jihwan; Issler, Dieter
2015-04-01
Submarine landslides can pose great danger to underwater structures and generate destructive waves. The Herschel-Bulkley rheological model is known to be appropriate for describing the nonlinear viscoplastic behavior of debris flows. The numerical implementation of depth-averaged Herschel-Bulkley models such as BING has so far been limited to a 1-dimensional Lagrangian coordinate system. In this work, we develop numerical schemes with finite volume methods in Eulerian coordinates. We provide a parameter sensitivity analysis and demonstrate how common ad hoc assumptions, such as including a minimum shear layer depth, influence the modeling of the landslide dynamics. The possibility of adding hydrodynamic resistance forces, hydroplaning, and remolding into this Eulerian framework is also discussed. Finally, the possible extension to a two-dimensional operational model for coupling with operational tsunami models is discussed.
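The Herschel-Bulkley constitutive law itself is simple to state: above the yield stress tau_y, the shear stress grows as a power law in the shear rate. A minimal sketch (parameter values are illustrative, not taken from the paper):

```python
def herschel_bulkley_stress(shear_rate, tau_y, K, n):
    """Shear stress of a Herschel-Bulkley fluid: tau = tau_y + K * gamma_dot**n,
    valid where the material has yielded (tau > tau_y); below yield it behaves
    as a rigid plug."""
    return tau_y + K * shear_rate ** n

# Illustrative debris-flow parameters: yield stress 100 Pa, consistency 50 Pa*s^n,
# flow index 0.5 (shear-thinning, typical of clay-rich slurries)
tau = herschel_bulkley_stress(shear_rate=4.0, tau_y=100.0, K=50.0, n=0.5)
print(f"tau = {tau:.1f} Pa")  # 100 + 50 * 4**0.5 = 200 Pa
```

Setting n = 1 recovers the Bingham model and tau_y = 0 recovers a power-law fluid, which is why this law is a common umbrella model for viscoplastic flows.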
Institute of Scientific and Technical Information of China (English)
LI Li
2006-01-01
By analyzing the theory of oversampling and averaging, it is deduced that, when white noise accompanies the signal, each additional bit of resolution can be achieved via a fourfold increase in sampling frequency. Each additional bit increases the SNR (signal-to-noise ratio) by approximately 6 dB.
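This "one extra bit per 4x oversampling" rule can be checked numerically: averaging each group of four samples of a signal contaminated with white noise reduces the noise power by a factor of four, i.e. about 6 dB. A small simulation (the signal and noise levels are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
oversample = 4                     # 4x oversampling -> ~1 extra bit (~6 dB)

signal = 0.5                       # constant (DC) test signal
raw = signal + rng.normal(0.0, 0.1, n_samples * oversample)

# Average each group of 4 consecutive samples (decimation by averaging)
decimated = raw.reshape(n_samples, oversample).mean(axis=1)

def snr_db(x, true_value):
    """SNR of samples x relative to the known true value, in dB."""
    err = x - true_value
    return 10 * np.log10(true_value**2 / np.mean(err**2))

gain = snr_db(decimated, signal) - snr_db(raw, signal)
print(f"SNR gain from 4x oversampling: {gain:.2f} dB")
```

The measured gain comes out close to 10*log10(4) = 6.02 dB, matching the rule of thumb; the argument only holds when the noise is uncorrelated between samples.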
Methods for determining enzymatic activity comprising heating and agitation of closed volumes
Energy Technology Data Exchange (ETDEWEB)
Thompson, David Neil; Henriksen, Emily DeCrescenzo; Reed, David William; Jensen, Jill Renee
2016-03-15
Methods for determining thermophilic enzymatic activity include heating a substrate solution in a plurality of closed volumes to a predetermined reaction temperature. Without opening the closed volumes, at least one enzyme is added, substantially simultaneously, to the closed volumes. At the predetermined reaction temperature, the closed volumes are agitated and then the activity of the at least one enzyme is determined. The methods are well suited to characterizing enzymes of high-temperature reactions, with insoluble substrates, with substrates and enzymes that do not readily intermix, and with low volumes of substrate and enzyme. Systems for characterizing the enzymes are also disclosed.
Investigation of average growth stresses in Cr2O3 scales measured by a novel deflection method
Institute of Scientific and Technical Information of China (English)
钱余海; 李美栓; 刘光明; 辛丽
2002-01-01
The stress in the oxide film plays an important role in keeping it intact, so it is necessary to determine the stress in the oxide scale. Average growth stresses in Cr2O3 scales formed on a Ni-base alloy (Ni80Cr20) at 1000 °C in air were investigated by a novel deflection technique. It is found that the growth stress in the oxide scale is basically compressive, on the order of 100 MPa. The stress values are high for thin scales and become lower for thick scales after oxidation for 10 h. The planar stress distribution in the metal is complex: it is both compressive and tensile at the beginning of the oxidation procedure, and then becomes only tensile during further oxidation.
Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku
2014-03-01
In this paper, we propose an automated biliary tract extraction method for abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has been reported for the automated extraction of the biliary tract from common contrast-enhanced CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures, and its intensities in CT volumes are low. We use a dark linear structure enhancement (DLSE) filter, based on a local intensity structure analysis using the eigenvalues of the Hessian matrix, for IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes; the average Dice coefficient of the extraction results was 66.7%.
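The Dice coefficient used to score the extraction is the standard overlap measure between two binary masks. A toy sketch (the voxel index sets are invented for illustration):

```python
def dice_coefficient(a, b):
    """Dice overlap between two binary masks given as sets of voxel indices:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

# Toy voxel-index masks standing in for extracted vs. ground-truth bile ducts
extracted = {(1, 1), (1, 2), (2, 2), (3, 3)}
reference = {(1, 1), (2, 2), (3, 3), (4, 4), (4, 5)}
print(f"Dice = {dice_coefficient(extracted, reference):.3f}")  # 2*3/(4+5) = 0.667
```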
SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition
Energy Technology Data Exchange (ETDEWEB)
Supanich, MP [Rush University Medical Center, Chicago, IL (United States)
2015-06-15
Purpose: The planar average dose in a C-arm cone beam CT (CBCT) acquisition has in the past been estimated by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3 peripheral and 1/3 central CTDIw weighting (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16-cm CTDI phantoms using a 0.6-cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and the film dose values at the location of each of the dose bores. Results: The planar average doses obtained with the red and the green pixel color calibration curves were both within 10% of the planar average dose estimated using the Dw method applied to film dose values at the bore locations. Additionally, the average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a C-arm CBCT non-360° rotation by computing Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.
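The Dw weighting described above is simple arithmetic: two-thirds of the mean peripheral reading plus one-third of the central reading. A sketch with hypothetical dose-bore values:

```python
def weighted_planar_dose(peripheral_mgy, central_mgy):
    """CTDIw-style weighting: 2/3 of the mean peripheral dose plus 1/3 of the
    central dose, both in mGy."""
    p_avg = sum(peripheral_mgy) / len(peripheral_mgy)
    return (2.0 / 3.0) * p_avg + (1.0 / 3.0) * central_mgy

# Hypothetical readings from the 4 peripheral bores and the central bore (mGy)
dw = weighted_planar_dose([10.0, 11.0, 9.0, 10.0], 8.0)
print(f"Dw = {dw:.2f} mGy")  # 2/3 * 10.0 + 1/3 * 8.0 = 9.33 mGy
```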
Well-balanced finite volume evolution Galerkin methods for the shallow water equations
Medvidová, Maria Lukáčová -; Noelle, Sebastian; Kraft, Marcus
2015-01-01
We present a new well-balanced finite volume method within the framework of the finite volume evolution Galerkin (FVEG) schemes. The methodology will be illustrated for the shallow water equations with source terms modelling the bottom topography and Coriolis forces. Results can be generalized to more complex systems of balance laws. The FVEG methods couple a finite volume formulation with approximate evolution operators. The latter are constructed using the bicharacteristics of multidimensio...
Well-balanced finite volume evolution Galerkin methods for the shallow water equations
Lukácová-Medvid'ová, Maria; Kraft, Marcus
2005-01-01
We present a new well-balanced finite volume method within the framework of the finite volume evolution Galerkin (FVEG) schemes. The methodology will be illustrated for the shallow water equations with source terms modelling the bottom topography and Coriolis forces. Results can be generalized to more complex systems of balance laws. The FVEG methods couple a finite volume formulation with approximate evolution operators. The latter are constructed using the bicharacteristics of the multidime...
Gongadze, Ekaterina; Iglič, Aleš
2013-03-01
Water ordering near a negatively charged electrode is one of the decisive factors determining the interactions of an electrode with the surrounding electrolyte solution or tissue. In this work, the generalized Langevin-Bikerman model (Gongadze-Iglič model), taking into account the cavity field and the excluded volume principle, is used to calculate the space dependency of ion and water number densities in the vicinity of a highly charged surface. It is shown that for high enough surface charge densities the usual trend of increasing counterion number density towards the charged surface may be completely reversed, i.e. a drop in the counterion number density near the charged surface is predicted.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Energy Technology Data Exchange (ETDEWEB)
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
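The core averaging operations of DMA, running time averaging followed by volume (block) averaging onto a coarser grid, can be sketched as follows; the 1-D field, snapshot count, and coarsening factor of 4 are illustrative stand-ins for the DNS stage, and no coupling correlations are computed here:

```python
import numpy as np

def block_average(field, factor):
    """Volume-average a fine-grid 1-D field onto a grid coarsened by `factor`
    (each coarse cell is the mean of `factor` consecutive fine cells)."""
    return field.reshape(-1, factor).mean(axis=1)

rng = np.random.default_rng(7)
base = np.linspace(0.0, 1.0, 64)                 # 64-cell fine grid

# Time averaging over "snapshots", standing in for the running average
# accumulated during the short DNS stage
snapshots = [base + 0.05 * rng.normal(size=64) for _ in range(200)]
time_avg = np.mean(snapshots, axis=0)

# Then volume-average the time-averaged field onto a 4x coarser mesh
coarse = block_average(time_avg, 4)
print(coarse.shape)  # (16,)
```

Block averaging preserves the global mean exactly, which is the property that lets the coarse-scale computation inherit a consistent field from the fine scale.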
Cover, Keith S
2008-01-01
While the multiexponential nature of T2 decays measured in vivo is well known, characterizing T2 decays by a single time constant is still very useful when differentiating among structures and pathologies in MRI images. A novel, robust, fast and very simple method is presented for both estimating and displaying the average time constant for the T2 decay of each pixel from a multiecho MRI sequence. The average time constant is calculated from the average of the values measured from the T2 decay over many echoes. For a monoexponential decay, the normalized decay average varies monotonically with the time constant. Therefore, it is simple to map any normalized decay average to an average time constant. This method takes advantage of the robustness of the normalized decay average to both artifacts and multiexponential decays. Color intensity projections (CIPs) were used to display 32 echoes acquired at a 10ms spacing as a single color image. The brightness of each pixel in each color image was determined by the i...
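The mapping from a normalized decay average to an average time constant can be inverted numerically with a monoexponential lookup table. The sketch below assumes 32 echoes at 10 ms spacing, as in the abstract, but the grid and inversion details are illustrative, not the author's implementation:

```python
import numpy as np

echo_times = np.arange(1, 33) * 10.0   # 32 echoes at 10 ms spacing (ms)

def normalized_decay_average(signal):
    """Average of the decay values normalized to the first echo."""
    return np.mean(signal / signal[0])

def average_time_constant(signal):
    """Map a normalized decay average to a T2 (ms) via a monoexponential
    lookup table; the table is monotonic in T2, so interpolation inverts it."""
    t2_grid = np.linspace(5.0, 500.0, 2000)
    table = np.array([normalized_decay_average(np.exp(-echo_times / t2))
                      for t2 in t2_grid])          # increasing with T2
    target = normalized_decay_average(signal)
    return np.interp(target, table, t2_grid)

# Sanity check: recover T2 from a clean monoexponential decay
true_t2 = 80.0
recovered = average_time_constant(np.exp(-echo_times / true_t2))
print(f"recovered T2 = {recovered:.1f} ms")
```

For multiexponential decays the same mapping still returns a single robust "average" time constant, which is exactly the point of the method.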
Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio
2017-03-24
Respiratory assessment can be carried out using motion capture systems. A geometrical model is needed to compute the breathing volume as a function of time from the marker trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e. the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on tetrahedron decomposition of the chest wall, integrated in a commercial motion capture system. Eight healthy volunteers were enrolled, and 30 seconds of quiet breathing were collected from each of them. Results show a better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = 0.94) than between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
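The volume of each straight prism is just base area times height, so the chest volume is a sum of such terms over the marker cells. A toy sketch for a single cell (the coordinates are invented, and the paper's prisms are defined from 3-D marker positions rather than this idealized flat cell):

```python
def polygon_area(pts):
    """Shoelace area of a planar polygon given as (x, y) vertices in order."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def prism_volume(base_pts, height):
    """Volume of a straight prism: base polygon area times height."""
    return polygon_area(base_pts) * height

# Toy chest-wall patch: one quadrilateral marker cell (cm), 5 cm deep
cell = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
print(prism_volume(cell, 5.0))  # 4 * 3 * 5 = 60.0
```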
Supplier Portfolio Selection and Optimum Volume Allocation: A Knowledge Based Method
Aziz, Romana; Hillegersberg, van Jos
2010-01-01
Selection of suppliers and allocation of optimum volumes to suppliers is a strategic business decision. This paper presents a decision support method for supplier selection and the optimal allocation of volumes in a supplier portfolio. The requirements for the method were gathered during a case stud
Hughes, Stephen W.
2005-01-01
A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.
Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction; however, 3D reconstruction comes at a high computational cost, and some volume measurement methods based on it have low accuracy. Another approach measures the volume of objects with the Monte Carlo method, which performs volume measurement using random points: it only requires information on whether random points fall inside or outside an object, and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
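The Monte Carlo estimate itself needs only an inside/outside test and a bounding box: the object volume is the box volume times the fraction of random points that land inside. The sketch below uses an analytic unit sphere as a stand-in for the inside/outside test that the paper derives from multi-camera binary images:

```python
import random
import math

def monte_carlo_volume(inside, bounds, n=200_000, seed=1):
    """Estimate the volume of a region via uniform random points in a
    bounding box; `inside(p)` answers the inside/outside question."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box_volume = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        1 for _ in range(n)
        if inside((rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1)))
    )
    return box_volume * hits / n

# Unit sphere as the test region: exact volume is 4/3 * pi ~ 4.189
sphere = lambda p: p[0]**2 + p[1]**2 + p[2]**2 <= 1.0
est = monte_carlo_volume(sphere, ((-1, 1), (-1, 1), (-1, 1)))
print(f"estimated {est:.3f} vs exact {4/3 * math.pi:.3f}")
```

The statistical error shrinks as 1/sqrt(n), which is why the paper's heuristic adjustment of the sampling matters for accuracy at a fixed point budget.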
Chen, Feier; Tian, Kang; Ding, Xiaoxu; Miao, Yuqi; Lu, Chunxia
2016-11-01
Analysis of freight rate volatility characteristics has attracted more attention since 2008 owing to the effects of the credit crunch and the slowdown in marine transportation. The multifractal detrended fluctuation analysis technique is employed to analyze the time series of the Baltic Dry Bulk Freight Rate Index and the market trend of two bulk ship sizes, namely Capesize and Panamax, for the period March 1, 1999 to February 26, 2015. In this paper, the degree of multifractality with different fluctuation sizes is calculated. In addition, a multifractal detrending moving average (MF-DMA) counting technique is developed to quantify the components of the multifractal spectrum with the finite-size effect taken into consideration. Numerical results show that both the Capesize and the Panamax freight rate index time series are of multifractal nature. The origin of multifractality for the bulk freight rate market series is found to be mostly due to nonlinear correlation.
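The q = 2 backbone of such multifractal analyses is ordinary detrended fluctuation analysis: integrate the series, detrend it segment by segment, and fit the scaling of the residual fluctuations. A minimal order-1 DFA sketch (the scales and series length are arbitrary, and a full MF-DFA or MF-DMA would repeat this over a range of moments q and use moving-average detrending):

```python
import numpy as np

def dfa(x, scales):
    """Order-1 detrended fluctuation analysis: returns F(s) for each scale s."""
    profile = np.cumsum(x - np.mean(x))          # integrate the mean-removed series
    fluct = []
    for s in scales:
        n_seg = len(profile) // s
        ms_res = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)       # local linear trend
            ms_res.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluct.append(np.sqrt(np.mean(ms_res)))
    return np.array(fluct)

rng = np.random.default_rng(42)
series = rng.normal(size=20_000)                 # uncorrelated "returns"
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(series, scales)
h = np.polyfit(np.log(scales), np.log(F), 1)[0]  # scaling exponent from F(s) ~ s^h
print(f"estimated scaling exponent h = {h:.2f}")  # ~0.5 for white noise
```

A scaling exponent that varies with q is the signature of multifractality that the paper quantifies for the freight rate series.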
Energy Technology Data Exchange (ETDEWEB)
Delcey, Mickaël G. [Department of Chemistry – Ångström, The Theoretical Chemistry Programme, Uppsala University, P.O. Box 518, 751 20 Uppsala (Sweden); Pedersen, Thomas Bondo [Centre for Theoretical and Computational Chemistry, Department of Chemistry, University of Oslo, P.O. Box 1033 Blindern, 0315 Oslo (Norway); Aquilante, Francesco [Department of Chemistry – Ångström, The Theoretical Chemistry Programme, Uppsala University, P.O. Box 518, 751 20 Uppsala (Sweden); Dipartimento di chimica “G. Ciamician,” Università di Bologna, V. F. Selmi 2, 40126 Bologna (Italy); Lindh, Roland, E-mail: roland.lindh@kemi.uu.se [Department of Chemistry – Ångström, The Theoretical Chemistry Programme, Uppsala University, P.O. Box 518, 751 20 Uppsala (Sweden); Uppsala Center for Computational Chemistry - UC_3, Uppsala University, P.O. Box 518, 751 20 Uppsala (Sweden)
2015-07-28
An efficient implementation of the state-averaged complete active space self-consistent field (SA-CASSCF) gradients employing density fitting (DF) is presented. The DF allows a reduction both in scaling and prefactors of the different steps involved. The performance of the algorithm is demonstrated on a set of molecules ranging up to an iron-Heme b complex which with its 79 atoms and 811 basis functions is to our knowledge the largest SA-CASSCF gradient computed. For smaller systems where the conventional code could still be used as a reference, both the linear response calculation and the gradient formation showed a clear timing reduction and the overall cost of a geometry optimization is typically reduced by more than one order of magnitude while the accuracy loss is negligible.
Energy Technology Data Exchange (ETDEWEB)
Tawfik, Ahmed M., E-mail: ahm_m_tawfik@hotmail.com [Institut für Diagnostische und Interventionelle Radiologie, Klinikum der J.W.v. Goethe Universität Frankfurt am Main, Theodor-Stern-Kai 7 Frankfurt am Main 60590 (Germany); Diagnostic Radiology Department, Mansoura Faculty of Medicine, 62 Elgomhorya Street, Mansoura 35512 (Egypt); Nour-Eldin, Nour-Eldin A.; Naguib, Nagy N. [Institut für Diagnostische und Interventionelle Radiologie, Klinikum der J.W.v. Goethe Universität Frankfurt am Main, Theodor-Stern-Kai 7 Frankfurt am Main 60590 (Germany); Razek, Ahmed Abdel [Diagnostic Radiology Department, Mansoura Faculty of Medicine, 62 Elgomhorya Street, Mansoura 35512 (Egypt); Denewer, Adel T. [Surgical Oncology Department, Mansoura Oncology Centre, Mansoura Faculty of medicine (Egypt); Bisdas, Sotirios [Department of Neuroradiology, Eberhard Karls University, Tübingen (Germany); Vogl, Thomas J. [Institut für Diagnostische und Interventionelle Radiologie, Klinikum der J.W.v. Goethe Universität Frankfurt am Main, Theodor-Stern-Kai 7 Frankfurt am Main 60590 (Germany)
2012-10-15
Purpose: To evaluate the agreement between quantitative CT perfusion measurements of head and neck squamous cell carcinoma (SCC) obtained from the single section with maximal tumor dimension and from average values of multiple sections, and to compare the intra- and inter-observer agreement of the two methods. Methods: Perfusion was measured for 28 SCC cases using a region of interest (ROI) inserted in the single dynamic CT section showing maximal tumor dimension, then using average values of multiple ROIs inserted in all tumor-containing sections. Agreement between values of blood flow (BF), blood volume (BV), mean transit time (MTT) and permeability surface area product (PS) calculated by the two methods was assessed. Intra-observer agreement was assessed by comparing repeated calculations done by the same radiologist using both methods after a 2-month blinding period. Perfusion measurements were also done independently by another radiologist to assess the inter-observer agreement of both methods. Results: No significant differences were observed between the means of the 4 perfusion parameters calculated by both methods, all p values > 0.05. The 95% limits of agreement between the two methods were (−33.9 to 43) ml/min/100 g for BF, (−2.5 to 2.8) ml/100 g for BV, (−4.9 to 3.9) s for MTT and (−17.5 to 18.6) ml/min/100 g for PS. Narrower limits of agreement were obtained using the average of multiple sections than with the single section, denoting improved intra- and inter-observer agreement. Conclusion: Agreement between the two methods is acceptable. Taking the average of multiple sections slightly improves intra- and inter-observer agreement.
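The 95% limits of agreement quoted above follow the standard Bland-Altman construction: bias ± 1.96 times the standard deviation of the paired differences. A minimal sketch with hypothetical paired readings (the values below are made up, not the study data):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement: bias +/- 1.96 * SD of paired differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                  # sample standard deviation
    return bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired blood-flow readings (ml/min/100 g) from the two ROI strategies
single_section = np.array([85.0, 92.1, 70.4, 110.3, 66.8])
multi_section = np.array([80.2, 95.0, 72.9, 104.1, 69.5])
lo, hi = limits_of_agreement(single_section, multi_section)
```

Narrower limits, as reported for the multi-section averages, mean the two raters (or repeated readings) disagree less.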
On Third-Order Limiter Functions for Finite Volume Methods
Schmidtmann, Birte; Torrilhon, Manuel
2014-01-01
In this article, we propose a finite volume limiter function for a reconstruction on the three-point stencil. Compared to classical limiter functions in the MUSCL framework, which yield second-order accuracy, the new limiter is third-order accurate for smooth solutions. In an earlier work, such a third-order limiter function was proposed and showed successful results [2]. However, it came with unspecified parameters. We close this gap by giving information on these parameters.
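The third-order limiter itself depends on the parameters specified in the article; for context, a classical second-order MUSCL reconstruction with the minmod limiter on the same three-point stencil looks like this (illustrative sketch only):

```python
def minmod(a, b):
    """Classical slope limiter: zero at extrema, otherwise the smaller one-sided slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_face_value(u_left, u_center, u_right):
    """Second-order limited reconstruction at the right face of the center cell."""
    slope = minmod(u_center - u_left, u_right - u_center)
    return u_center + 0.5 * slope

smooth = muscl_face_value(1.0, 2.0, 3.0)        # linear data: full slope is kept
extremum = muscl_face_value(0.0, 1.0, 0.0)      # local maximum: slope clipped to zero
```

A third-order limiter replaces the piecewise-linear slope with a reconstruction that recovers parabolic profiles in smooth regions while still suppressing oscillations at discontinuities.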
Generalized Navier Boundary Condition for a Volume Of Fluid approach using a Finite-Volume method
Boelens, A M P
2016-01-01
In this work, an analytical Volume Of Fluid (VOF) implementation of the Generalized Navier Boundary Condition is presented based on the Brackbill surface tension model. The model is validated by simulations of droplets on a smooth surface in a planar geometry. Looking at the static behavior of the droplets, it is found that there is a good match between the droplet shape resolved in the simulations and the theoretically predicted shape for various values of the Young's angle. Evaluating the spreading of a droplet on a completely wetting surface, the Voinov-Tanner-Cox law ($\theta \propto \mathrm{Ca}^{1/3}$) can be observed. At later times the scaling follows $r \propto t^{1/2}$, suggesting that spreading is limited by inertia. These observations are made without any fitting parameters except the slip length.
Adjoint complement to viscous finite-volume pressure-correction methods
Stück, Arthur; Rung, Thomas
2013-09-01
A hybrid-adjoint Navier-Stokes method for the pressure-based computation of hydrodynamic objective functional derivatives with respect to the shape is systematically derived in three steps: The underlying adjoint partial differential equations and boundary conditions for the frozen-turbulence Reynolds-averaged Navier-Stokes equations are considered in the first step. In step two, the adjoint discretisation is developed from the primal, unstructured finite-volume discretisation, such that adjoint-consistent approximations to the adjoint partial differential equations are obtained following a so-called hybrid-adjoint approach. A unified, discrete boundary description is outlined that supports high- and low-Reynolds number turbulent wall-boundary treatments for both the adjoint boundary condition and the boundary-based gradient formula. The third step, the focus of the development of the industrial adjoint CFD method, is the adjoint counterpart to the primal pressure-correction algorithm. The approach is verified against the direct-differentiation method, and an application to internal flow problems is presented.
Liu, Xinhong; Gao, Yan; Wang, Honglian; Guo, Junyao; Yan, Shaohua
2015-01-01
The emission of N2 is important for removing excess N from lakes, ponds, and wetlands. To investigate gas emission from water, Gao et al. (2013) developed a new method using a bubble trap device to collect gas samples from waters. However, the determination accuracy of the sampling volume and gas component concentrations was still debatable. In this study, the method was optimized for in situ sampling, accurate volume measurement and direct injection into a gas chromatograph for the analysis of N2 and other gases. With the optimized new method, the recovery rate for N2 was 100.28% on average, the mean coefficient of determination (R(2)) was 0.9997, and the limit of detection was 0.02%. We further assessed the effects of sample storage on the new method (bottle full of water) versus the vacuum bag and vacuum vial methods, following variations in N2 concentration over storage times of 1, 2, 3, 5, and 7 days at a constant temperature of 15°C, using the relative peak area (%) of each method referenced to its value at day 0. The indices of the bottle-full-of-water method were the lowest (99.5%-108.5%) compared to those of the vacuum bag and vacuum vial methods (119%-217%). Meanwhile, the gas chromatograph determination of other gas components (O2, CH4, and N2O) was also accurate. The new method is an alternative way to investigate N2 released from various kinds of aquatic ecosystems.
Institute of Scientific and Technical Information of China (English)
Chen Long; Cai Lixun; Yao Di
2013-01-01
By introducing a fatigue blunting factor, the cyclic elasto-plastic Hutchinson-Rice-Rosengren (HRR) field near the crack tip under cyclic loading is modified. An average damage per loading cycle in the cyclic plastic deformation region is then defined according to the Manson-Coffin law. Next, following the linear damage accumulation theory (Miner's law), a new model for predicting the fatigue crack growth (FCG) of an opening-mode crack based on low cycle fatigue (LCF) damage is set up. The step length of crack propagation is assumed to be the size of the cyclic plastic zone. Every parameter of the new model has a clear physical meaning and requires no manual tuning. Based on the LCF test data, the FCG predictions given by the new model are consistent with the FCG test results for Cr2Ni2MoV and X12CrMoWVNbN 10-1-1. Moreover, with reference to related studies, the good predictive ability of the new model is also demonstrated for six kinds of materials.
Protection Parameters against the Cracks by the Method of Volume Compensation Dam
Directory of Open Access Journals (Sweden)
Bulatov Georgiy
2016-01-01
Full Text Available This article estimates the parameters for protecting a dam against cracking by means of the volume compensation method. The method allows calculating the safety parameters governing crack formation in the dam. Graphs of the horizontal elongation strains of the calculated surface along the length of the structure and over time are presented. A diagram of the horizontal stress distribution in the ground around the pile is shown in plan and in section. All formulas necessary for the volume compensation method are given.
Damanik, David
2008-01-01
We develop further the approach to upper and lower bounds in quantum dynamics via complex analysis methods which was introduced by us in a sequence of earlier papers. Here we derive upper bounds for non-time averaged outside probabilities and moments of the position operator from lower bounds for transfer matrices at complex energies. Moreover, for the time-averaged transport exponents, we present improved lower bounds in the special case of the Fibonacci Hamiltonian. These bounds lead to an optimal description of the time-averaged spreading rate of the fast part of the wavepacket in the large coupling limit. This provides the first example which demonstrates that the time-averaged spreading rates may exceed the upper box-counting dimension of the spectrum.
Patouillard, Edith; Kleinschmidt, Immo; Hanson, Kara; Pok, Sochea; Palafox, Benjamin; Tougher, Sarah; O’Connell, Kate; Goodman, Catherine
2013-01-01
Background There is increased interest in using commercial providers for improving access to quality malaria treatment. Understanding their current role is an essential first step, notably in terms of the volume of diagnostics and anti-malarials they sell. Sales volume data can be used to measure the importance of different provider and product types, frequency of parasitological diagnosis and impact of interventions. Several methods for measuring sales volumes are available, yet all have met...
High-speed volume measurement system and method
Lane, Michael H.; Doyle, Jr., James L.; Brinkman, Michael J.
2015-11-24
Disclosed is a volume sensor having first, second, and third laser sources emitting first, second, and third laser beams; first, second, and third beam splitters splitting the first, second, and third laser beams into first, second, and third beam pairs; first, second, and third optical assemblies expanding the first, second, and third beam pairs into first, second, and third pairs of parallel beam sheets; fourth, fifth, and sixth optical assemblies focusing the first, second, and third beam sheet pairs into fourth, fifth, and sixth beam pairs; and first, second, and third detector pairs receiving the fourth, fifth, and sixth beam pairs and converting a change in intensity of at least one of the beam pairs resulting from an object passing through at least one of the first, second, and third parallel beam sheets into at least one electrical signal proportional to a three-dimensional representation of the object.
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
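A common concrete realization of the contrast drawn above is the eigenvector-based quaternion average, which respects the sign ambiguity q ~ −q that the naive barycenter ignores. A sketch (this is a standard construction, not necessarily the article's algorithm):

```python
import numpy as np

def average_quaternion(quats):
    """Rotation average via the dominant eigenvector of the sum of outer products.
    Unlike the naive barycenter, this is invariant under flipping the sign of any q."""
    M = sum(np.outer(q, q) for q in np.asarray(quats, float))
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]                     # unit eigenvector of the largest eigenvalue

# the same rotation written with opposite signs: the barycenter would cancel to zero,
# while the eigenvector method recovers the rotation (up to overall sign)
q = np.array([0.0, 0.0, np.sin(0.3), np.cos(0.3)])
avg = average_quaternion([q, -q])
```

Because q and −q represent the same rotation, any sensible mean must treat them identically; the outer product q qᵀ does exactly that.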
Control for the Three-Phase Four-Wire Four-Leg APF Based on SVPWM and Average Current Method
Directory of Open Access Journals (Sweden)
Xiangshun Li
2015-01-01
Full Text Available A novel control method is proposed for the three-phase four-wire four-leg active power filter (APF) to realize accurate and real-time compensation of power system harmonics, combining space vector pulse width modulation (SVPWM) with a triangle modulation strategy. First, the basic principle of the APF is briefly described. Then the harmonic and reactive currents are derived using the instantaneous reactive power theory. Finally, simulations and experiments are carried out to verify the validity and effectiveness of the proposed method. The simulation results show that the response time for compensation is about 0.025 s and that the total harmonic distortion (THD) of the source current of phase A is reduced from 33.38% before compensation to 3.05% with the APF.
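The harmonic/reactive current derivation via the instantaneous reactive power (p-q) theory rests on the Clarke transform; a minimal sketch with hypothetical balanced waveforms (the amplitudes and frequency below are arbitrary, not the paper's test system):

```python
import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke transform of three-phase quantities to the alpha-beta frame."""
    alpha = np.sqrt(2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = np.sqrt(2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

def instantaneous_pq(va, vb, vc, ia, ib, ic):
    """Instantaneous real power p and imaginary power q (Akagi's p-q theory)."""
    v_alpha, v_beta = clarke(va, vb, vc)
    i_alpha, i_beta = clarke(ia, ib, ic)
    return (v_alpha * i_alpha + v_beta * i_beta,
            v_beta * i_alpha - v_alpha * i_beta)

# hypothetical balanced, in-phase 50 Hz waveforms
t = np.linspace(0.0, 0.04, 200)
w, V, I = 2 * np.pi * 50.0, 311.0, 10.0
va, vb, vc = (V * np.cos(w * t), V * np.cos(w * t - 2 * np.pi / 3),
              V * np.cos(w * t + 2 * np.pi / 3))
ia, ib, ic = va * (I / V), vb * (I / V), vc * (I / V)
p, q = instantaneous_pq(va, vb, vc, ia, ib, ic)
# a purely fundamental balanced load gives constant p = 1.5*V*I and q = 0;
# harmonics appear as oscillations in p and q, which the APF is driven to cancel
```

The oscillatory parts of p and q, isolated by filtering, yield the harmonic and reactive current references fed to the SVPWM stage.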
Energy Technology Data Exchange (ETDEWEB)
Tomimoto, Shigehiro; Nakatani, Satoshi; Tanaka, Norio; Uematsu, Masaaki; Beppu, Shintaro; Nagata, Seiki; Hamada, Seiki; Takamiya, Makoto; Miyatake, Kunio [National Cardiovascular Center, Suita, Osaka (Japan)
1995-01-01
Acoustic quantification (AQ: the real-time automated boundary detection system) allows instantaneous measurement of cardiac chamber volumes. The feasibility of this method was evaluated by comparing the left ventricular (LV) volumes obtained with AQ to those derived from ultrafast computed tomography (UFCT), which enables accurate measurement of LV volumes even in the presence of LV asynergy, in 23 patients (8 with ischemic heart disease, 5 with cardiomyopathy, 3 with valvular heart disease). Both LV end-diastolic and end-systolic volumes obtained with the AQ method were in good agreement with those obtained with UFCT (y = 1.04x − 16.9, r = 0.95; y = 0.87x + 15.7, r = 0.91; respectively). AQ was reliable even in the presence of LV asynergy. Interobserver variability for the AQ measurement was 10.2%. AQ provides a new, clinically useful method for real-time accurate estimation of the left ventricular volume. (author).
Experimental Validation of Volume of Fluid Method for a Sluice Gate Flow
Directory of Open Access Journals (Sweden)
A. A. Oner
2012-01-01
Full Text Available Laboratory experiments are conducted for 2D turbulent free surface flow which interacts with a vertical sluice gate. The velocity field on the centerline of the channel flow upstream of the gate is measured using the particle image velocimetry technique. The numerical simulation of the same flow is carried out by solving the governing equations, the Reynolds-averaged continuity and Navier-Stokes equations, using the finite element method. In the numerical solution of the governing equations, the standard k-ε turbulence closure model is used to define the turbulent viscosity. The measured horizontal velocity distribution at the inflow boundary of the solution domain is taken as the boundary condition. The volume of fluid (VOF) method is used to determine the flow profile in the channel. Taking into account the flow characteristics, the computational domain is divided into five subdomains, each having a different mesh density. Three different meshes with five subdomains are employed for the numerical model. A grid convergence analysis indicates that the discretization error in the predicted velocities on the fine mesh remains within 2%. The computational results are compared with the experimental data, and the most suitable of the three meshes for predicting the velocity field and the flow profile is selected.
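Grid convergence analyses of the kind mentioned are commonly based on Richardson extrapolation; a sketch with hypothetical solution values on three grids (refinement ratio 2 and a second-order scheme are assumed, not taken from the paper):

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three solutions on grids refined by ratio r."""
    return np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

def gci_fine(f_medium, f_fine, r, p, safety=1.25):
    """Grid convergence index: a banded relative estimate of discretization error."""
    eps = abs((f_fine - f_medium) / f_fine)
    return safety * eps / (r ** p - 1.0)

# hypothetical integral quantities from coarse/medium/fine grids of a 2nd-order scheme
p_obs = observed_order(1.016, 1.004, 1.001, r=2.0)
gci = gci_fine(1.004, 1.001, r=2.0, p=p_obs)
```

Here the observed order recovers 2 and the fine-grid error estimate is well below the 2% band reported in the abstract.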
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
In this paper, we study the semi-discrete mortar upwind finite volume element method with the Crouzeix-Raviart element for parabolic convection-diffusion problems. It is proved that the semi-discrete mortar upwind finite volume element approximations derived are convergent in the H1- and L2-norms.
Critical length sampling: a method to estimate the volume of downed coarse woody debris
Göran Ståhl; Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey
2010-01-01
In this paper, critical length sampling for estimating the volume of downed coarse woody debris is presented. Using this method, the volume of downed wood in a stand can be estimated by summing the critical lengths of down logs included in a sample obtained using a relascope or wedge prism; typically, the instrument should be tilted 90° from its usual...
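Assuming the estimator takes the commonly cited form, per-unit-area volume as the gauge's basal-area factor times the sum of critical lengths, a sketch would be as follows (both the form of the estimator as used here and the numbers are illustrative assumptions; the exact estimator and gauge calibration follow the paper):

```python
def cls_volume_per_hectare(critical_lengths_m, basal_area_factor):
    """Sketch of a critical length sampling estimate: per-hectare volume (m^3/ha)
    as the basal-area factor (m^2/ha) times the sum of critical lengths (m).
    Hypothetical form and values for illustration only."""
    return basal_area_factor * sum(critical_lengths_m)

# three sampled logs with critical lengths 2.5 m, 4.0 m, 1.5 m; gauge factor 2 m^2/ha
volume = cls_volume_per_hectare([2.5, 4.0, 1.5], basal_area_factor=2.0)
```

The appeal of the method is exactly this simplicity: only the critical length of each log selected by the angle gauge needs to be measured.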
Variation in Measurements of Transtibial Stump Model Volume: A Comparison of Five Methods
Bolt, A.; de Boer-Wilzing, V. G.; Geertzen, J. H. B.; Emmelot, C. H.; Baars, E. C. T.; Dijkstra, P. U.
2010-01-01
Objective: To determine the right moment for fitting the first prosthesis, it is necessary to know when the volume of the stump has stabilized. The aim of this study is to analyze variation in measurements of transtibial stump model volumes using the water immersion method, the Design TT system, the
Lobmaier, Silvia M; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A; Shaw, Caroline J; Müller, Alexander; Ortiz, Javier U; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C; Schneider, Karl T M
2016-11-01
Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals such as those obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival after myocardial infarction in adult cardiology. Application of this method in fetal medicine has been shown to identify growth-restricted fetuses significantly better than short-term variation from computerized cardiotocography. The aim of this study was to determine the longitudinal progression of phase-rectified signal averaging indices in severely growth-restricted human fetuses and the prognostic accuracy of the technique in relation to perinatal and neurologic outcome. Raw data from cardiotocography monitoring of 279 human fetuses were obtained from 8 centers that took part in the multicenter European "TRUFFLE" trial on the optimal timing of delivery in fetal growth restriction. Average acceleration and deceleration capacities were calculated by phase-rectified signal averaging to establish their progression from 5 days to 1 day before delivery, and were compared with the progression of short-term variation. The receiver operating characteristic curves of average acceleration and deceleration capacities and of short-term variation were calculated and compared between techniques for short- and intermediate-term outcome. Average acceleration and deceleration capacities and short-term variation showed a progressive decrease in their diagnostic indices of fetal health from the first examination 5 days before delivery to 1 day before delivery. However, this decrease was significant 3 days before delivery for average acceleration and deceleration capacities, but only 2 days before delivery for short-term variation. Compared with analysis of changes in short-term variation, analysis of (delta) average acceleration and deceleration capacities better predicted values of Apgar scores <7 and antenatal
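The acceleration-capacity computation of phase-rectified signal averaging can be sketched as follows (anchor rule and window half-width simplified; this is a generic PRSA illustration, not the TRUFFLE implementation):

```python
import numpy as np

def prsa_acceleration_capacity(x, L=2):
    """Phase-rectified signal averaging, simplified: anchor on increases, average
    short windows around each anchor, then form the acceleration capacity (AC)."""
    x = np.asarray(x, float)
    anchors = [i for i in range(L, len(x) - L) if x[i] > x[i - 1]]
    window = np.mean([x[i - L:i + L] for i in anchors], axis=0)  # samples X(-2)..X(1)
    # Haar-like difference: AC = [X(0) + X(1) - X(-1) - X(-2)] / 4
    return (window[L] + window[L + 1] - window[L - 1] - window[L - 2]) / 4.0

rng = np.random.default_rng(1)
ac = prsa_acceleration_capacity(rng.standard_normal(2000))
# anchoring on increases makes AC positive even for pure noise; in fetal heart rate,
# a shrinking AC reflects blunted autonomic acceleration responses
```

Deceleration capacity is computed identically with the anchor rule reversed (decreases instead of increases); the phase rectification is what lets the quasi-periodic response survive averaging in a nonstationary signal.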
Urban Run-off Volumes Dependency on Rainfall Measurement Method
DEFF Research Database (Denmark)
Pedersen, L.; Jensen, N. E.; Rasmussen, Michael R.;
2005-01-01
Urban run-off is characterized by fast response, since the large surface run-off in the catchments responds immediately to variations in the rainfall. Modeling such catchments is most often done with input from very few rain gauges, but the large variation in rainfall over small areas suggests that rainfall needs to be measured with a much higher spatial resolution (Jensen and Pedersen, 2004). This paper evaluates the impact of using high-resolution rainfall information from weather radar compared to the conventional single-gauge approach. The radar rainfall in three different resolutions and single-gauge rainfall were fed to a MOUSE run-off model. The flow and total volume over the event are evaluated.
Development of production methods for volume sources using hardening resinous solutions
Motoki, R
2002-01-01
Volume sources are used as standard sources for radioactivity measurement with a Ge semiconductor detector of environmental samples, e.g., water and soil, that require a large volume. The commercial volume source used for measuring water samples is made of agar-agar, and that used for soil samples is made of alumina powder. When the plastic receptacles of these two kinds of volume sources are damaged, the leaking contents cause contamination. Moreover, if the hermetic sealing of a volume source made of agar-agar fails, the volume decrease due to evaporation of moisture introduces an error into the radioactivity measurement. Therefore, we developed two preparation methods using unsaturated polyester resin, vinyl ester resin, their hardening agents, and acrylic resin. The first type disperses the hydrochloric acid solution containing the radioisotopes uniformly in each resin before hardening. The second disperses the alumina powder that has absorbed the radioisotopes in each resin an...
Granström, Sara; Pipper, Christian Bressen; Møgelvang, Rasmus; Sogaard, Peter; Willesen, Jakob Lundgren; Koch, Jørgen
2012-12-01
The aims of this study were to compare the effect of sample volume (SV) size settings and sampling method on measurement variability and on peak systolic (s'), early diastolic (e') and late diastolic (a') longitudinal myocardial velocities using color tissue Doppler imaging (cTDI) in cats. The study comprised 20 cats with normal echocardiograms and 20 cats with hypertrophic cardiomyopathy. We quantified and compared the empirical variance and average absolute values of s', e' and a' for three cardiac cycles using eight different SV settings (length 1, 2, 3 and 5 mm; width 1 and 2 mm) and three methods of sampling (end-diastolic sampling with manual tracking of the SV, end-systolic sampling without tracking, and random-frame sampling without tracking). No significant difference in empirical variance could be demonstrated between most of the tested SVs. However, the two settings with a length of 1 mm resulted in a significantly higher variance compared with all settings where the SV length exceeded 2 mm. There was also a significant effect of sampling method on the variability of measurements (p = 0.003), and manual tracking obtained the lowest variance. No difference in average values of s', e' or a' could be found between any of the SV settings or sampling methods. Within the tested range of SV settings, an SV length of 1 mm resulted in higher measurement variability compared with an SV length of 3 and 5 mm, and should therefore be avoided. Manual tracking of the sample volume is recommended. Copyright © 2012 Elsevier B.V. All rights reserved.
Hybrid finite-volume/transported PDF method for the simulation of turbulent reactive flows
Raman, Venkatramanan
A novel computational scheme is formulated for simulating turbulent reactive flows in complex geometries with detailed chemical kinetics. A Probability Density Function (PDF) based method that handles the scalar transport equation is coupled with an existing Finite Volume (FV) Reynolds-Averaged Navier-Stokes (RANS) flow solver. The PDF formulation leads to closed chemical source terms and facilitates the use of detailed chemical mechanisms without approximations. The particle-based PDF scheme is modified to handle complex geometries and grid structures. Grid-independent particle evolution schemes that scale linearly with the problem size are implemented in the Monte-Carlo PDF solver. A novel algorithm, in situ adaptive tabulation (ISAT), is employed to ensure tractability of complex chemistry involving a multitude of species. Several non-reacting test cases are performed to ascertain the efficiency and accuracy of the method. Simulation results from a turbulent jet-diffusion flame case are compared against experimental data. The effects of the micromixing model, turbulence model and reaction scheme on flame predictions are discussed extensively. Finally, the method is used to analyze the Dow Chlorination Reactor. Detailed kinetics involving 37 species and 158 reactions as well as a reduced form with 16 species and 21 reactions are used. The effect of inlet configuration on reactor behavior and product distribution is analyzed. Plant-scale reactors exhibit quenching phenomena that cannot be reproduced by conventional simulation methods. The FV-PDF method predicts quenching accurately and provides insight into the dynamics of the reactor near extinction. The accuracy of the fractional time-stepping technique is discussed in the context of apparent multiple steady states observed in a non-premixed feed configuration of the chlorination reactor.
Inter-Method Discrepancies in Brain Volume Estimation May Drive Inconsistent Findings in Autism
Katuwal, Gajendra J.; Baum, Stefi A.; Cahill, Nathan D.; Dougherty, Chase C.; Evans, Eli; Evans, David W.; Moore, Gregory J.; Michael, Andrew M.
2016-01-01
Previous studies applying automatic preprocessing methods on Structural Magnetic Resonance Imaging (sMRI) report inconsistent neuroanatomical abnormalities in Autism Spectrum Disorder (ASD). In this study we investigate inter-method differences as a possible cause behind these inconsistent findings. In particular, we focus on the estimation of the following brain volumes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and total intracranial volume (TIV). T1-weighted sMRIs of 417 ASD subjects and 459 typically developing controls (TDC) from the ABIDE dataset were processed using three popular preprocessing methods: SPM, FSL, and FreeSurfer (FS). Brain volumes estimated by the three methods were correlated but had significant inter-method differences; except for TIV from SPM vs. TIV from FS, all inter-method differences were significant. ASD vs. TDC group differences in all brain volume estimates were dependent on the method used. SPM showed that TIV, GM, and CSF volumes of ASD were larger than TDC with statistical significance, whereas FS and FSL did not show significant differences in any of the volumes; in some cases, the direction of the differences was opposite to SPM. When methods were compared with each other, they showed differential biases for autism, and several biases were larger than the ASD vs. TDC differences of the respective methods. After manual inspection, we found inter-method segmentation mismatches in the cerebellum, sub-cortical structures, and inter-sulcal CSF. In addition, to validate automated TIV estimates we performed manual segmentation on a subset of subjects. Results indicate that SPM estimates are closest to manual segmentation, followed by FS, while FSL estimates were significantly lower. In summary, we show that ASD vs. TDC brain volume differences are method dependent and that these inter-method discrepancies can contribute to inconsistent neuroimaging findings in general. We suggest cross-validation across methods and emphasize the
RGB imaging volumes alignment method for color holographic displays
Zaperty, Weronika; Kozacki, Tomasz; Gierwiało, Radosław; Kujawińska, Małgorzata
2016-09-01
Recent advances in holographic displays include increased interest in multiplexing techniques, which allow for extension of the viewing angle, an increase in hologram resolution, or color imaging. In each of these situations, the image is obtained by a composition of several light wavefronts, and therefore some wavefront misalignment occurs. In this work we present a calibration method that allows for correction of these misalignments by a suitable numerical manipulation of the holographic data. For this purpose, we have developed an automated procedure based on measuring the positions of the reconstructed synthetic hologram of a target object focused at two different reconstruction distances. In view of the relatively long reconstruction distances in holographic displays, we focus on angular deviations of light beams, which result in a noticeable mutual lateral shift and inclination of the component images in space. The method proposed in this work is implemented in a color holographic display unit (single Spatial Light Modulator, SLM) utilizing the Space-Division Method (SDM). In this technique, also referred to as the Aperture Field Division (AFD) method, a significant wavefront inclination is introduced by a color filter glass mosaic plate (mask) placed in front of the SLM. It is verified that the accuracy of the calibration method, obtained for a reconstruction distance of 700 mm, is 34.5 μm for the lateral shift and 0.02° for the angular compensation. In the final experiment the presented method is verified through color image reconstruction of a real-world object.
Cortés-Giraldo, M A; Carabe, A
2015-04-07
We compare unrestricted dose average linear energy transfer (LET) maps calculated with three different Monte Carlo scoring methods in voxelized geometries irradiated with proton therapy beams. Simulations were done with the Geant4 (Geometry ANd Tracking) toolkit. The first method corresponds to a step-by-step computation of LET which has been reported previously in the literature. We found that this scoring strategy is influenced by spurious high-LET components, whose relative contribution to the dose average LET calculations significantly increases as the voxel size becomes smaller. Dose average LET values calculated for primary protons in water with a voxel size of 0.2 mm were a factor of ~1.8 higher than those obtained with a size of 2.0 mm at the plateau region for a 160 MeV beam. Such high-LET components are a consequence of proton steps in which the condensed-history algorithm determines an energy transfer to an electron of the material close to the maximum value, while the step length remains limited due to voxel boundary crossing. Two alternative methods were derived to overcome this problem. The second scores LET along the entire path described by each proton within the voxel. The third followed the same approach as the first method, but the LET was evaluated at each step from stopping power tables according to the proton kinetic energy value. We carried out microdosimetry calculations with the aim of deriving reference dose average LET values from microdosimetric quantities. Significant differences between the methods were reported with both pristine and spread-out Bragg peaks (SOBPs). The first method reported values systematically higher than the other two at depths proximal to the SOBP, by about 15% for a 5.9 cm wide SOBP and about 30% for a 11.0 cm one. At the distal SOBP, the second method gave values about 15% lower than the others. Overall, we found that the third method gave the most consistent
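The dose-averaged LET scored per voxel is the dose-weighted mean of the step LETs, which makes clear why a single short step with a large energy deposit can dominate the result; a sketch with hypothetical step data (units and values are illustrative only):

```python
import numpy as np

def dose_average_let(energy_deposits, step_lengths):
    """Dose-averaged LET in a voxel: each step's LET (eps/l) weighted by its dose eps."""
    eps = np.asarray(energy_deposits, float)     # energy deposited per step (keV)
    lengths = np.asarray(step_lengths, float)    # step length (um)
    let = eps / lengths
    return np.sum(eps * let) / np.sum(eps)       # keV/um

# hypothetical steps: one short step with a large deposit dominates the average,
# mimicking the spurious high-LET artifact of step-by-step scoring in small voxels
let_d = dose_average_let([1.0, 1.0, 5.0], [2.0, 2.0, 0.1])
```

Replacing the per-step ratio eps/l with a tabulated stopping power at the proton's kinetic energy, as in the third method above, removes this sensitivity to artificially shortened steps.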
Heuvel, Willem Van den; Soncini, Alessandro
2015-01-01
We present an ab initio methodology dedicated to the determination of the electronic structure and magnetic properties of ground and low-lying excited states, i.e., the crystal field levels, in lanthanide(III) complexes. Currently, the most popular and successful ab initio approach is the CASSCF/RASSI-SO method, consisting of the optimization of multiple complete active space self-consistent field (CASSCF) spin eigenfunctions, followed by full diagonalization of the spin-orbit coupling (SOC) Hamiltonian in the basis of the CASSCF spin states featuring spin-dependent orbitals. Based on two simple observations valid for Ln(III) complexes, namely: (i) CASSCF 4f atomic orbitals are expected to change very little when optimized for different multiconfigurational states belonging to the 4f-electronic configuration, (ii) due to strong SOC the total spin is not a good quantum number, we propose here an efficient ab initio strategy which completely avoids any multiconfigurational calculation, by optimizing a unique s...
3D photography is a reliable method of measuring infantile haemangioma volume over time.
Robertson, Sarah A; Kimble, Roy M; Storey, Kristen J; Gee Kee, Emma L; Stockton, Kellie A
2016-09-01
Infantile haemangiomas are common lesions of infancy. With the development of novel treatments utilised to accelerate their regression, there is a need for a method of assessing these lesions over time. Volume is an ideal assessment method because of its quantifiable nature. This study investigated whether 3D photography is a valid tool for measuring the volume of infantile haemangiomas over time. Thirteen children with infantile haemangiomas presenting to the Vascular Anomalies Clinic, Royal Children's Hospital/Lady Cilento Children's Hospital treated with propranolol were included in the study. Lesion volume was assessed using 3D photography at presentation, one month and three months follow up. Intrarater reliability was determined by retracing all images several months after the initial mapping. Interrater reliability of the 3D camera software was determined by two investigators, blinded to each other's results, independently assessing infantile haemangioma volume. Lesion volume decreased significantly between presentation and three-month follow-up (p<0.001). Volume intra- and interrater reliability were excellent with ICC 0.991 (95% CI 0.982, 0.995) and 0.978 (95% CI 0.955, 0.989), respectively. This study demonstrates images taken with the 3D LifeViz™ camera and lesion volume calculated with Dermapix® software is a reliable method for assessing infantile haemangioma volume over time. Copyright © 2016 Elsevier Inc. All rights reserved.
A finite volume method for cylindrical heat conduction problems based on local analytical solution
Li, Wang
2012-10-01
A new finite volume method for cylindrical heat conduction problems based on local analytical solution is proposed in this paper with detailed derivation. The calculation results of this new method are compared with the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, even though the discretized expression of this proposed method is slightly more complex than the second-order central finite volume method, making it cost more calculation time on the same grids. Numerical result shows that the total CPU time of the new method is significantly less than conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.
Stefanova, D
2000-01-01
Short (up to 60 s) supramaximal (about 400 W on average) exercise is accompanied by specific biochemical processes in the working muscles and by a general increase in energy metabolism. Outwardly, this is manifested by an excess post-exercise oxygen consumption (EPOC). Since its actual measurement is time-consuming and sometimes difficult, we propose a fixed 3-min test for EPOC prediction. The measured volumes of oxygen consumption and the corresponding periods are plotted in a coordinate system as reciprocal values. The linear equation, whose parameters were calculated by the method of least squares or determined graphically, allowed prediction of the EPOC volume with satisfactory accuracy and precision. The increase of the predicted values over the actually measured values was below 5%, and the correlation coefficient was r = 0.98. Other parameters of the recovery process were also calculated, such as the tau (half-time) of EPOC and the rate constant k.
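The reciprocal-plot procedure described in this abstract can be sketched as follows. Everything below is illustrative, not taken from the study: the helper names (`fit_line`, `predict_epoc`) and the synthetic data are hypothetical, and the data are generated so that the true asymptotic volume is known to be 4 L.

```python
# Sketch: fit a line to (1/t, 1/V); the intercept a of 1/V = a + b*(1/t)
# gives the predicted total EPOC volume as 1/a (the t -> infinity limit).

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def predict_epoc(times, volumes):
    """Predict the total EPOC volume from early-recovery measurements."""
    a, _ = fit_line([1.0 / t for t in times], [1.0 / v for v in volumes])
    return 1.0 / a  # intercept of the reciprocal plot

# synthetic data from V(t) = 4*t/(t+2), so the true asymptote is 4 L
times = [1.0, 2.0, 3.0]
volumes = [4.0 * t / (t + 2.0) for t in times]
print(round(predict_epoc(times, volumes), 3))  # -> 4.0
```

With real measurements the reciprocal plot is only approximately linear, so the fitted intercept carries the reported <5% overestimation rather than recovering the asymptote exactly.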
Segmentation of MRI Volume Data Based on Clustering Method
Directory of Open Access Journals (Sweden)
Ji Dongsheng
2016-01-01
Here we analyze the difficulties of segmenting left ventricle MR images without tag lines, and propose an algorithm for automatic segmentation of the left ventricle (LV) internal and external profiles. We propose an Incomplete K-means and Category Optimization (IKCO) method. First, using a Hough transformation to automatically locate the initial contour of the LV, the algorithm uses a simple approach to complete data subsampling and initial center determination. Next, according to the clustering rules, the proposed algorithm finishes the MR image segmentation. Finally, the algorithm uses a category optimization method to improve the segmentation results. Experiments show that the algorithm provides good segmentation results.
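As a sketch of the clustering step underlying such a segmentation, the following minimal 1-D k-means on pixel intensities may help; the IKCO subsampling, Hough initialisation, and category-optimisation stages are omitted, and the intensity values are hypothetical.

```python
# Lloyd's algorithm on scalar intensities: alternate nearest-center
# assignment and cluster-mean updates until the iteration budget is spent.

def kmeans_1d(values, centers, iters=20):
    """Return (final_centers, labels) for 1-D k-means."""
    centers = list(centers)
    for _ in range(iters):
        # assignment step: each pixel joins its nearest center
        labels = [min(range(len(centers)), key=lambda k: abs(v - centers[k]))
                  for v in values]
        # update step: move each center to the mean of its members
        for k in range(len(centers)):
            members = [v for v, lab in zip(values, labels) if lab == k]
            if members:  # keep the old center if the cluster is empty
                centers[k] = sum(members) / len(members)
    return centers, labels

# hypothetical intensities: dark myocardium vs bright blood pool
pixels = [10, 12, 11, 200, 205, 198, 14, 202]
centers, labels = kmeans_1d(pixels, centers=[0.0, 255.0])
print([round(c, 1) for c in centers])  # -> [11.8, 201.2]
```

Thresholding the image at the midpoint of the two final centers then yields a binary LV mask from which contours can be traced.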
Cavezzi, A; Schingale, F; Elio, C
2010-10-01
Accurate measurement of limb volume is considered crucial to lymphedema management. Various non-invasive methods may be used and have been validated in recent years, though suboptimal standardisation has been highlighted in different publications.
1979-05-25
This volume presents (1) methods for computer and hand analysis of numerical language performance data (with examples) and (2) samples of interview, observation, and survey instruments used in collecting language data. (Author)
CASCADIC MULTIGRID FOR FINITE VOLUME METHODS FOR ELLIPTIC PROBLEMS
Institute of Scientific and Technical Information of China (English)
Zhong-ci Shi; Xue-jun Xu; Hong-ying Man
2004-01-01
In this paper, some effective cascadic multigrid methods are proposed for solving the large scale symmetric or nonsymmetric algebraic systems arising from finite volume methods for second order elliptic problems. It is shown that these algorithms are optimal in both accuracy and computational complexity. Numerical experiments are reported to support our theory.
Souza-Junior, Eduardo José; de Souza-Régis, Marcos Ribeiro; Alonso, Roberta Caroline Bruschi; de Freitas, Anderson Pinheiro; Sinhoreti, Mario Alexandre Coelho; Cunha, Leonardo Gonçalves
2011-01-01
The aim of the present study was to evaluate the influence of curing methods and composite volumes on the marginal and internal adaptation of composite restoratives. Two cavities with different volumes (lower volume: 12.6 mm(3); higher volume: 24.5 mm(3)) were prepared on the buccal surface of 60 bovine teeth and restored using Filtek Z250 in bulk filling. For each cavity, specimens were randomly assigned into three groups according to the curing method (n=10): 1) continuous light (CL: 27 seconds at 600 mW/cm(2)); 2) soft-start (SS: 10 seconds at 150 mW/cm(2)+24 seconds at 600 mW/cm(2)); and 3) pulse delay (PD: five seconds at 150 mW/cm(2)+three minutes with no light+25 seconds at 600 mW/cm(2)). The radiant exposure for all groups was 16 J/cm(2). Marginal adaptation was measured with the dye staining gap procedure, using Caries Detector. Outer margins were stained for five seconds and the gap percentage was determined using digital images on a computer measurement program (Image Tool). Then, specimens were sectioned in slices and stained for five seconds, and the internal gaps were measured using the same method. Data were submitted to two-way analysis of variance and the Tukey test (p<0.05). Gap formation was influenced by the curing method. For CL groups, restorations with higher volume showed higher marginal gap incidence than did the lower volume restorations. Additionally, the effect of the curing method depended on the volume. Regarding marginal adaptation, SS resulted in a significant reduction of gap formation, when compared to CL, for higher volume restorations. For lower volume restorations, there was no difference among the curing methods. For internal adaptation, the modulated curing methods SS and PD promoted a significant reduction of gap formation, when compared to CL, only for the lower volume restoration. Therefore, in similar conditions of cavity configuration, the higher the volume of composite, the greater the gap formation. In addition, modulated curing methods (SS and PD) can improve
Finite volume element method for analysis of unsteady reaction-diffusion problems
Institute of Scientific and Technical Information of China (English)
Sutthisak Phongthanapanich; Pramote Dechaumphai
2009-01-01
A finite volume element method is developed for analyzing unsteady scalar reaction-diffusion problems in two dimensions. The method combines concepts employed in the finite volume and the finite element methods. The finite volume method is used to discretize the unsteady reaction-diffusion equation, while the finite element method is applied to estimate the gradient quantities at cell faces. Robustness and efficiency of the combined method have been evaluated on uniform rectangular grids by using available numerical solutions of two-dimensional reaction-diffusion problems. The numerical solutions demonstrate that the combined method is stable and can provide accurate solutions without spurious oscillation along the high-gradient boundary layers.
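The cell-balance idea behind finite volume discretization can be sketched in one dimension: each cell's average is updated from the fluxes through its two faces, so the scheme is conservative by construction. The example below is a generic explicit step for the diffusion part of a reaction-diffusion equation, not the paper's combined finite volume element scheme; the grid, coefficients, and function names are illustrative.

```python
# One explicit finite volume step for u_t = d * u_xx on a uniform 1-D grid
# with zero-flux (insulated) boundaries.

def fv_diffusion_step(u, d, dx, dt):
    """Advance cell averages u by one time step dt."""
    flux = [0.0] * (len(u) + 1)  # flux[i] crosses the face between cells i-1 and i
    for i in range(1, len(u)):
        flux[i] = -d * (u[i] - u[i - 1]) / dx  # two-point face gradient
    # cell balance: change = -(flux out - flux in) / cell width
    return [ui - dt / dx * (flux[i + 1] - flux[i]) for i, ui in enumerate(u)]

u0 = [0.0, 1.0, 0.0]                       # initial concentration spike
u1 = fv_diffusion_step(u0, d=1.0, dx=1.0, dt=0.25)
print(u1)  # -> [0.25, 0.5, 0.25]
```

Because each interior flux appears once with each sign, the total amount sum(u) is conserved exactly, which is the property the finite element gradient estimate must preserve in the combined method.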
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...
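The barycenter approach this abstract critiques can be sketched concretely: average the unit quaternions component-wise (after aligning signs, since q and -q encode the same rotation) and renormalise. This approximates the Riemannian mean only for tightly clustered rotations, which is exactly the limitation the paper addresses; the helper names and test rotations below are illustrative.

```python
import math

def average_quaternions(quats):
    """Normalised component-wise mean of unit quaternions (w, x, y, z)."""
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # flip q if it points away from the reference on the 3-sphere
        sign = 1.0 if sum(a * b for a, b in zip(q, ref)) >= 0 else -1.0
        acc = [a + sign * b for a, b in zip(acc, q)]
    norm = math.sqrt(sum(a * a for a in acc))
    return tuple(a / norm for a in acc)

def z_rot(deg):
    """Unit quaternion for a rotation of deg degrees about the z axis."""
    h = math.radians(deg) / 2.0
    return (math.cos(h), 0.0, 0.0, math.sin(h))

# two small rotations about z, +10 and -10 degrees: the mean is the identity
q = average_quaternions([z_rot(10.0), z_rot(-10.0)])
print(round(q[0], 6))  # -> 1.0
```

For widely spread rotations the renormalised barycenter leaves the geodesic mean, which is why a Riemannian (intrinsic) averaging scheme is preferred.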
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi
2017-01-01
Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the distribution of paddy field cropping patterns is important for maintaining sustainable paddy field area, and can be done by direct observation or by remote sensing. This paper discusses remote sensing for paddy field monitoring based on MODIS time series data. Time series MODIS data are difficult to classify directly because of temporal noise, so the wavelet transform and the moving average are needed as filtering methods. The objective of this study is to recognize paddy cropping patterns with the wavelet transform and the moving average in West Java using MODIS imagery (MOD13Q1) from 2001 to 2015, and then to compare the two methods. The results showed almost the same spatial distribution of cropping patterns. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods showed that the majority of cropping patterns in West Java follow a paddy-fallow-paddy-fallow pattern with various planting times. Differences in the planting schedule were caused by the availability of irrigation water.
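A centred moving average of the kind used here to suppress temporal noise can be sketched in a few lines; the window width and the NDVI-like values below are hypothetical, chosen only to show the smoothing of a single noisy spike.

```python
# Centred moving-average filter; the series ends use a shrinking window
# so the output has the same length as the input.

def moving_average(series, window=3):
    """Smooth a time series with a centred window of the given (odd) width."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo = max(0, i - half)
        hi = min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# hypothetical vegetation-index series with one noisy spike
ndvi = [0.2, 0.3, 0.9, 0.3, 0.2]
print([round(v, 3) for v in moving_average(ndvi)])  # -> [0.25, 0.467, 0.5, 0.467, 0.25]
```

A wavelet filter differs in that it separates scales before thresholding, which is why it can preserve genuine planting/harvest transitions that a plain moving average blurs.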
Two-Level Stabilized Finite Volume Methods for Stationary Navier-Stokes Equations
Directory of Open Access Journals (Sweden)
Anas Rachid
2012-01-01
We propose two algorithms of two-level methods for resolving the nonlinearity in the stabilized finite volume approximation of the Navier-Stokes equations describing the equilibrium flow of a viscous, incompressible fluid. A macroelement condition is introduced for constructing the local stabilized finite volume element formulation. Moreover, the two-level methods consist of solving a small nonlinear system on the coarse mesh and then solving a linear system on the fine mesh. The error analysis shows that the two-level stabilized finite volume element method provides an approximate solution with a convergence rate of the same order as the usual stabilized finite volume element solution solving the Navier-Stokes equations on a fine mesh, for a related choice of mesh widths.
Quantification and variability in colonic volume with a novel magnetic resonance imaging method
DEFF Research Database (Denmark)
Nilsson, M; Sandberg, Thomas Holm; Poulsen, Jakob Lykke;
2015-01-01
(i) The inter-individual and intra-individual variability of segmental colorectal volumes between two observations in healthy subjects and (ii) the change in segmental colorectal volume distribution before and after defecation. Methods: The inter-individual and intra-individual variability of four colorectal volumes (cecum...... observations were detected for any segments (all p > 0.05). Inter-individual variability varied across segments, from low correlation in the cecum/ascending colon (intra-class correlation coefficient [ICC] = 0.44) to moderate correlation in the descending colon (ICC = 0.61) and high correlation in the transverse...... (p = 0.02). Conclusions & Inferences: Imaging of segmental colorectal volume, morphology, and fecal accumulation is advantageous over conventional methods in its low variability, high spatial resolution, and its absence of contrast-enhancing agents and irradiation. Hence, the method is suitable...
Caltrans Average Annual Daily Traffic Volumes (2004)
California Environmental Health Tracking Program — [ from http://www.ehib.org/cma/topic.jsp?topic_key=79 ] Traffic exhaust pollutants include compounds such as carbon monoxide, nitrogen oxides, particulates (fine...
Volume estimation of the thalamus using freesurfer and stereology: consistency between methods.
Keller, Simon S; Gerdes, Jan S; Mohammadi, Siawoosh; Kellinghaus, Christoph; Kugel, Harald; Deppe, Katja; Ringelstein, E Bernd; Evers, Stefan; Schwindt, Wolfram; Deppe, Michael
2012-10-01
Freely available automated MR image analysis techniques are being increasingly used to investigate neuroanatomical abnormalities in patients with neurological disorders. It is important to assess the specificity and validity of automated measurements of structure volumes with respect to reliable manual methods that rely on human anatomical expertise. The thalamus is widely investigated in many neurological and neuropsychiatric disorders using MRI, but thalamic volumes are notoriously difficult to quantify given the poor between-tissue contrast at the thalamic gray-white matter interface. In the present study we investigated the reliability of automatically determined thalamic volume measurements obtained using FreeSurfer software with respect to a manual stereological technique on 3D T1-weighted MR images obtained from a 3 T MR system. Further to demonstrating impressive consistency between stereological and FreeSurfer volume estimates of the thalamus in healthy subjects and neurological patients, we demonstrate that the extent of agreeability between stereology and FreeSurfer is equal to the agreeability between two human anatomists estimating thalamic volume using stereological methods. Using patients with juvenile myoclonic epilepsy as a model for thalamic atrophy, we also show that both automated and manual methods provide very similar ratios of thalamic volume loss in patients. This work promotes the use of FreeSurfer for reliable estimation of global volume in healthy and diseased thalami.
Research on Controlled Volume Operation Method of Large-scale Water Transfer Canal
Institute of Scientific and Technical Information of China (English)
DING Zhiliang; WANG Changde; XU Duo; XIAO Hua
2011-01-01
The controlled volume method of operation is especially suitable for large-scale water delivery canal systems with complex operation requirements. An operating simulation model based on the storage volume control method for a multi-reach canal system in series was established. To address the deficiency of the existing controlled volume algorithm, an improved controlled volume algorithm covering the whole canal pools was proposed, and the simulation results indicated that the storage volume and water level of each canal pool could be accurately controlled with the improved algorithm. However, for some typical discharge demand operating conditions, this algorithm would still cause unnecessary gate adjustments and consequently increase the number of disturbed canal pools. Therefore, a controlled volume operation method for continuous canal pools was proposed and a corresponding algorithm was designed. Simulations of a practical project indicated that, for typical discharge demand operating conditions, the new controlled volume algorithm noticeably reduces the number of regulated check gates and disturbed canal pools, thus improving the control efficiency of the canal system.
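The mass balance at the heart of storage (controlled volume) operation can be sketched as a toy update: each pool's storage changes with the difference between inflow and outflow over a time step. This is a generic illustration, not the paper's algorithm; the pool size, flows, and function name are hypothetical.

```python
# Toy storage update for one canal pool: V_{k+1} = V_k + (Qin - Qout) * dt.
# A real controlled volume scheme adjusts check gates so that each pool's
# storage tracks a target; here we only integrate the balance.

def update_pool(volume, q_in, q_out, dt):
    """Explicit storage update in m^3 from flows in m^3/s over dt seconds."""
    return volume + (q_in - q_out) * dt

v = 1000.0                    # m^3, hypothetical initial pool storage
for _ in range(3):            # three 60 s steps with a 0.5 m^3/s surplus
    v = update_pool(v, q_in=5.0, q_out=4.5, dt=60.0)
print(v)  # -> 1090.0
```

In a multi-pool simulation this balance is evaluated per reach, and the gate discharges q_in/q_out become the control variables the algorithm schedules.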
Acer, Niyazi; Ilıca, Ahmet Turan; Turgut, Ahmet Tuncay; Ozçelik, Ozlem; Yıldırım, Birdal; Turgut, Mehmet
2012-01-01
Pineal gland is a very important neuroendocrine organ with many physiological functions such as regulating circadian rhythm. Radiologically, the pineal gland volume is clinically important because it is usually difficult to distinguish small pineal tumors via magnetic resonance imaging (MRI). Although many studies have estimated the pineal gland volume using different techniques, to the best of our knowledge, there has so far been no stereological work done on this subject. The objective of the current paper was to determine the pineal gland volume using stereological methods and by the region of interest (ROI) on MRI. In this paper, the pineal gland volumes were calculated in a total of 62 subjects (36 females, 26 males) who were free of any pineal lesions or tumors. The mean ± SD pineal gland volumes of the point-counting, planimetry, and ROI groups were 99.55 ± 51.34, 102.69 ± 40.39, and 104.33 ± 40.45 mm(3), respectively. No significant difference was found among the methods of calculating pineal gland volume (P > 0.05). From these results, it can be concluded that each technique is an unbiased, efficient, and reliable method, ideally suitable for in vivo examination of MRI data for pineal gland volume estimation.
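The point-counting technique mentioned here is usually implemented as the Cavalieri estimator: volume is the number of grid points hitting the structure, times the area each point represents, times the slice spacing. The formula is standard stereology, but the counts and grid parameters below are hypothetical, not taken from the study.

```python
# Cavalieri point-counting volume estimator on systematic parallel slices.

def cavalieri_volume(points_per_slice, area_per_point_mm2, slice_thickness_mm):
    """Estimated volume in mm^3 from per-slice point counts."""
    return sum(points_per_slice) * area_per_point_mm2 * slice_thickness_mm

# e.g. three MRI slices through the gland, 1 mm^2 per grid point, 2 mm spacing
print(cavalieri_volume([18, 22, 12], 1.0, 2.0))  # -> 104.0
```

The chosen numbers land near the ~100 mm(3) mean values reported above only by construction; the estimator's precision in practice depends on the grid density and the number of slices.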
Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 4, Organic methods
Energy Technology Data Exchange (ETDEWEB)
1993-08-01
This interim notice covers the following: extractable organic halides in solids, total organic halides, analysis by gas chromatography/Fourier transform-infrared spectroscopy, hexadecane extracts for volatile organic compounds, GC/MS analysis of VOCs, GC/MS analysis of methanol extracts of cryogenic vapor samples, screening of semivolatile organic extracts, GPC cleanup for semivolatiles, sample preparation for GC/MS for semi-VOCs, analysis for pesticides/PCBs by GC with electron capture detection, sample preparation for pesticides/PCBs in water and soil sediment, report preparation, Florisil column cleanup for pesticides/PCBs, silica gel and acid-base partition cleanup of samples for semi-VOCs, concentrated acid wash cleanup, carbon determination in solids using Coulometrics' CO2 coulometer, determination of total carbon/total organic carbon/total inorganic carbon in radioactive liquids/soils/sludges by the hot persulfate method, analysis of solids for carbonates using Coulometrics' Model 5011 coulometer, and Soxhlet extraction.
Evaluation of a simple method for determining muscle volume in vivo.
Infantolino, Benjamin W; Challis, John H
2016-06-14
The in vivo quantification of muscle volume is important, for example, to understand how muscles change with aging and respond to rehabilitation. Albracht et al. (2008) suggested that muscle volume can be estimated in vivo from the measurement of muscle cross-sectional area and muscle belly length only. The purpose of this study was to evaluate this proposed relationship for determining muscle volume for both the vastus lateralis (VL) and first dorsal interosseous (FDI) using ultrasound imaging. The cross-sectional area and length of 22 cadaver FDI and 6 cadaver VL muscles were imaged using ultrasound; these muscles were then dissected and muscle volumes measured directly using the water displacement technique. Estimated muscle volumes were compared with their direct measurement. For the VL, the percentage root mean square error in the estimation of muscle volume was 5.0%, and the Bland-Altman analysis had all volume estimates within the 95% confidence interval, with no evidence of bias (proportional or constant) in the volume estimates. In contrast, the percentage root mean square error for the FDI was 18.8%, with the Bland-Altman analysis showing volume estimates outside of the 95% confidence interval and proportional bias. These results indicate that the simple method proposed by Albracht et al. (2008) for the estimation of muscle volume is appropriate for the VL but not the FDI when using ultrasound imaging. Morphological disparities likely account for these differences; if accurate and fast measures of the volume of the FDI are required, other approaches should be explored.
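The Bland-Altman agreement analysis used in this study reduces to a few lines: compute the paired differences between the two methods, their mean (the bias), and the 95% limits of agreement at bias ± 1.96 standard deviations. The paired volumes below are hypothetical, chosen only to illustrate the computation.

```python
import math

def bland_altman(a, b):
    """Return (bias, lower_limit, upper_limit) for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical volumes (cm^3): estimated vs water-displacement reference
est = [101.0, 98.0, 105.0, 99.0]
ref = [100.0, 100.0, 103.0, 100.0]
bias, lo, hi = bland_altman(est, ref)
print(round(bias, 2))  # -> 0.0
```

Proportional bias, as reported for the FDI, shows up when the differences grow with the mean of the two measurements; that trend is assessed by regressing the differences on the pairwise means.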
Institute of Scientific and Technical Information of China (English)
LIU Yong-hui; DU Guang-sheng; TAO Li-li; SHEN Fang
2011-01-01
The measurement accuracy of an ultrasonic heat meter depends on the profile-linear average velocity relationship. There are various methods for calculating this relationship in the laminar and turbulent flow regions, but few for the transition region. At present, the traditional approach for the transition region is to adopt the relationship for the turbulent flow region. In this article, a simplified model of the pipe is used to study the characteristics of the transition flow at specific Reynolds numbers. The k-ε model and the Large Eddy Simulation (LES) model are used to calculate the flow field of the transition region, and a comparison with experimental results shows that the LES model is more effective than the k-ε model. It is also shown that a large error arises if the relationship based on turbulent flow is used to calculate the profile-linear average velocity relationship of the transition flow. The profile-linear average velocity relationships for Reynolds numbers ranging from 5 300 to 10 000 are calculated, and the relationship curve is obtained. The results of this article can be used to improve the measurement accuracy of ultrasonic heat meters and provide a theoretical basis for research into the whole transition flow.
Energy Technology Data Exchange (ETDEWEB)
Maringer, F.J. [Bundesversuchs- und Forschungsanstalt Arsenal, Vienna (Austria); Akis, M.C.; Stadtmann, H. [Oesterreichisches Forschungszentrum Seibersdorf GmbH (Austria); Kaineder, H. [Amt der Oberoesterreichischen Landesregierung, Linz (Austria); Kindl, P. [Technische Univ., Graz (Austria); Kralik, C. [Bundesanstalt fuer Lebensmitteluntersuchung und -forschung, Vienna (Austria); Lettner, H.; Winkler, R. [Salzburg Univ. (Austria); Ringer, W. [Salzburg Univ. (Austria); Atominstitut der Oesterreichischen Universitaeten, Vienna (Austria)]
1998-12-31
Within the Austrian radon mitigation project 'SARAH', different methods of radon diagnosis were used. For these investigations a 'Blower-Door' was employed to apply a low pressure (-50 Pa) inside the investigated houses and to look for radon entry paths. During the radon sniffing, the team got the idea of measuring the radon concentration in the Blower-Door exhaust air to obtain an estimate of the long-term average radon concentration in the building. In this paper the new method and its possible applications are presented. The estimation of the average radon entry rate and the long-term average radon concentration (annual mean), as well as the evaluation of the mitigation success (extent of radon reduction), are described and discussed. The advantage of this procedure is that an estimate of the annual mean indoor radon concentration of a building is obtained after only about three hours of measurement. (orig.)
Impedance ratio method for urine conductivity-invariant estimation of bladder volume
Directory of Open Access Journals (Sweden)
Thomas Schlebusch
2014-09-01
Non-invasive estimation of bladder volume could help patients with impaired bladder volume sensation to determine the right moment for catheterisation. Continuous, non-invasive impedance measurement is a promising technology in this scenario, although the influences of body posture and unknown urine conductivity limit wide clinical use today. We studied impedance changes related to bladder volume by simulation, in-vitro measurements, and in-vivo measurements with pigs. In this work, we present a method to reduce the influence of urine conductivity on cystovolumetry and bring bioimpedance cystovolumetry closer to clinical application.
Energy Technology Data Exchange (ETDEWEB)
Seldin, D.W.; Esser, P.D.; Nichols, A.B.; Ratner, S.J.; Alderson, P.O.
1983-12-01
The utility of a semi-automatic method of measuring left ventricular (LV) volume geometrically from gated blood-pool studies and digital subtraction angiography (DSA) was investigated using computerized edge detection and spatial calibration algorithms. LAO LV volumes determined from gated blood-pool studies were compared to volumes obtained from contrast left ventriculograms in 21 patients and the applicability of this method to DSA was evaluated in 25 additional patients who also had conventional left ventriculography. There was excellent correlation between the two, both for radionuclide studies and for DSA. Computer-based geometric determinations of LV volume appear to be rapid, accurate, and less dependent on subjective operator decisions than previously reported geometric approaches.
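The abstract does not state which geometric model its edge-detection pipeline feeds; a representative example is the standard single-plane area-length estimate, V = 8A^2/(3*pi*L), shown below as a sketch. The function name and the end-diastolic measurements are hypothetical.

```python
import math

# Single-plane area-length estimate of left-ventricular volume from the
# projected chamber area A (cm^2) and the long-axis length L (cm).

def lv_volume_area_length(area_cm2, long_axis_cm):
    """Area-length LV volume in cm^3: V = 8*A^2 / (3*pi*L)."""
    return 8.0 * area_cm2 ** 2 / (3.0 * math.pi * long_axis_cm)

# hypothetical end-diastolic measurements from an LAO projection
print(round(lv_volume_area_length(30.0, 8.0), 1))  # -> 95.5
```

The appeal of computerized edge detection, as reported above, is that A and L come from the detected contour rather than from subjective operator tracing, so this formula is applied to more reproducible inputs.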
Energy Technology Data Exchange (ETDEWEB)
Monroy Anton, J. L.; Solar Tortosa, M.; Lopez Munoz, M.; Navarro Bergada, A.; Estornell Gualde, M. A.; Melchor Iniguez, M.
2013-07-01
Our objective was to compare the V20 parameter and the average dose for a single lung volume contoured on a CT study acquired during the patient's normal breathing with those for a composite lung volume constructed from three CT studies at different phases of the respiratory cycle, and to check whether there are differences in these cases large enough to make a composite lung volume necessary for evaluating the dose-volume histogram. (Author)
Compact high order finite volume method on unstructured grids III: Variational reconstruction
Wang, Qian; Ren, Yu-Xin; Pan, Jianhua; Li, Wanai
2017-05-01
This paper presents a variational reconstruction for the high order finite volume method in solving the two-dimensional Navier-Stokes equations on arbitrary unstructured grids. In the variational reconstruction, an interfacial jump integration is defined to measure the jumps of the reconstruction polynomial and its spatial derivatives on each cell interface. The system of linear equations to determine the reconstruction polynomials is derived by minimizing the total interfacial jump integration in the computational domain using the variational method. On each control volume, the derived equations are implicit relations between the coefficients of the reconstruction polynomials defined on a compact stencil involving only the current cell and its direct face-neighbors. The reconstruction and time integration coupled iteration method proposed in our previous paper is used to achieve high computational efficiency. A problem-independent shock detector and the WBAP limiter are used to suppress non-physical oscillations in the simulation of flow with discontinuities. The advantages of the finite volume method using the variational reconstruction over the compact least-squares finite volume method proposed in our previous papers are higher accuracy, higher computational efficiency, more flexible boundary treatment and non-singularity of the reconstruction matrix. A number of numerical test cases are solved to verify the accuracy, efficiency and shock-capturing capability of the finite volume method using the variational reconstruction.
Directory of Open Access Journals (Sweden)
Hendry Sakke Tira
2014-10-01
Energy supply has been a crucial issue in the world in the last few years. The increase in energy demand caused by population growth and the depletion of world oil reserves drives the production and use of renewable energies, one of which is biogas. However, the use of biogas has not yet been maximized because of its poor purity. To address this problem, research was carried out using the water absorption method, which rural communities are expected to be able to apply themselves, thereby increasing their economy and productivity. This study includes variations of the absorbing water volume (V) and the input biogas volume flow rate (Q). Raw biogas flowed into the absorbent was analyzed for each combination of absorbing water volume and input biogas volume rate. Improvement in biogas composition through this purification method was obtained: the levels of CO2 and H2S were reduced significantly, specifically in the early minutes of the purification process, while the level of CH4 increased, improving the quality of the raw biogas. However, over the course of the purification the composition of the purified biogas approached that of the raw biogas; the main reason was an increase in the pH of the absorbent. Higher water volume and slower biogas volume rate gave better results in reducing CO2 and H2S and increasing CH4 than lower water volume and higher biogas volume rate, respectively. The purification method shows good promise for improving the quality of raw biogas and has the advantages of being cheap and easy to operate.
Computational Methods for Protein Structure Prediction and Modeling Volume 2: Structure Prediction
Xu, Ying; Liang, Jie
2007-01-01
Volume 2 of this two-volume sequence focuses on protein structure prediction and includes protein threading, De novo methods, applications to membrane proteins and protein complexes, structure-based drug design, as well as structure prediction as a systems problem. A series of appendices review the biological and chemical basics related to protein structure, computer science for structural informatics, and prerequisite mathematics and statistics.
Adaptive Finite Volume Method for the Shallow Water Equations on Triangular Grids
Directory of Open Access Journals (Sweden)
Sudi Mungkasi
2016-01-01
This paper presents a numerical entropy production (NEP) scheme for the two-dimensional shallow water equations on unstructured triangular grids. We implement NEP as the error indicator for adaptive mesh refinement or coarsening when solving the shallow water equations with a finite volume method. Numerical simulations show that NEP serves successfully as a refinement/coarsening indicator in the adaptive mesh finite volume method: the method refines the mesh around nonsmooth regions and coarsens it around smooth regions.
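The refine/coarsen logic described above can be sketched in a few lines. This is a minimal 1D illustration, not the paper's 2D unstructured implementation; the function names, thresholds, and test values are illustrative assumptions.

```python
# Sketch: numerical entropy production (NEP) as a refine/coarsen flag on a
# 1D mesh. Cell values of the entropy eta at two time levels and entropy
# fluxes psi at cell faces are assumed given; names and thresholds are
# illustrative, not from the paper.

def nep_indicator(eta_old, eta_new, psi_face, dt, dx):
    """Residual of the discrete entropy inequality, per cell."""
    n = len(eta_old)
    return [(eta_new[i] - eta_old[i]) / dt
            + (psi_face[i + 1] - psi_face[i]) / dx
            for i in range(n)]

def flag_cells(nep, refine_tol, coarsen_tol):
    """'refine' where entropy production is large, 'coarsen' where negligible."""
    flags = []
    for p in nep:
        if abs(p) > refine_tol:
            flags.append("refine")
        elif abs(p) < coarsen_tol:
            flags.append("coarsen")
        else:
            flags.append("keep")
    return flags

# Smooth region -> near-zero NEP; a sharp entropy drop mimics a shock cell.
eta_old = [1.0, 1.0, 1.0, 1.0]
eta_new = [1.0, 1.0, 0.5, 1.0]       # entropy drops sharply in cell 2
psi = [0.0, 0.0, 0.0, 0.0, 0.0]      # zero entropy flux for simplicity
nep = nep_indicator(eta_old, eta_new, psi, dt=0.1, dx=0.1)
flags = flag_cells(nep, refine_tol=1.0, coarsen_tol=1e-8)
```

Only the cell with large entropy production is flagged for refinement; the smooth cells are candidates for coarsening.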
A lattice Boltzmann coupled to finite volumes method for solving phase change problems
Directory of Open Access Journals (Sweden)
El Ganaoui Mohammed
2009-01-01
A numerical scheme coupling the lattice Boltzmann and finite volume approaches has been developed and qualified on phase change test problems. In this work, the coupled partial differential equations of momentum conservation are solved with a non-uniform lattice Boltzmann method, while the energy equation is discretized using a finite volume method. Simulations show the ability of this hybrid method to model the effects of convection and to predict transfers. Benchmarks are carried out for both conduction- and convection-dominated solid/liquid transitions, with comparisons against available analytical solutions and experimental results.
Energy Technology Data Exchange (ETDEWEB)
Fazleev, M.P.; Chekhov, O.S.; Ermakov, E.A.
1985-06-20
This paper discusses the results of an investigation of the volume-averaged gas content, hydrodynamic regimes, and foaming in the K2O-V2O5 melt plus gas system, which is used as a catalyst in several thermocatalytic processes. The experimental setup is described, and a comparison of literature data on the gas content of different gas-liquid systems under comparable conditions is presented. The authors were able to determine the boundaries of the hydrodynamic modes in a bubbling reactor and derive equations for the calculation of the gas content. It was found that the gas content of the melt increased when V2O5 was reduced to V2O4 in the reaction portion of the reaction-regeneration cycle. Regeneration of the melt restores the gas content to its original level.
DEFF Research Database (Denmark)
Hattel, Jesper; Hansen, Preben
1995-01-01
This paper presents a novel control volume based FD method for solving the equilibrium equations in terms of displacements, i.e. the generalized Navier equations. The method is based on the widely used cv-FDM solution of heat conduction and fluid flow problems involving a staggered grid formulation...
Thermal characterization and analysis of microliter liquid volumes using the three-omega method.
Roy-Panzer, Shilpi; Kodama, Takashi; Lingamneni, Srilakshmi; Panzer, Matthew A; Asheghi, Mehdi; Goodson, Kenneth E
2015-02-01
Thermal phenomena in many biological systems offer an alternative detection opportunity for quantifying relevant sample properties. While there is substantial prior work on thermal characterization methods for fluids, the push in the biology and biomedical research communities towards analysis of reduced sample volumes drives a need to extend and scale these techniques to the volumes of interest, which can be below 100 pl. This work applies the 3ω technique to measure the temperature-dependent thermal conductivity and heat capacity of de-ionized water, silicone oil, and salt buffer solution droplets from 24 to 80 °C. Heater geometries range in length from 200 to 700 μm and in width from 2 to 5 μm to accommodate the size restrictions imposed by small-volume droplets. We use these devices to measure droplet volumes of 2 μl and demonstrate the potential to extend the technique down to picoliter droplet volumes, based on an analysis of the thermally probed volume. Sensitivity and uncertainty analyses provide guidance on the relevant design variables by investigating the tradeoffs between measurement frequency regime, device geometry, and substrate material. Experimental results show that thermal conductivity and heat capacity can be extracted from these sample volumes to within less than 1% of the values reported in the literature.
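The data-reduction step of a 3ω measurement can be illustrated with the standard slope relation: for a narrow line heater, the in-phase AC temperature oscillation falls linearly with ln(ω), with slope -P/(2πLk). The sketch below fits that slope to synthetic data and recovers k; the numbers are illustrative, not from the paper.

```python
import math

# Sketch of the slope-fit step of the 3-omega method. Synthetic data are
# generated from the standard line-heater relation
#   dT(omega) = -P/(2*pi*L*k) * ln(omega) + const,
# then k is recovered as k = P / (2*pi*L*|slope|).

def linear_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def conductivity_from_3omega(omegas, dT, power, length):
    """Thermal conductivity from the dT-vs-ln(omega) slope."""
    slope = linear_slope([math.log(w) for w in omegas], dT)
    return power / (2.0 * math.pi * length * abs(slope))

# Assumed values: k = 0.6 W/(m K) (roughly water), 500 um heater, 1 mW power.
k_true, L, P = 0.6, 500e-6, 1e-3
omegas = [10.0, 30.0, 100.0, 300.0, 1000.0]
dT = [-P / (2 * math.pi * L * k_true) * math.log(w) + 2.0 for w in omegas]
k_est = conductivity_from_3omega(omegas, dT, P, L)
```

Because the synthetic data are exactly linear in ln(ω), the fit recovers the assumed conductivity to machine precision; real measurements would also require the heat-capacity extraction and uncertainty analysis discussed above.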
Energy Technology Data Exchange (ETDEWEB)
Hermeline, F
2008-12-15
This dissertation presents some new methods of finite volume type for approximating partial differential equations on arbitrary meshes. The main idea lies in solving the problem twice. The methods address elliptic equations with variable (anisotropic, antisymmetric, discontinuous) coefficients; linear and nonlinear parabolic equations (heat equation, radiative diffusion, magnetic diffusion with Hall effect); wave-type equations (Maxwell, acoustics); and the elasticity and Stokes equations. Numerous numerical experiments show the good behaviour of this type of method. (author)
Energy Technology Data Exchange (ETDEWEB)
Le Dez, V.; Lallemand, M. [Ecole Nationale Superieure de Mecanique et d'Aerotechnique (ENSMA), 86 - Poitiers (France)]; Sakami, M.; Charette, A. [Quebec Univ., Chicoutimi, PQ (Canada). Dept. des Sciences Appliquees]
1996-12-31
An efficient method is proposed for determining the radiant heat transfer field in a grey semi-transparent medium enclosed in a 2-D polygonal cavity whose surface boundaries reflect radiation in a purely diffuse manner, both at equilibrium and in coupled radiation-conduction situations. The technique simultaneously uses the finite volume method on a non-structured triangular mesh, the discrete ordinate method and the ray shooting method. The main mathematical developments and comparative results with the discrete ordinate method in orthogonal curvilinear coordinates are included. (J.S.) 10 refs.
A Finite Volume Method with Unstructured Triangular Grids for Numerical Modeling of Tidal Current
Institute of Scientific and Technical Information of China (English)
SHI Hong-da; LIU zhen
2005-01-01
The finite volume method (FVM) has many advantages in 2-D shallow water numerical simulation. In this study, the finite volume method is used with unstructured triangular grids to simulate tidal currents. The Roe scheme is applied to calculate the intercell numerical flux, and the MUSCL method is introduced to improve its accuracy. The time integration is a two-step predictor-corrector scheme. To verify the present method, Stoker's problem is calculated and the result is compared with the analytical solution; the comparison indicates that the method is feasible. The sea area of a port is then used as an example to test the method established here. The result shows that the present computational method is satisfactory and could be applied to engineering problems.
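The intercell Roe flux mentioned above can be sketched for the 1D shallow water equations (the 2D unstructured and MUSCL machinery is omitted). The state is U = (h, hu); the Roe average uses the square-root-of-depth weighting. This is a textbook-style sketch, not the paper's code.

```python
import math

# Sketch of a Roe numerical flux for the 1D shallow water equations.
# State U = (h, hu), flux F = (hu, hu*u + g*h^2/2).

G = 9.81  # gravitational acceleration

def physical_flux(h, hu):
    u = hu / h
    return (hu, hu * u + 0.5 * G * h * h)

def roe_flux(hL, huL, hR, huR):
    uL, uR = huL / hL, huR / hR
    sL, sR = math.sqrt(hL), math.sqrt(hR)
    u = (sL * uL + sR * uR) / (sL + sR)     # Roe-averaged velocity
    c = math.sqrt(0.5 * G * (hL + hR))      # Roe-averaged celerity
    dh, dhu = hR - hL, huR - huL
    # Wave strengths for the eigenvectors r1 = (1, u-c), r2 = (1, u+c)
    a1 = ((u + c) * dh - dhu) / (2 * c)
    a2 = (dhu - (u - c) * dh) / (2 * c)
    fL, fR = physical_flux(hL, huL), physical_flux(hR, huR)
    diss0 = abs(u - c) * a1 + abs(u + c) * a2
    diss1 = abs(u - c) * a1 * (u - c) + abs(u + c) * a2 * (u + c)
    return (0.5 * (fL[0] + fR[0]) - 0.5 * diss0,
            0.5 * (fL[1] + fR[1]) - 0.5 * diss1)

# Consistency check: for equal left/right states the dissipation vanishes
# and the Roe flux reduces to the physical flux.
f = roe_flux(2.0, 1.0, 2.0, 1.0)
```

A MUSCL extension would feed limited, reconstructed interface states (hL, huL, hR, huR) into the same flux function.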
An Unstructured Finite Volume Method for Impact Dynamics of a Thin Plate
Institute of Scientific and Technical Information of China (English)
Weidong Chen; Yanchun Yu
2012-01-01
The suitability of an unstructured finite volume method for structural dynamics is assessed in simulations of impact dynamics. A robust explicit dual-time stepping method is utilized to obtain time-accurate solutions. Impact dynamics is a complex problem that must consider strength models and equations of state to describe the mechanical behavior of materials. The current method has several features: 1) the discrete equations of the unstructured finite volume method naturally follow the conservation law; 2) the explicit dual-time stepping method is suitable for time-accurate analysis of impact dynamics problems; 3) the method does not produce grid distortion when large deformations appear. The method is validated on an impact dynamics problem for an elastic plate with given initial conditions and material properties, and the results agree with finite element numerical data.
A simple, quantitative method using alginate gel to determine rat colonic tumor volume in vivo.
Irving, Amy A; Young, Lindsay B; Pleiman, Jennifer K; Konrath, Michael J; Marzella, Blake; Nonte, Michael; Cacciatore, Justin; Ford, Madeline R; Clipson, Linda; Amos-Landgraf, James M; Dove, William F
2014-04-01
Many studies of the response of colonic tumors to therapeutics use tumor multiplicity as the endpoint to determine the effectiveness of the agent. These studies can be greatly enhanced by accurate measurements of tumor volume. Here we present a quantitative method to easily and accurately determine colonic tumor volume. This approach uses a biocompatible alginate to create a negative mold of a tumor-bearing colon; this mold is then used to make positive casts of dental stone that replicate the shape of each original tumor. The weight of the dental stone cast correlates highly with the weight of the dissected tumors. After refinement of the technique, overall error in tumor volume was 16.9% ± 7.9% and includes error from both the alginate and dental stone procedures. Because this technique is limited to molding of tumors in the colon, we utilized the Apc(Pirc/+) rat, which has a propensity for developing colonic tumors that reflect the location of the majority of human intestinal tumors. We have successfully used the described method to determine tumor volumes ranging from 4 to 196 mm³. Alginate molding combined with dental stone casting is a facile method for determining tumor volume in vivo without costly equipment or knowledge of analytic software. This broadly accessible method creates the opportunity to objectively study colonic tumors over time in living animals in conjunction with other experiments and without transferring animals from the facility where they are maintained.
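The cast-weight-to-volume conversion implied above reduces to dividing the dental stone cast weight by the density of the set stone. The sketch below makes that arithmetic explicit; the density value is an assumed placeholder, not a figure from the paper.

```python
# Sketch of converting a dental stone cast weight to a tumor volume.
# The density of set dental stone is an ASSUMED value (~2.2 g/cm^3);
# a real workflow would calibrate it against casts of known volume.

STONE_DENSITY_G_PER_MM3 = 2.2e-3   # assumed 2.2 g/cm^3, expressed per mm^3

def tumor_volume_mm3(cast_weight_g, density=STONE_DENSITY_G_PER_MM3):
    """Volume of the positive cast, used as a proxy for tumor volume."""
    return cast_weight_g / density

# At the assumed density, a 0.11 g cast corresponds to a 50 mm^3 tumor,
# inside the 4-196 mm^3 range reported above.
v = tumor_volume_mm3(0.11)
```

The reported 16.9% ± 7.9% overall error would apply on top of this conversion, since it covers both the alginate molding and the stone casting steps.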
Arun, K. R.; Kraft, M.; Lukáčová-Medvid'ová, M.; Prasad, Phoolan
2009-02-01
We present a generalization of the finite volume evolution Galerkin scheme [M. Lukáčová-Medvid'ová, J. Saibertová, G. Warnecke, Finite volume evolution Galerkin methods for nonlinear hyperbolic systems, J. Comput. Phys. 183 (2002) 533-562; M. Lukáčová-Medvid'ová, K.W. Morton, G. Warnecke, Finite volume evolution Galerkin (FVEG) methods for hyperbolic problems, SIAM J. Sci. Comput. 26 (2004) 1-30] for hyperbolic systems with spatially varying flux functions. Our goal is to develop a genuinely multi-dimensional numerical scheme for wave propagation problems in heterogeneous media. We illustrate our methodology for acoustic waves in a heterogeneous medium, but the results can be generalized to more complex systems. The finite volume evolution Galerkin (FVEG) method is a predictor-corrector method combining a finite volume corrector step with an evolutionary predictor step. In order to evolve fluxes along the cell interfaces we use a multi-dimensional approximate evolution operator, constructed using the theory of bicharacteristics under the assumption of spatially dependent wave speeds. To approximate the heterogeneous medium a staggered grid approach is used. Several numerical experiments for wave propagation with continuous as well as discontinuous wave speeds confirm the robustness and reliability of the new FVEG scheme.
Simulation of pore pressure accumulation under cyclic loading using Finite Volume Method
DEFF Research Database (Denmark)
Tang, Tian; Hededal, Ole
2014-01-01
This paper presents a finite volume implementation of a porous, nonlinear soil model capable of simulating pore pressure accumulation under cyclic loading. The mathematical formulations are based on modified Biot's coupled theory by substituting the original elastic constitutive model with an advanced elastoplastic model suitable for describing monotonic as well as cyclic loading conditions. The finite volume method is applied to discretize these formulations. The resulting set of coupled nonlinear algebraic equations is then solved by a 'segregated' solution procedure. An efficient return mapping algorithm is used to calculate the stress and strain relation at each control volume level. Test cases show very good performance of the model.
A stencil-like volume of fluid (VOF) method for tracking free interface
Institute of Scientific and Technical Information of China (English)
LI Xiao-wei; FAN Jun-fei
2008-01-01
A stencil-like volume of fluid (VOF) method is proposed for tracking free interface. A stencil on a grid cell is worked out according to the normal direction of the interface, in which only three interface positions are possible in 2D cases, and the interface can be reconstructed by only requiring the known local volume fraction information. On the other hand, the fluid-occupying-length is defined on each side of the stencil, through which a unified fluid-occupying volume model and a unified algorithm can be obtained to solve the interface advection equation. The method is suitable for the arbitrary geometry of the grid cell, and is extendible to 3D cases. Typical numerical examples show that the current method can give "sharp" results for tracking free interface.
The finite volume local evolution Galerkin method for solving the hyperbolic conservation laws
Sun, Yutao; Ren, Yu-Xin
2009-07-01
This paper presents a finite volume local evolution Galerkin (FVLEG) scheme for solving hyperbolic conservation laws. The FVLEG scheme is a simplification of the finite volume evolution Galerkin (FVEG) method. In FVEG, a necessary step is to compute the dependent variables at cell interfaces at t_n + τ (0 < τ ≤ Δt); the FVLEG scheme is obtained by studying the limit of the evolution operators of FVEG as τ → 0. The FVLEG scheme greatly simplifies the evaluation of the numerical fluxes. It is also well suited to the semi-discrete finite volume method, making the flux evaluation decoupled from the reconstruction procedure while maintaining the genuine multi-dimensional nature of the FVEG methods. The derivation of the FVLEG scheme is presented in detail. The performance of the proposed scheme is studied by solving several test cases. It is shown that the FVLEG scheme can obtain very satisfactory numerical results in terms of accuracy and resolution.
Directory of Open Access Journals (Sweden)
Yankui Sun
2016-03-01
With the introduction of spectral-domain optical coherence tomography (SD-OCT), much larger image datasets are routinely acquired compared to what was possible with the previous generation of time-domain OCT, so there is a critical need for three-dimensional (3D) segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volume datasets are obtained by applying a 3D smoothing filter and a 3D differential filter. Their linear combination is then calculated to generate new volume data with an enhanced boundary surface, where pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-Scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of error points. Our method can extract retinal layer boundary surfaces sequentially with a decreasing search region of volume data. We performed automatic segmentation on eight human OCT volume datasets acquired from a commercial Spectralis OCT system, where each volume contains 97 OCT B-Scan images with a resolution of 496×512 (each B-Scan comprising 512 A-Scans of 496 pixels each). Experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
Seo, Mansu; Park, Hana; Yoo, DonGyu; Jung, Youngsuk; Jeong, Sangkwon
Gauging the volume or mass of liquid propellant of a rocket vehicle in space is an important issue for its economic feasibility and optimized loading mass. The pressure-volume-temperature (PVT) gauging method is one of the most suitable measuring techniques in space due to its simplicity and reliability. This paper presents unique experimental results and analyses of the PVT gauging method using liquid nitrogen under microgravity conditions achieved by parabolic flight. A vacuum-insulated, cylindrical liquid nitrogen storage tank with a 9.2 L volume was manufactured in compliance with parabolic flight regulations. PVT gauging experiments were conducted at low liquid fractions, from 26% to 32%. Pressure, temperature, and the helium mass injected into the storage tank were measured to obtain the ullage volume from the gas state equation; the liquid volume then follows from the measured ullage volume and the known total tank volume. Two sets of parabolic flights were conducted, each comprising approximately 10 parabolas. In the first set, the short initial waiting time (3-5 s) could not establish sufficient thermal equilibrium at the beginning, causing inaccurate gauging results due to insufficient knowledge of the initial helium partial pressure in the tank. In the second set, helium injection at a high mass flow rate after a 12 s waiting time under microgravity achieved proper initial thermal equilibrium and accurate measurement of the initial helium partial pressure. Liquid volume measurement errors in the second set were within 11%.
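The gas-state-equation step above can be sketched as follows: injecting a known helium mass and measuring the resulting pressure rise at (assumed) constant temperature gives the ullage volume from the ideal-gas law, and the liquid volume follows from the known tank volume. The specific numbers below are illustrative assumptions, not the flight data.

```python
# Sketch of the PVT gauging arithmetic: ullage volume from an isothermal
# ideal-gas helium injection, liquid volume by subtraction from the known
# tank volume. Real gauging must also account for the nitrogen vapor and
# non-ideal effects; this is the idealized core relation only.

R_HELIUM = 2077.1  # J/(kg K), specific gas constant of helium

def ullage_volume(m_helium_kg, temperature_K, p_before_Pa, p_after_Pa):
    """V_ullage = m * R * T / dP, assuming ideal gas and isothermal injection."""
    dp = p_after_Pa - p_before_Pa
    return m_helium_kg * R_HELIUM * temperature_K / dp

def liquid_volume(tank_volume_m3, m_helium_kg, T, p0, p1):
    return tank_volume_m3 - ullage_volume(m_helium_kg, T, p0, p1)

# Assumed scenario: 9.2 L tank at 80 K; injecting 1 g of helium raises the
# pressure by ~25.8 kPa, implying about 6.4 L of ullage and ~2.8 L of liquid,
# i.e. roughly the 26-32% liquid fraction range studied above.
v_liq = liquid_volume(9.2e-3, 1.0e-3, 80.0, 100e3, 125.8e3)
```

This also makes the paper's failure mode concrete: if the initial helium partial pressure (p_before) is not known accurately, the pressure difference and hence the gauged volumes are biased.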
Kang, Namgoo; Jung, Min-Ho; Jeong, Hyun-Cheol; Lee, Yung-Seop
2015-06-01
The general sample standard deviation and Monte-Carlo methods are frequently used to estimate confidence intervals for uncertainties in greenhouse gas emission, based on the critical assumption that a given data set follows a normal (Gaussian) or otherwise statistically known probability distribution. However, uncertainty estimated with those methods is severely limited in practical applications where it is challenging to assume the probability distribution of a data set, or where the real data distribution appears to deviate significantly from known probability distribution models. To address these issues in the reasonable estimation of uncertainty about the average of greenhouse gas emission, we present two statistical methods grounded in statistical theory: the pooled standard deviation method (PSDM) and the standardized-t bootstrap method (STBM). We also report results on the uncertainties about the average of data sets of methane (CH4) emission from rice cultivation under four different irrigation conditions in Korea, measured by gas sampling and subsequent gas analysis. Applications of the PSDM and the STBM to these data sets clearly demonstrate that the uncertainties estimated by the PSDM were significantly smaller than those by the STBM. We found that the PSDM should be adopted in the many cases where the data distribution appears to follow an assumed normal distribution, with both spatial and temporal variations taken into account. The STBM, however, is the more appropriate method in the common practical situations where it is unrealistic to assume or determine a probability distribution model because the data set shows evidence of a fairly asymmetric distribution deviating severely from known probability distribution models.
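The standardized-t bootstrap named above can be sketched for a confidence interval on a mean. The idea is to bootstrap the studentized statistic t* = (mean* - mean)/(s*/√n) rather than the mean itself, so no distributional form is assumed. The data, B, and seed below are illustrative, not the paper's methane measurements.

```python
import math
import random

# Sketch of a standardized-t (studentized) bootstrap confidence interval
# for a mean, as in the STBM approach described above.

def mean_std(xs):
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return m, s

def t_bootstrap_ci(xs, alpha=0.05, B=2000, seed=1):
    """(lower, upper) interval from bootstrap quantiles of the t statistic."""
    rng = random.Random(seed)
    n = len(xs)
    m, s = mean_std(xs)
    tstats = []
    for _ in range(B):
        bs = [rng.choice(xs) for _ in range(n)]
        mb, sb = mean_std(bs)
        tstats.append((mb - m) / (sb / math.sqrt(n)))
    tstats.sort()
    t_hi = tstats[int((1 - alpha / 2) * B) - 1]
    t_lo = tstats[int((alpha / 2) * B)]
    se = s / math.sqrt(n)
    return m - t_hi * se, m - t_lo * se

# Skewed synthetic "emission" data: the bootstrap interval need not be
# symmetric about the sample mean, unlike a normal-theory interval.
data = [1.1, 0.8, 1.3, 0.9, 4.2, 1.0, 1.2, 0.7, 2.9, 1.1]
low, high = t_bootstrap_ci(data)
```

For right-skewed data like this, the interval is typically wider on one side of the mean, which is exactly the behavior that normal-theory intervals miss.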
A new high-order finite volume method for 3D elastic wave simulation on unstructured meshes
Zhang, Wensheng; Zhuang, Yuan; Zhang, Lina
2017-07-01
In this paper, we propose a new efficient high-order finite volume method for 3D elastic wave simulation on unstructured tetrahedral meshes. Starting from relatively coarse tetrahedral meshes, we subdivide each tetrahedron to generate a stencil for the high-order polynomial reconstruction; the subdivision algorithm guarantees that the number of subelements exceeds the degrees of freedom of a complete polynomial. We perform the reconstruction on this stencil using cell-averaged quantities and hierarchical orthonormal basis functions. Unlike the traditional high-order finite volume method, our new method has a very local property, like DG, and can be written as an inner-split computational scheme, which is beneficial to reducing computational cost. Moreover, the stencil in our method is easy to generate for all tetrahedra, especially in the three-dimensional case. The resulting reconstruction matrix is invertible and remains unchanged for all tetrahedra, so it can be pre-computed and stored before time evolution. These advantages facilitate parallelization and high-order computation. We show convergence results obtained with the proposed method up to fifth-order accuracy in space; high-order accuracy in time is obtained by the Runge-Kutta method. Comparisons between numerical and analytic solutions show that the proposed method provides accurate wavefield information, and a numerical simulation of a realistic model with complex topography demonstrates its effectiveness and potential applications. Though the method is formulated for the 3D elastic wave equation, it can be extended to other linear hyperbolic systems.
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Insufficient sample sizes at smaller area levels make poverty indicators measured by direct estimation subject to high standard errors, so analyses based on them are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this is Small Area Estimation (SAE), and among the many SAE approaches is the Empirical Best Linear Unbiased Prediction (EBLUP). EBLUP based on the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂; this drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling average household expenditure per capita, and implements a bootstrap procedure to calculate the mean square error (MSE) in order to compare the accuracy of the EBLUP method with direct estimation. Results show that the EBLUP method reduced the MSE in small area estimation.
Institute of Scientific and Technical Information of China (English)
Min YANG
2008-01-01
The author considers a thermal convection problem with infinite Prandtl number in two or three dimensions. The mathematical model of this problem is described as an initial boundary value problem made up of three partial differential equations: one equation of convection-dominated diffusion type for the temperature, and two of Stokes type for the normalized velocity and pressure. The approximate solution is obtained by a penalty finite volume method for the Stokes equations and a multistep upwind finite volume method for the convection-diffusion equation. Under suitable smoothness of the exact solution, error estimates in some discrete norms are derived.
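The upwind treatment of the convection-dominated temperature equation can be sketched in its simplest first-order, 1D form (the paper's multistep scheme and 2D/3D setting are not reproduced here; names and parameters are illustrative).

```python
# Sketch of a first-order upwind finite volume step for a 1D
# convection-diffusion equation T_t + a*T_x = k*T_xx, the kind of
# convection-dominated temperature equation treated above.

def upwind_step(T, a, k, dt, dx):
    """One explicit step on a periodic 1D mesh; assumes a > 0."""
    n = len(T)
    out = []
    for i in range(n):
        conv = a * (T[i] - T[i - 1]) / dx                    # upwind convection
        diff = k * (T[(i + 1) % n] - 2 * T[i] + T[i - 1]) / dx ** 2
        out.append(T[i] + dt * (diff - conv))
    return out

# Sanity check: a constant temperature field is an exact steady state.
T = upwind_step([1.0] * 8, a=1.0, k=0.01, dt=0.01, dx=0.1)
```

Upwinding adds numerical diffusion that stabilizes the convection-dominated regime; higher-order multistep upwind schemes, as in the paper, reduce that extra diffusion while keeping stability.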
Computational Methods for Protein Structure Prediction and Modeling Volume 1: Basic Characterization
Xu, Ying; Liang, Jie
2007-01-01
Volume one of this two-volume sequence focuses on the basic characterization of known protein structures as well as structure prediction from protein sequence information. The 11 chapters provide an overview of the field, covering key topics in modeling, force fields, classification, computational methods, and structure prediction. Each chapter is a self-contained review designed to cover (1) definition of the problem and an historical perspective, (2) mathematical or computational formulation of the problem, (3) computational methods and algorithms, (4) performance results, (5) existing software packages, and (6) strengths, pitfalls, challenges, and future research directions.
Hybrid Finite Element and Volume Integral Methods for Scattering Using Parametric Geometry
DEFF Research Database (Denmark)
Volakis, John L.; Sertel, Kubilay; Jørgensen, Erik
2004-01-01
In this paper we address several topics relating to the development and implementation of volume integral and hybrid finite element methods for electromagnetic modeling. Comparisons of volume integral equation formulations with the finite element-boundary integral method are given in terms of accuracy ... of vanishing divergence within the element but non-zero curl. In addition, a new domain decomposition is introduced for solving array problems involving several million degrees of freedom; three orders of magnitude CPU reduction is demonstrated for such applications.
High-Order Spectral Volume Method for 2D Euler Equations
Wang, Z. J.; Zhang, Laiping; Liu, Yen; Kwak, Dochan (Technical Monitor)
2002-01-01
The Spectral Volume (SV) method is extended to the 2D Euler equations. The focus of this paper is to study the performance of the SV method on multidimensional non-linear systems. Implementation details including total variation diminishing (TVD) and total variation bounded (TVB) limiters are presented. Solutions with both smooth features and discontinuities are utilized to demonstrate the overall capability of the SV method.
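The TVD limiting referenced above can be illustrated with the classic minmod slope limiter applied to cell averages (the SV method's subcell reconstruction itself is not reproduced; this is a generic 1D sketch with illustrative names).

```python
# Sketch of minmod-type TVD slope limiting on 1D cell averages:
# the limited slope is zero at local extrema and otherwise takes the
# smaller-magnitude one-sided difference, preventing new oscillations.

def minmod(a, b):
    """Zero if the slopes disagree in sign, else the smaller in magnitude."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u, dx):
    """Limited slope for each interior cell from one-sided differences."""
    return [minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
            for i in range(1, len(u) - 1)]

# Monotone data keeps its slope; the local extremum is clipped to zero.
s = limited_slopes([0.0, 1.0, 2.0, 1.0], dx=1.0)
```

TVB limiters relax the extremum clipping by a tolerance proportional to dx², which avoids degrading accuracy at smooth extrema while still bounding the total variation.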
Small Volume Dissolution Testing as a Powerful Method during Pharmaceutical Development
Directory of Open Access Journals (Sweden)
Eric Beyssac
2010-11-01
Standard compendial dissolution apparatus are the first choice for the development of new dissolution methods. Nevertheless, limitations arising from the amount of material available, analytical sensitivity, or a lack of discrimination or biorelevance may warrant the use of non-compendial methods, and here small volume dissolution methods offer strong advantages. The present study primarily evaluates the dissolution performance of various drug products having different release mechanisms, using commercially available small volume USP2 dissolution equipment. The present series of tests indicates that small volume dissolution is a useful tool for the characterization of immediate release drug products. Depending on the release mechanism, different speed factors are proposed to mimic common one liter vessel performance. In addition, by increasing the discriminating power of the dissolution method, it potentially improves knowledge of formulations and of typical events evaluated during pharmaceutical development, such as ageing or scale-up. Small volume dissolution is therefore a method of choice when screening for critical quality attributes of rapidly dissolving tablets, where differences are often difficult to detect under standard working conditions.
Well-balanced finite volume evolution Galerkin methods for the shallow water equations
Lukáčová-Medvid'ová, M.; Noelle, S.; Kraft, M.
2007-01-01
We present a new well-balanced finite volume method within the framework of the finite volume evolution Galerkin (FVEG) schemes. The methodology will be illustrated for the shallow water equations with source terms modelling the bottom topography and Coriolis forces. Results can be generalized to more complex systems of balance laws. The FVEG methods couple a finite volume formulation with approximate evolution operators. The latter are constructed using the bicharacteristics of multidimensional hyperbolic systems, such that all of the infinitely many directions of wave propagation are taken into account explicitly. We derive a well-balanced approximation of the integral equations and prove that the FVEG scheme is well-balanced for the stationary steady states as well as for the steady jets in the rotational frame. Several numerical experiments for stationary and quasi-stationary states as well as for steady jets confirm the reliability of the well-balanced FVEG scheme.
A numerical study of 2D detonation waves with adaptive finite volume methods on unstructured grids
Hu, Guanghui
2017-02-01
In this paper, a framework of adaptive finite volume solutions for the reactive Euler equations on unstructured grids is proposed. The main ingredients of the algorithm include a second-order total variation diminishing Runge-Kutta method for temporal discretization and, for the spatial discretization, a finite volume method with piecewise linear reconstruction of the conservative variables, in which the least squares method is employed for the reconstruction and a weighted essentially non-oscillatory strategy is used to suppress potential numerical oscillations. To cope with the high demand on computational resources caused by the stiffness of the reaction term and by the shock structures in the solutions, an h-adaptive method is introduced. OpenMP parallelization of the algorithm is also adopted to further improve efficiency. Several one- and two-dimensional benchmark tests on the ZND model are studied in detail, and the numerical results show the effectiveness of the proposed method.
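The second-order TVD Runge-Kutta time stepping named above is the standard two-stage (Heun-type) scheme, sketched here for a generic semi-discrete operator L(u); the reactive Euler spatial discretization is not reproduced.

```python
# Sketch of second-order TVD Runge-Kutta (Heun) time stepping:
#   u1      = u^n + dt * L(u^n)
#   u^{n+1} = 1/2 * u^n + 1/2 * (u1 + dt * L(u1))
# Each stage is a convex combination of forward-Euler steps, which is
# what preserves the TVD property of the underlying spatial scheme.

def tvd_rk2_step(u, L, dt):
    u1 = [ui + dt * li for ui, li in zip(u, L(u))]
    return [0.5 * ui + 0.5 * (v + dt * li)
            for ui, v, li in zip(u, u1, L(u1))]

# Check on linear decay du/dt = -u: one step reproduces the second-order
# Taylor expansion 1 - dt + dt^2/2 exactly (here 0.905 for dt = 0.1).
decay = lambda u: [-x for x in u]
u = tvd_rk2_step([1.0], decay, dt=0.1)
```

In a stiff reactive setting like the ZND model, the explicit stage operator would typically be paired with the h-adaptive refinement described above to resolve the thin reaction zone.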
Li, Xianfeng; Snyder, James A; Stuart, Steven J; Latour, Robert A
2015-10-14
The recently developed "temperature intervals with global exchange of replicas" (TIGER2) accelerated sampling method is found to have inaccuracies when applied to systems with explicit solvation. This inaccuracy is due to the energy fluctuations of the solvent, which cause the sampling method to be less sensitive to the energy fluctuations of the solute. In the present work, the problem of the TIGER2 method is addressed in detail and a modification to the sampling method is introduced to correct this problem. The modified method is called "TIGER2 with solvent energy averaging," or TIGER2A. This new method overcomes the sampling problem with the TIGER2 algorithm and is able to closely approximate Boltzmann-weighted sampling of molecular systems with explicit solvation. The difference in performance between the TIGER2 and TIGER2A methods is demonstrated by comparing them against analytical results for simple one-dimensional models, against replica exchange molecular dynamics (REMD) simulations for sampling the conformation of alanine dipeptide and the folding behavior of (AAQAA)3 peptide in aqueous solution, and by comparing their performance in sampling the behavior of hen egg-white lysozyme in aqueous solution. The new TIGER2A method solves the problem caused by solvent energy fluctuations in TIGER2 while maintaining the two important characteristics of TIGER2, i.e., (1) using multiple replicas sampled at different temperature levels to help systems efficiently escape from local potential energy minima and (2) enabling the number of replicas used for a simulation to be independent of the size of the molecular system, thus providing an accelerated sampling method that can be used to efficiently sample systems considered too large for the application of conventional temperature REMD.
Thampi, Smitha V.; Bagiya, Mala S.; Chakrabarty, D.; Acharya, Y. B.; Yamamoto, M.
2014-12-01
A GNU Radio Beacon Receiver (GRBR) system for total electron content (TEC) measurements using 150 and 400 MHz transmissions from Low-Earth Orbiting Satellites (LEOS) was fabricated in-house and has been operational at Ahmedabad (23.04°N, 72.54°E geographic, dip latitude 17°N) since May 2013. The system receives the 150 and 400 MHz transmissions from high-inclination LEOS. The first few days of observations are presented in this work to bring out the efficacy of an ensemble average method for converting relative TECs to absolute TECs. This method is a modified version of the differential Doppler-based method proposed by de Mendonca (1962) and is suitable even for ionospheric regions with large spatial gradients. Comparison with TECs derived from a collocated GPS receiver shows that the absolute TECs estimated by this method are reliable over regions with large spatial gradients. The method is useful even when only one receiving station is available. The differences between these observations are discussed to bring out the importance of the spatial separation between the ionospheric pierce points of these satellites. A few examples of the latitudinal variation of TEC at different local times from GRBR measurements are also presented, which demonstrates the potential of radio beacon measurements for capturing the large-scale plasma transport processes in the low-latitude ionosphere.
Topology optimization of heat conduction problems using the finite volume method
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Bendsøe, Martin P.; Sigmund, Ole
2006-01-01
This note addresses the use of the finite volume method (FVM) for topology optimization of a heat conduction problem. Issues pertaining to the proper choice of cost functions, sensitivity analysis and example test problems are used to illustrate the effect of applying the FVM as an analysis tool ... checkerboards do not form during the topology optimization process ...
Institute of Scientific and Technical Information of China (English)
Hong-ying Man; Zhong-ci Shi
2006-01-01
In this paper, we discuss the finite volume element method of P1-nonconforming quadrilateral element for elliptic problems and obtain optimal error estimates for general quadrilateral partition. An optimal cascadic multigrid algorithm is proposed to solve the nonsymmetric large-scale system resulting from such discretization. Numerical experiments are reported to support our theoretical results.
An analytical method using solid phase extraction (SPE) and analysis by gas chromatography/mass spectrometry (GC/MS) was developed for the trace determination of a variety of agricultural pesticides and selected transformation products in large-volume high-elevation lake water sa...
Wind deficit model in a wind farm using finite volume method
DEFF Research Database (Denmark)
Soleimanzadeh, Maryam; Wisniewski, Rafal
2010-01-01
A wind deficit model for wind farms is developed in this work using the finite volume method. The main question addressed is how to approximate the wind speed in the vicinity of each wind turbine in a farm. The procedure followed is to solve the governing equations of flow for the whole ...
DEFF Research Database (Denmark)
Thorborg, Jesper
The objective of this thesis has been to improve and further develop the existing staggered grid control volume formulation of the thermomechanical equations. During the last ten years the method has proven to be efficient and accurate even for calculation on large structures. The application of ...
The Meshfree Finite Volume Method with application to multi-phase porous media models
Foy, Brody H.; Perré, Patrick; Turner, Ian
2017-03-01
Numerical methods form a cornerstone of the analysis and investigation of mathematical models for physical processes. Many classical numerical schemes rely on the application of strict meshing structures to generate accurate solutions, which in some applications are an infeasible constraint. Within this paper we outline a new meshfree numerical scheme, which we call the Meshfree Finite Volume Method (MFVM). The MFVM uses interpolants to approximate fluxes in a disjoint finite volume scheme, allowing for the accurate solution of strong-form PDEs. We present a derivation of the MFVM, and give error bounds on the spatial and temporal approximations used within the scheme. We present a wide variety of applications of the method, showing key features, and advantages over traditional meshed techniques. We close with an application of the method to a non-linear multi-phase wood drying model, showing the potential for solving numerically challenging problems.
Method and system for determining a volume of an object from two-dimensional images
Abercrombie, Robert K [Knoxville, TN]; Schlicher, Bob G [Portsmouth, NH]
2010-08-10
The invention provides a method, and a computer program stored in a tangible medium, for automatically determining the volume of three-dimensional objects represented in two-dimensional images: by acquiring at least two two-dimensional digitized images, by analyzing the two-dimensional images to identify reference points and geometric patterns, by determining distances between the reference points and the component objects utilizing reference data provided for the three-dimensional object, and by calculating a volume for the three-dimensional object.
Volume Dispersion of Point Sets and Quasi-Monte Carlo Methods
Institute of Scientific and Technical Information of China (English)
[No author listed]
2000-01-01
Measures of irregularity of a point set or sequence, such as discrepancy and dispersion, play a central role in quasi-Monte Carlo methods. In this paper, we introduce and study a new measure of irregularity, called volume dispersion. It is a measure of deviation of point sets from the uniform distribution. We then generalize the concept of volume dispersion to more general cases as measures of representation of point sets for general probability distributions. Various relations among these measures and the traditional discrepancy and dispersion are investigated.
An extension of Lüscher's finite volume method above inelastic thresholds (formalism)
Ishii, Noriyoshi
2010-01-01
An extension of Lüscher's finite volume method above inelastic thresholds is proposed. It is obtained by extending the procedure recently proposed by the HAL-QCD Collaboration for a single-channel system. Focusing on the asymptotic behavior of the (equal-time) Nambu-Bethe-Salpeter (NBS) wave functions near spatial infinity, a coupled-channel extension of the effective Schrödinger equation is constructed by introducing an energy-independent interaction kernel. Because the NBS wave functions carry the T-matrix information at long distance, the S-matrix can be obtained by solving the coupled-channel effective Schrödinger equation in the infinite volume.
Finite Volume Evolution Galerkin Methods for the Shallow Water Equations with Dry Beds
Bollermann, Andreas; Noelle, Sebastian; Medvidová, Maria Lukáčová -
2015-01-01
We present a new Finite Volume Evolution Galerkin (FVEG) scheme for the solution of the shallow water equations (SWE) with the bottom topography as a source term. Our new scheme will be based on the FVEG methods presented in (Lukáčová, Noelle and Kraft, J. Comp. Phys. 221, 2007), but adds the possibility to handle dry boundaries. The most important aspect is to preserve the positivity of the water height. We present a general approach to ensure this for arbitrary finite volume schemes...
Directory of Open Access Journals (Sweden)
Arheden Håkan
2011-04-01
Full Text Available Abstract Background Functional and morphological changes of the heart influence blood flow patterns. Therefore, flow patterns may carry diagnostic and prognostic information. Three-dimensional, time-resolved, three-directional phase contrast cardiovascular magnetic resonance (4D PC-CMR can image flow patterns with unique detail, and using new flow visualization methods may lead to new insights. The aim of this study is to present and validate a novel visualization method with a quantitative potential for blood flow from 4D PC-CMR, called Volume Tracking, and investigate if Volume Tracking complements particle tracing, the most common visualization method used today. Methods Eight healthy volunteers and one patient with a large apical left ventricular aneurysm underwent 4D PC-CMR flow imaging of the whole heart. Volume Tracking and particle tracing visualizations were compared visually side-by-side in a visualization software package. To validate Volume Tracking, the number of particle traces that agreed with the Volume Tracking visualizations was counted and expressed as a percentage of total released particles in mid-diastole and end-diastole respectively. Two independent observers described blood flow patterns in the left ventricle using Volume Tracking visualizations. Results Volume Tracking was feasible in all eight healthy volunteers and in the patient. Visually, Volume Tracking and particle tracing are complementary methods, showing different aspects of the flow. When validated against particle tracing, on average 90.5% and 87.8% of the particles agreed with the Volume Tracking surface in mid-diastole and end-diastole respectively. Inflow patterns in the left ventricle varied between the subjects, with excellent agreement between observers. The left ventricular inflow pattern in the patient differed from the healthy subjects. Conclusion Volume Tracking is a new visualization method for blood flow measured by 4D PC-CMR. Volume Tracking
Directory of Open Access Journals (Sweden)
Joko Siswantoro
2014-11-01
Full Text Available Volume is an important issue in the production and processing of food products. Traditionally, volume is measured by the water displacement method, based on Archimedes' principle, but this method is inaccurate and considered destructive. Computer vision offers an accurate and nondestructive alternative for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was applied to approximate the volume of the object. The simulation results show that the algorithm achieves high accuracy and precision in volume measurement.
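The silhouette-based Monte Carlo idea can be sketched as follows: a random point in the bounding box counts toward the volume only if its projection falls inside every silhouette (the visual hull). The object, the three orthogonal views, and the sample count below are illustrative, not the paper's five-view setup.

```python
import random

def estimate_volume(silhouette_tests, bounds, n_samples, seed=0):
    """Monte Carlo visual-hull volume: a sample point counts as inside the
    object if its projection lies inside every 2D silhouette."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box_volume = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = 0
    for _ in range(n_samples):
        p = (rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        if all(test(p) for test in silhouette_tests):
            hits += 1
    return box_volume * hits / n_samples

# Toy object: an axis-aligned cube of side 1, whose three orthogonal
# silhouettes are unit squares (so the visual hull equals the cube itself).
tests = [
    lambda p: abs(p[0]) <= 0.5 and abs(p[1]) <= 0.5,  # view along z
    lambda p: abs(p[0]) <= 0.5 and abs(p[2]) <= 0.5,  # view along y
    lambda p: abs(p[1]) <= 0.5 and abs(p[2]) <= 0.5,  # view along x
]
vol = estimate_volume(tests, ((-1, 1), (-1, 1), (-1, 1)), 100000)
```

For non-convex objects the visual hull over-approximates the true volume, which is why accuracy in the paper depends on the choice and number of views.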
ACARP Project C10059. ACARP manual of modern coal testing methods. Volume 1: The manual
Energy Technology Data Exchange (ETDEWEB)
Sakurovs, R.; Creelman, R.; Pohl, J.; Juniper, L. [CSIRO Energy Technology, Sydney, NSW (Australia)
2002-07-01
The Manual summarises the purpose, applicability, and limitations of a range of standard and modern coal testing methods that have potential to assist the coal company technologist to better evaluate coal performance. The first volume sets out the Modern Coal Testing Methods in summarised form that can be used as a quick guide to practitioners to assist in selecting the best technique to solve their problems.
A new volume conservation enforcement method for PLIC reconstruction in general convex grids
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2016-07-01
A comprehensive study is made of methods for resolving the volume conservation enforcement problem in the PLIC reconstruction of an interface in general 3D convex grids. Different procedures to bracket the solution when solving the problem using previous standard methods are analyzed in detail. A new interpolation bracketing procedure and an improved analytical method to find the interface plane constant are proposed. These techniques are combined in a new method to enforce volume conservation, which does not require the sequential polyhedra truncation operations typically used in standard methods. The new methods have been implemented into existing geometrical routines described in López and Hernández [15], which are further improved by using more efficient formulae to compute areas and volumes of general convex 2D and 3D polytopes. Different tests using regular and irregular cell geometries are carried out to demonstrate the robustness and the substantial improvement in computational efficiency of the proposed techniques, which increase the computation speed of the mentioned routines by up to 3 times for the 3D problems considered in this work.
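A minimal sketch of the bracketing step at the heart of volume conservation enforcement: given the interface normal and a target volume fraction in a unit cubic cell, the plane constant is bisected between its bounds at the cell vertices. The analytical volume formulas of the paper are replaced here by simple point sampling, so this is an illustration of the bracketing idea only.

```python
import numpy as np

def cut_fraction(normal, d, pts):
    # Fraction of sample points on the "fluid" side of the plane n·x <= d.
    return float(np.mean(pts @ normal <= d))

def plane_constant_for_fraction(normal, target, tol=1e-4, n_grid=40):
    """Bisect the PLIC plane constant d in a unit cube so that the cut
    volume fraction matches `target`. The volume fraction is estimated by
    point sampling here; the paper derives analytical formulas instead."""
    g = (np.arange(n_grid) + 0.5) / n_grid
    pts = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
    verts = np.stack(np.meshgrid([0.0, 1.0], [0.0, 1.0], [0.0, 1.0]),
                     axis=-1).reshape(-1, 3)
    dots = verts @ normal
    lo, hi = dots.min(), dots.max()   # bracket: fraction runs from 0 to 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cut_fraction(normal, mid, pts) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

normal = np.array([0.0, 0.0, 1.0])    # horizontal interface
d = plane_constant_for_fraction(normal, target=0.25)
```

The vertex dot products give a guaranteed initial bracket, which is the role the paper's interpolation bracketing procedure refines.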
Huang, Song; Peng, Chien Y; Li, Zhao-Yu; Barth, Aaron J
2016-01-01
Many recent observations and numerical simulations suggest that nearby massive, early-type galaxies were formed through a "two-phase" process. In the proposed second phase, the extended stellar envelope was accumulated through many dry mergers. However, details of the past merger history of present-day ellipticals, such as the typical merger mass ratio, are difficult to constrain observationally. Within the context and assumptions of the two-phase formation scenario, we propose a straightforward method, using photometric data alone, to estimate the average mass ratio of mergers that contributed to the build-up of massive elliptical galaxies. We study a sample of nearby massive elliptical galaxies selected from the Carnegie-Irvine Galaxy Survey, using two-dimensional analysis to decompose their light distribution into an inner, denser component plus an extended, outer envelope, each having a different optical color. The combination of these two substructures accurately recovers the negative color gradient exhi...
Chen, Feiyu; Bakic, Predrag R; Maidment, Andrew D A; Jensen, Shane T; Shi, Xiquan; Pokrajac, David D
2015-10-01
A modification to our previous simulation of breast anatomy is proposed to improve the quality of simulated x-ray projection images. The image quality is affected by the voxel size of the simulation. Large voxels can cause notable spatial quantization artifacts; small voxels extend the generation time and increase the memory requirements. An improvement in image quality is achievable without reducing voxel size by simulating partial volume averaging, in which voxels containing more than one simulated tissue type are allowed. The linear x-ray attenuation coefficient of a voxel is then the sum of the linear attenuation coefficients weighted by the voxel subvolume occupied by each tissue type. A local planar approximation of the boundary surface is employed. In the two-material case, the partial volume in each voxel is computed by decomposition into up to four simple geometric shapes. In the three-material case, by application of the Gauss-Ostrogradsky theorem, the 3D partial volume problem is converted into one of a few simpler 2D surface area problems. We illustrate the benefits of the proposed methodology on simulated x-ray projections. An efficient encoding scheme is proposed for the type and proportion of simulated tissues in each voxel. Monte Carlo simulation was used to evaluate the quantitative error of our approximation algorithms.
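The partial-volume weighting itself reduces to a subvolume-weighted sum of linear attenuation coefficients per voxel. A minimal sketch, with hypothetical tissue names and coefficient values (not the paper's calibration):

```python
def voxel_attenuation(fractions, mu):
    """Effective linear attenuation coefficient of a voxel as the
    subvolume-weighted sum over the tissue types it contains."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(frac * mu[tissue] for tissue, frac in fractions.items())

# Illustrative (hypothetical) coefficients in 1/cm at some fixed energy.
mu = {"adipose": 0.21, "glandular": 0.27}
# A boundary voxel that is 60% adipose and 40% glandular tissue.
mu_voxel = voxel_attenuation({"adipose": 0.6, "glandular": 0.4}, mu)
```

The hard part addressed by the paper is computing those subvolume fractions geometrically from the local planar boundary approximation; the weighted sum above is the final step.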
Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J
2017-10-01
A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, requires less equipment, and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p>0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
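The Fick's-first-law prediction underlying time-weighted-average passive sampling can be sketched as follows. Only the 30 min sampling time and 5 mm retraction depth come from the abstract; the collected amount, diffusivity, and needle opening area are hypothetical round numbers for illustration.

```python
def twa_concentration(n_collected, Z, D, A, t):
    """Invert Fick's first law for a retracted-SPME sampler: the analyte
    amount collected over time t is n = D * A * (C / Z) * t, where Z is the
    retraction (diffusion path) depth and A the needle opening area, so the
    time-weighted-average concentration is C = n * Z / (D * A * t)."""
    return n_collected * Z / (D * A * t)

# Hypothetical numbers for illustration (not the paper's calibration values).
C = twa_concentration(n_collected=2.0e-12,  # mol collected on the fiber
                      Z=0.005,              # m   (5 mm retraction depth)
                      D=8.0e-6,             # m^2/s, gas-phase diffusivity
                      A=8.0e-7,             # m^2, needle opening area
                      t=1800.0)             # s   (30 min sample)
```

The model assumes a steady linear concentration gradient along the retraction gap and a fiber that acts as a zero sink, which is why the paper corrects for adsorption to the fiber housing.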
A HIGH RESOLUTION FINITE VOLUME METHOD FOR SOLVING SHALLOW WATER EQUATIONS
Institute of Scientific and Technical Information of China (English)
[No author listed]
2000-01-01
A high-resolution finite volume numerical method for solving the shallow water equations is developed in this paper. In order to extend the finite difference TVD scheme to the finite volume method, a new geometry and topology of control bodies is defined by considering the corresponding relationships between nodes and elements. The solver is implemented on arbitrary quadrilateral meshes and their satellite elements, and is based on a second-order hybrid TVD scheme for the spatial discretization and a two-step Runge-Kutta method for the time discretization. It is then used to deal with two typical dam-break problems, and very satisfactory results are obtained compared with other numerical solutions. It can be considered an efficient tool for the computation of shallow water problems, especially those involving discontinuities, subcritical and supercritical flows, and complex geometries.
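A minimal 1D finite volume dam-break sketch illustrates the kind of computation described. This is not the paper's scheme: a first-order Rusanov (local Lax-Friedrichs) flux stands in for the second-order hybrid TVD scheme, and the grid and initial states are made up.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def flux(h, hu):
    # Physical flux of the 1D shallow water equations for U = (h, hu).
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov_step(h, hu, dx, dt):
    # Rusanov (local Lax-Friedrichs) interface fluxes; boundary cells are
    # left untouched, which is valid while waves stay in the interior.
    U = np.array([h, hu])
    F = flux(h, hu)
    c = np.abs(hu / h) + np.sqrt(g * h)       # max local wave speed
    a = np.maximum(c[:-1], c[1:])
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    Un = U.copy()
    Un[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
    return Un[0], Un[1]

# Dam break: h = 2 for x < 0.5, h = 1 otherwise, fluid initially at rest.
n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
h = np.where(x < 0.5, 2.0, 1.0)
hu = np.zeros(n)
t = 0.0
while t < 0.05:
    dt = 0.4 * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))  # CFL = 0.4
    h, hu = rusanov_step(h, hu, dx, dt)
    t += dt
```

After the dam breaks, a rarefaction moves left and a shock moves right, with the water height staying between the two initial states, the discontinuous behavior the paper's high-resolution scheme is designed to capture sharply.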
A New Class of Non-Linear, Finite-Volume Methods for Vlasov Simulation
Energy Technology Data Exchange (ETDEWEB)
Banks, J W; Hittinger, J A
2009-11-24
Methods for the numerical discretization of the Vlasov equation should efficiently use the phase space discretization and should introduce only enough numerical dissipation to promote stability and control oscillations. A new high-order, non-linear, finite-volume algorithm for the Vlasov equation that discretely conserves particle number and controls oscillations is presented. The method is fourth-order in space and time in well-resolved regions, but smoothly reduces to a third-order upwind scheme as features become poorly resolved. The new scheme is applied to several standard problems for the Vlasov-Poisson system, and the results are compared with those from other finite-volume approaches, including an artificial viscosity scheme and the Piecewise Parabolic Method. It is shown that the new scheme is able to control oscillations while preserving a higher degree of fidelity of the solution than the other approaches.
FINITE VOLUME METHOD FOR SIMULATION OF VISCOELASTIC FLOW THROUGH AN EXPANSION CHANNEL
Institute of Scientific and Technical Information of China (English)
FU Chun-quan; JIANG Hai-mei; YIN Hong-jun; SU Yu-chi; ZENG Ye-ming
2009-01-01
A finite volume method for the numerical solution of viscoelastic flows is given. The flow of a differential Upper-Convected Maxwell (UCM) fluid through an abrupt expansion has been chosen as a prototype example. The conservation and constitutive equations are solved using the Finite Volume Method (FVM) in a staggered grid with an upwind scheme for the viscoelastic stresses and a hybrid scheme for the velocities. An enhanced-in-speed pressure-correction algorithm is used and a method for handling the source term in the momentum equations is employed. Improved accuracy is achieved by a special discretization of the boundary conditions. Stable solutions are obtained for higher Weissenberg number (We), further extending the range of simulations with the FVM. Numerical results show the viscoelasticity of polymer solutions is the main factor influencing the sweep efficiency.
Comparison among methods for the assessment of deadwood volume in a former holm oak coppice
Directory of Open Access Journals (Sweden)
Bianchi L
2013-04-01
Full Text Available The paper aims to compare three methods for the assessment of deadwood volume, i.e., LIS (Line Intersect System), FAS (Fixed Area Sampling), and WM (Weighing Method). The control data are the outputs of xylometric measurement. The study was carried out in a former holm oak (Quercus ilex L.) coppice located in the Montioni nature park in southern Tuscany. LIS and FAS significantly overestimated the quantity of deadwood (+12% and +50%, respectively), and the error becomes larger as the minimum sampling threshold increases. The WM, despite the operational complexity of its application, led to the most promising and precise results.
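Line intersect estimates of downed deadwood are conventionally computed with Van Wagner's formula V = π² Σ dᵢ² / (8L), summing the squared diameters of pieces crossed by a transect of length L. The formula is the standard one for line intersect sampling (not quoted from this paper), and the transect data below are hypothetical.

```python
import math

def lis_volume_per_area(diameters_m, transect_length_m):
    """Van Wagner's line-intersect estimator: V = pi^2 * sum(d_i^2) / (8*L),
    volume of downed deadwood per unit ground area (m^3 per m^2), with piece
    diameters d_i and transect length L both in metres."""
    return (math.pi ** 2
            * sum(d * d for d in diameters_m)
            / (8.0 * transect_length_m))

# Hypothetical transect: a 100 m line crossing pieces of 10, 15 and 20 cm
# diameter at the intersection points.
v = lis_volume_per_area([0.10, 0.15, 0.20], 100.0)
v_per_ha = v * 10000.0  # convert m^3/m^2 to m^3 per hectare
```

A minimum diameter threshold decides which crossed pieces enter the sum, which is how the sampling threshold mentioned in the abstract influences the estimate.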
Application of the control volume mixed finite element method to a triangular discretization
Naff, R.L.
2012-01-01
A two-dimensional control volume mixed finite element method is applied to the elliptic equation. Discretization of the computational domain is based on triangular elements. Shape functions and test functions are formulated on the basis of an equilateral reference triangle with unit edges. A pressure support based on the linear interpolation of elemental edge pressures is used in this formulation. Comparisons are made between results from the standard mixed finite element method and this control volume mixed finite element method. Published 2011. © 2012 John Wiley & Sons, Ltd. This article is a US Government work and is in the public domain in the USA.
Quinlan, Nathan J.; Lobovský, Libor; Nestor, Ruairi M.
2014-06-01
The Finite Volume Particle Method (FVPM) is a meshless method based on a definition of interparticle area which is closely analogous to cell face area in the classical finite volume method. In previous work, the interparticle area has been computed by numerical integration, which is a source of error and is extremely expensive. We show that if the particle weight or kernel function is defined as a discontinuous top-hat function, the particle interaction vectors may be evaluated exactly and efficiently. The new formulation reduces overall computational time by a factor between 6.4 and 8.2. In numerical experiments on a viscous flow with an analytical solution, the method converges under all conditions. Significantly, in contrast with standard FVPM and SPH, error depends on particle size but not on particle overlap (as long as the computational domain is completely covered by particles). The new method is shown to be superior to standard FVPM for shock tube flow and inviscid steady transonic flow. In benchmarking on a viscous multiphase flow application, FVPM with exact interparticle area is shown to be competitive with a mesh-based volume-of-fluid solver in terms of computational time required to resolve the structure of an interface.
A spatial discretization of the MHD equations based on the finite volume - spectral method
Energy Technology Data Exchange (ETDEWEB)
Miyoshi, Takahiro [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
2000-05-01
Based on the finite volume - spectral method, we present new discretization formulae for the spatial differential operators in the full system of compressible MHD equations. In this approach, the cell-centered finite volume method is adopted in a bounded plane (the poloidal plane), while the spectral method is applied to the differential with respect to the periodic direction perpendicular to the poloidal plane (the toroidal direction). An unstructured grid system composed of arbitrary triangular elements is utilized for constructing the cell-centered finite volume method. In order to maintain the divergence-free constraint of the magnetic field numerically, only the poloidal component of the rotation is defined at the three edges of each triangular element. This poloidal component is evaluated under the assumption that the toroidal component of the operated vector times the radius, RA_φ, is linearly distributed in the element. The present method will be applied to nonlinear MHD dynamics in a realistic torus geometry without numerical singularities. (author)
Institute of Scientific and Technical Information of China (English)
Xu Qianghong; Yan Jing; Cai Guolong; Chen Jin; Li Li; Hu Caibao
2014-01-01
Background: Few studies have reported the effect of different volume responsiveness evaluation methods on volume therapy results and prognosis. This study investigated the effect of two volume responsiveness evaluation methods, stroke volume variation (SVV) and stroke volume changes before and after passive leg raising (PLR-ΔSV), on fluid resuscitation and prognosis in septic shock patients. Methods: Septic shock patients admitted to the Department of Critical Care Medicine of Zhejiang Hospital, China, from March 2011 to March 2013, who were under controlled ventilation and without arrhythmia, were studied. Patients were randomly assigned to the SVV group or the PLR-ΔSV group. The SVV group used Pulse Indicator Continuous Cardiac Output monitoring of SVV, with responsiveness defined as SVV >12%. The PLR-ΔSV group used the change in stroke volume before and after PLR as the indicator, with responsiveness defined as ΔSV >15%. Six hours after fluid resuscitation, changes in tissue perfusion indicators (lactate, lactate clearance rate, central venous oxygen saturation (ScvO2), base excess (BE)), organ function indicators (white blood cell count, neutrophil percentage, platelet count, total protein, albumin, alanine aminotransferase, total and direct bilirubin, blood urea nitrogen, serum creatinine, serum creatine kinase, oxygenation index), fluid balance (6- and 24-hour fluid input) and the use of cardiotonic drugs (dobutamine), and prognostic indicators (the time and rate of achieving early goal-directed therapy (EGDT) targets, duration of mechanical ventilation and intensive care unit stay, and 28-day mortality) were observed. Results: Six hours after fluid resuscitation, there were no significant differences in temperature, heart rate, blood pressure, SpO2, organ function indicators, or tissue perfusion indicators between the two groups (P >0.05). The 6- and 24-hour fluid input was slightly less in the SVV group than in the PLR-ΔSV group, but the difference was not statistically significant (P >0
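The SVV responsiveness criterion can be computed directly from beat-to-beat stroke volumes over one respiratory cycle as (SVmax − SVmin)/SVmean; the stroke volume values below are hypothetical, with only the >12% threshold taken from the study.

```python
def stroke_volume_variation(sv_beats):
    """Stroke volume variation over one respiratory cycle:
    SVV = 100 * (SVmax - SVmin) / SVmean, in percent."""
    sv_max, sv_min = max(sv_beats), min(sv_beats)
    sv_mean = sum(sv_beats) / len(sv_beats)
    return 100.0 * (sv_max - sv_min) / sv_mean

# Hypothetical beat-to-beat stroke volumes (mL) under controlled ventilation.
svv = stroke_volume_variation([70, 64, 58, 68])
responsive = svv > 12.0  # responsiveness threshold used in the study
```

Note that the index is only valid under controlled ventilation without arrhythmia, which is exactly the inclusion criterion the study imposes.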
Leung, Hoi Tik Alvin; Bignucolo, Olivier; Aregger, Regula; Dames, Sonja A; Mazur, Adam; Bernèche, Simon; Grzesiek, Stephan
2016-01-12
Flexible polypeptides such as unfolded proteins may access an astronomical number of conformations. The most advanced simulations of such states usually comprise tens of thousands of individual structures. In principle, a comparison of parameters predicted from such ensembles to experimental data provides a measure of their quality. In practice, analyses that go beyond the comparison of unbiased average data have been impossible to carry out on the entirety of such very large ensembles and have, therefore, been restricted to much smaller subensembles and/or nondeterministic algorithms. Here, we show that such very large ensembles, on the order of 10⁴ to 10⁵ conformations, can be analyzed in full by a maximum entropy fit to experimental average data. Maximizing the entropy of the population weights of individual conformations under experimental χ² constraints is a convex optimization problem, which can be solved in a very efficient and robust manner to a unique global solution even for very large ensembles. Since the population weights can be determined reliably, the reweighted full ensemble presents the best model of the combined information from simulation and experiment. Furthermore, since the reduction of entropy due to the experimental constraints is well-defined, its value provides a robust measure of the information content of the experimental data relative to the simulated ensemble and an indication for the density of the sampling of conformational space. The method is applied to the reweighting of a 35,000 frame molecular dynamics trajectory of the nonapeptide EGAAWAASS by extensive NMR ³J coupling and RDC data. The analysis shows that RDCs provide significantly more information than ³J couplings and that a discontinuity in the RDC pattern at the central tryptophan is caused by a cluster of helical conformations. Reweighting factors are moderate and consistent with errors in MD force fields of less than 3 kT. The required reweighting is larger for
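The maximum-entropy reweighting idea can be sketched for the simplest case of a single equality constraint, where the maxent solution is an exponential tilt of the weights and the Lagrange multiplier can be found by bisection. This is a simplified stand-in for the paper's multi-observable χ²-constrained convex optimization, with a toy ensemble in place of MD frames.

```python
import math

def maxent_reweight(f, target, lam_lo=-50.0, lam_hi=50.0, tol=1e-10):
    """Maximum-entropy reweighting of an ensemble so that the weighted mean
    of one predicted observable f matches an experimental target. For a
    single equality constraint the maxent solution is an exponential tilt
    w_i ∝ exp(-lam * f_i); the multiplier lam is found by bisection."""
    shift = sum(f) / len(f)  # center the exponent for numerical stability

    def weighted_mean(lam):
        w = [math.exp(-lam * (fi - shift)) for fi in f]
        return sum(wi * fi for wi, fi in zip(w, f)) / sum(w)

    # weighted_mean is monotonically decreasing in lam, so bisection works.
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if weighted_mean(mid) > target:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    w = [math.exp(-lam * (fi - shift)) for fi in f]
    s = sum(w)
    return [wi / s for wi in w]

# Toy ensemble of predicted observables, tilted toward an "experimental" mean.
samples = [5.0, 6.0, 7.0, 8.0, 9.0]
weights = maxent_reweight(samples, target=6.5)
avg = sum(w * f for w, f in zip(weights, samples))
```

Because the objective is strictly convex in the weights, the solution is unique, which is the property that lets the paper treat ensembles of 10⁴ to 10⁵ conformations in full.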
Energy Technology Data Exchange (ETDEWEB)
Yassi, Nawaf; Campbell, Bruce C.V.; Davis, Stephen M.; Bivard, Andrew [The University of Melbourne, Departments of Medicine and Neurology, Melbourne Brain Centre at The Royal Melbourne Hospital, Parkville, Victoria (Australia)]; Moffat, Bradford A.; Steward, Christopher; Desmond, Patricia M. [The University of Melbourne, Department of Radiology, The Royal Melbourne Hospital, Parkville (Australia)]; Churilov, Leonid [The University of Melbourne, The Florey Institute of Neurosciences and Mental Health, Parkville (Australia)]; Parsons, Mark W. [University of Newcastle and Hunter Medical Research Institute, Priority Research Centre for Translational Neuroscience and Mental Health, Newcastle (Australia)]
2015-07-15
Longitudinal brain volume changes have been investigated in a number of cerebral disorders as a surrogate marker of clinical outcome. In stroke, unique methodological challenges are posed by dynamic structural changes occurring after onset, particularly those relating to the infarct lesion. We aimed to evaluate agreement between different analysis methods for the measurement of post-stroke brain volume change, and to explore technical challenges inherent to these methods. Fifteen patients with anterior circulation stroke underwent magnetic resonance imaging within 1 week of onset and at 1 and 3 months. Whole-brain as well as grey- and white-matter volume were estimated separately using both an intensity-based and a surface watershed-based algorithm. In the case of the intensity-based algorithm, the analysis was also performed with and without exclusion of the infarct lesion. Due to the effects of peri-infarct edema at the baseline scan, longitudinal volume change was measured as percentage change between the 1 and 3-month scans. Intra-class and concordance correlation coefficients were used to assess agreement between the different analysis methods. Reduced major axis regression was used to inspect the nature of bias between measurements. Overall agreement between methods was modest with strong disagreement between some techniques. Measurements were variably impacted by procedures performed to account for infarct lesions. Improvements in volumetric methods and consensus between methodologies employed in different studies are necessary in order to increase the validity of conclusions derived from post-stroke cerebral volumetric studies. Readers should be aware of the potential impact of different methods on study conclusions. (orig.)
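Agreement between two volumetric analysis methods, as assessed above with concordance correlation, can be sketched with Lin's concordance correlation coefficient, which penalizes both scatter and systematic offset. The paired percentage volume-change measurements below are hypothetical.

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement
    methods: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Equals 1 only for perfect agreement on the identity line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical percentage brain-volume changes from two analysis methods.
m1 = [-0.8, -0.3, 0.1, -1.2, -0.5]
m2 = [-0.7, -0.4, 0.0, -1.0, -0.6]
ccc = concordance_ccc(m1, m2)
```

Unlike Pearson's r, the mean-difference term in the denominator makes the coefficient drop when one method is systematically biased relative to the other, which is why it is suited to method-agreement studies like this one.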
Connection method of separated luminal regions of intestine from CT volumes
Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Hirooka, Yoshiki; Goto, Hidemi; Mori, Kensaku
2015-03-01
This paper proposes a method for connecting separated luminal regions of the intestine to support Crohn's disease diagnosis. Crohn's disease is an inflammatory disease of the digestive tract. It is normally diagnosed with capsule or conventional endoscopy; however, if intestinal stenosis occurs, parts of the intestine cannot be observed, because endoscopes cannot pass through the stenosed sections. CT image-based diagnosis has been developed as an alternative, since it enables physicians to observe the entire intestine even when stenosed parts exist. CAD systems for Crohn's disease using CT volumes have recently been developed; such systems must reconstruct the separated luminal regions of the intestine before analysis. We propose a method for connecting the separated luminal regions segmented from CT volumes. The luminal regions of the intestine are segmented from a CT volume, and their centerlines are calculated with a thinning process. We enumerate all possible sequences of the centerline segments. In this work, we newly introduce a condition on the distance between connected end points of the centerline segments; this condition eliminates unnatural connections of the centerline segments and also reduces processing time. After generating the sequence list of the centerline segments, the correct sequence is selected with an evaluation function, and we connect the luminal regions based on the correct sequence. In experiments on four CT volumes, our method connected 6.5 out of 8.0 centerline segments per case, with shorter processing times than the previous method.
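The distance condition on centerline end points can be illustrated with a short sketch. This is a hypothetical, minimal version: the function names and the threshold value are assumptions, since the paper's actual threshold and evaluation function are not given in the abstract.

```python
import math

def endpoint_gap(seg_a, seg_b):
    """Smallest distance between an end point of centerline segment a
    and an end point of segment b (segments are lists of 3-D points)."""
    return min(math.dist(p, q)
               for p in (seg_a[0], seg_a[-1])
               for q in (seg_b[0], seg_b[-1]))

def may_connect(seg_a, seg_b, max_gap=10.0):
    """Distance condition: keep a candidate sequence only if consecutive
    centerline segments have nearby end points.  The default max_gap is
    an assumed placeholder, not the paper's value."""
    return endpoint_gap(seg_a, seg_b) <= max_gap
```

Pruning candidate sequences with such a predicate before scoring them is what shortens the enumeration: any sequence containing an implausible jump can be discarded early.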
Averaged Lemaître-Tolman-Bondi dynamics
Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried
2016-01-01
We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics that is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor, which provides a derivation of the simplest phenomenological solution of Buchert's equations, one in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model, but it does not adequately describe our "real" Universe.
Yoshizawa, Akira; Nisizima, Shoiti; Shimomura, Yutaka; Kobayashi, Hiromichi; Matsuo, Yuichi; Abe, Hiroyuki; Fujiwara, Hitoshi
2006-03-01
A new methodology for Reynolds-averaged Navier-Stokes modeling is presented on the basis of the amalgamation of heuristic-modeling and turbulence-theory methods. A characteristic turbulence time scale is synthesized in a heuristic manner through the combination of several characteristic time scales. An algebraic model of turbulent-viscosity type for the Reynolds stress is derived from the Reynolds-stress transport equation with the time scale embedded. It is applied to the state of weak spatial and temporal nonequilibrium, and is compared with its theoretical counterpart derived by the two-scale direct-interaction approximation. The synthesized time scale is justified through the agreement of the two expressions derived by these entirely different methods. The derived model is tested in rotating isotropic, channel, and homogeneous-shear flows. It is extended to a nonlinear algebraic model and a supersonic model. The latter is shown to succeed in reproducing the reduction in the growth rate of a free-shear layer flow, without adverse effects on wall-bounded flows such as channel and boundary-layer flows.
A hybrid finite volume finite element method for variable density incompressible flows
Calgaro, Caterina; Creusé, Emmanuel; Goudon, Thierry
2008-04-01
This paper is devoted to the numerical simulation of variable density incompressible flows modeled by the Navier-Stokes system. We introduce a hybrid scheme which combines a finite volume approach for treating the mass conservation equation with a finite element method for the momentum equation and the divergence-free constraint. The key point is the definition of a suitable bridge between the two methods, through the design of a compatibility condition. As a result, the method is very flexible and can deal with unstructured meshes. Several numerical tests are performed to demonstrate the scheme's capabilities. In particular, the evolution of the viscous Rayleigh-Taylor instability is carefully investigated.
Generalized source Finite Volume Method for radiative transfer equation in participating media
Zhang, Biao; Xu, Chuan-Long; Wang, Shi-Min
2017-03-01
Temperature monitoring is very important in a combustion system. In recent years, non-intrusive temperature reconstruction has been explored intensively on the basis of calculating arbitrary directional radiative intensities. In this paper, a new method named the Generalized Source Finite Volume Method (GSFVM) is proposed, based on the radiative transfer equation and the Finite Volume Method (FVM). The method can be used to calculate arbitrary directional radiative intensities and is shown to be accurate and efficient. To verify its performance, six test cases of 1D, 2D, and 3D radiative transfer problems were investigated. The numerical results show that the efficiency of the GSFVM is close to that of the radial basis function interpolation method, but its accuracy and stability are higher. The accuracy of the GSFVM is similar to that of the Backward Monte Carlo (BMC) algorithm, while the time required by the GSFVM is much shorter. Therefore, the GSFVM can be used in temperature reconstruction and to improve the accuracy of the FVM.
Directory of Open Access Journals (Sweden)
Sarvesh Kumar
2014-01-01
The incompressible miscible displacement problem in porous media is modeled by a coupled system of two nonlinear partial differential equations: the pressure-velocity equation and the concentration equation. In this paper, we present a mixed finite volume element method (FVEM) for the approximation of the pressure-velocity equation. Since the modified method of characteristics (MMOC) minimizes the grid orientation effect, we apply a standard FVEM combined with MMOC for the approximation of the concentration equation. A priori error estimates in the L∞(L2) norm are derived for velocity, pressure and concentration. Numerical results are presented to substantiate the validity of the theoretical results.
Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.
2014-04-01
In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods, which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well-known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods; typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended to generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. The CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as the porous media approach of Brinkman penalization does, and is therefore flexible across physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and a heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O
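For concreteness, a minimal 1-D sketch of classical Brinkman (Dirichlet) penalization is given below: a diffusion equation is solved everywhere, and a stiff forcing term drives the solution toward the boundary value inside the obstacle mask. This illustrates only the baseline that CBVP extends; the hyperbolic characteristic terms for Neumann/Robin conditions are not reproduced, and all numerical parameters are illustrative assumptions.

```python
import numpy as np

def penalized_diffusion(n=101, eta=1e-6, t_end=0.05):
    """Explicit 1-D diffusion with classical Brinkman (Dirichlet) volume
    penalization: inside the obstacle mask chi, the term -(chi/eta)*(u - u_bc)
    forces the solution toward the boundary value u_bc.  Domain, obstacle
    location (x >= 0.5), eta, and initial condition are all assumed here.
    """
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    dt = 0.25 * dx * dx                    # stable explicit diffusion step
    chi = (x >= 0.5).astype(float)         # obstacle indicator
    u_bc = 0.0                             # Dirichlet value inside obstacle
    u = np.sin(np.pi * x)                  # initial condition
    t = 0.0
    while t < t_end:
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / (dx * dx)
        # treat the stiff penalization term implicitly so dt stays usable
        u = (u + dt * lap + dt * chi * u_bc / eta) / (1.0 + dt * chi / eta)
        t += dt
    return x, u
```

After integration, the solution inside the obstacle sits near u_bc up to an O(η)-controlled error, which is the penalization-parameter control the abstract refers to.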
Methods to assess area and volume of wounds - a systematic review
DEFF Research Database (Denmark)
Joergensen, Line Bisgaard
2016-01-01
The review covers techniques described since 1994. Studies were identified by searching the electronic databases PubMed, Embase and Cochrane Library. Of the 12 013 studies identified, 43 were included in the review. A total of 30 papers evaluated techniques for measuring wound area and 13 evaluated techniques for measuring wound volume. The six approaches for measuring wound area were the simple ruler method (10 papers), mathematical models (5 papers), manual planimetry (10 papers), digital planimetry (16 papers), stereophotogrammetry (2 papers) and the digital imaging method (20 papers). Of these studies, 10 evaluated accuracy, 15 agreement, 17 reliability and 25 mentioned feasibility. The number of wounds examined in the studies was highly variable (n = 3-260). Studies evaluating techniques for measuring wound volume included between 1 and 50 wounds and evaluated accuracy (4 studies), agreement (6 studies), reliability (8 studies) and feasibility...
INTERVAL FINITE VOLUME METHOD FOR UNCERTAINTY SIMULATION OF TWO-DIMENSIONAL RIVER WATER QUALITY
Institute of Scientific and Technical Information of China (English)
HE Li; ZENG Guang-ming; HUANG Guo-he; LU Hong-wei
2004-01-01
Under interval uncertainties, by incorporating the discretization form of the finite volume method and interval algebra theory, an Interval Finite Volume Method (IFVM) was developed to solve water quality simulation problems for a two-dimensional river when effective data on flow velocity and discharge are lacking. The IFVM was applied to a segment of the Xiangjiang River, because the Hunan Inland Waterway Multipurpose Project could not proceed until its environmental impact assessment was completed. The simulation results suggest that rather apparent pollution zones of BOD5 exist downstream of the Dongqiaogang discharger and of COD downstream of the Xiaoxiangjie discharger, but that the pollution sources have no impact on the safety of the three water plants located in this river segment. Although the developed IFVM remains to be perfected, it is a powerful tool under interval uncertainties for water quality simulation, as studied in this paper, as well as for water environmental impact assessment, risk analysis, and water quality planning.
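The flavour of interval arithmetic inside a finite volume update can be sketched in a few lines. This is a toy single-cell illustration under assumed data, not the IFVM itself: the Courant number is an interval because the flow velocity is only known to lie within a range, and the update propagates lower/upper bounds conservatively.

```python
def imul(a, b):
    """Interval product [a]*[b]: min/max over the four endpoint products."""
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

def upwind_step(c, c_up, courant):
    """One interval upwind update  c <- c - nu*(c - c_up),
    where nu = u*dt/dx is an interval Courant number.  Intervals are
    (lower, upper) tuples; subtraction widens the bounds conservatively."""
    diff = (c[0] - c_up[1], c[1] - c_up[0])   # interval c - c_up
    upd = imul(courant, diff)
    return (c[0] - upd[1], c[1] - upd[0])
```

A cell value known exactly before the step acquires an interval width after it, which is how uncertainty in the velocity field shows up as uncertainty bands on simulated concentrations.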
Finite volume evolution Galerkin (FVEG) methods for three-dimensional wave equation system
Lukácová-Medvid'ová, Maria; Warnecke, Gerald; Zahaykah, Yousef
2004-01-01
The subject of the paper is the derivation of finite volume evolution Galerkin schemes for three-dimensional wave equation system. The aim is to construct methods which take into account all of the infinitely many directions of propagation of bicharacteristics. The idea is to evolve the initial function using the characteristic cone and then to project onto a finite element space. Numerical experiments are presented to demonstrate the accuracy and the multidimensional behaviour of the solutio...
A novel method for the evaluation of uncertainty in dose volume histogram computation
Cutanda-Henriquez, Francisco
2007-01-01
Dose volume histograms are a useful tool in state-of-the-art radiotherapy planning, and it is essential to be aware of their limitations. Dose distributions computed by treatment planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose once the model is optimized. In order to take the effect of uncertainty into account, a probabilistic approach is proposed and a new kind of histogram, the dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a relationship is given for practical computations. This method is applied to a set of dose volume histograms for different regions of interest for 6 brain pat...
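The expected-volume idea admits a compact sketch, using the rectangular point-dose distribution the abstract assumes: for threshold D and a voxel dose d known to ±Δ, P(dose ≥ D) = clip((d + Δ − D)/2Δ, 0, 1), and the expected fractional volume is the mean of these probabilities. Function and argument names are illustrative, not from the paper.

```python
import numpy as np

def expected_dvh(doses, delta, dose_axis):
    """Dose-*expected*-volume histogram: each voxel dose is modelled as
    uniform (rectangular) on [d - delta, d + delta]; for each threshold D
    the expected fractional volume receiving >= D is the mean over voxels
    of P(dose_i >= D)."""
    d = np.asarray(doses, dtype=float)[:, None]        # (voxels, 1)
    D = np.asarray(dose_axis, dtype=float)[None, :]    # (1, thresholds)
    p = np.clip((d + delta - D) / (2.0 * delta), 0.0, 1.0)
    return p.mean(axis=0)                              # expected volume fraction
```

As delta → 0 each probability becomes a step function and the result reduces to the ordinary dose volume histogram.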
Unstructured finite volume method for water impact on a rigid body
Institute of Scientific and Technical Information of China (English)
YU Yan; MING Ping-jian; DUAN Wen-yang
2014-01-01
A new method is presented for water impact simulation, in which the air-water two-phase flow is solved using a pressure-based computational fluid dynamics method. Theoretically, air effects can thus be taken into account in the water-structure interaction. The key point of this method is the capture of the air-water interface, which is treated as a physical discontinuity and can be captured by a well-designed high-order scheme. Based on a normalized variable diagram, a high-order discrete scheme on unstructured grids is realised, so that a numerical method for free surface flow on a fixed grid can be established. The method is implemented in an in-house code, the General Transport Equation Analyzer, an unstructured grid finite volume solver, and is verified with the wedge water-structure interaction problem.
An overlapped grid method for multigrid, finite volume/difference flow solvers: MaGGiE
Baysal, Oktay; Lessard, Victor R.
1990-01-01
The objective is to develop a domain decomposition method via overlapping/embedding the component grids, which is to be used by upwind, multi-grid, finite volume solution algorithms. A computer code, given the name MaGGiE (Multi-Geometry Grid Embedder), is developed to meet this objective. MaGGiE takes independently generated component grids as input, and automatically constructs the composite mesh and interpolation data, which can be used by the finite volume solution methods with or without multigrid convergence acceleration. Six demonstrative examples showing various aspects of the overlap technique are presented and discussed. These cases are used for developing the procedure for overlapping grids of different topologies, and to evaluate the grid connection and interpolation data for finite volume calculations on a composite mesh. Time fluxes are transferred between mesh interfaces using a trilinear interpolation procedure. Conservation losses are minimal at the interfaces using this method. The multi-grid solution algorithm, using the coarser grid connections, improves the convergence time history as compared to the solution on the composite mesh without multi-gridding.
Methods for Improving Volume Stability of Steel Slag as Fine Aggregate
Institute of Scientific and Technical Information of China (English)
LUN Yunxia; ZHOU Mingkai; CAI Xiao; XU Fang
2008-01-01
Suitable methods for enhancing the volume stability of steel slag utilized as fine aggregate were determined. The effects of steam treatment at 100 ℃ and autoclave treatment under 2.0 MPa on the soundness of steel slag sand were investigated by means of powder ratio, linear expansion, and compressive and flexural strength. DTA, EDX, XRD and ethylene glycol methods were employed to analyze both the treated slags and the expansion-susceptible grains. Experimental results indicate that the powder ratio, the content of free lime and the rate of linear expansion can express the improvement in volume stability achieved by the different treatment methods. The steam treatment process cannot ultimately prevent specimens from cracking and losing strength, but mortar made from autoclave-treated slag remains intact after exposure to 80 ℃ hot water for 28 d, and its strength shows no significant decrease. The hydration of over-burnt free lime and of the periclase phase is the main cause of the disintegration or cracking of specimens made from untreated and steam-treated steel slag. The autoclave treatment process is more effective than the steam treatment process in enhancing the volume stability of steel slag.
Harry V. Wiant, Jr.; Michael L. Spangler; John E. Baumgras
2002-01-01
Various taper systems and the centroid method were compared to unbiased volume estimates made by importance sampling for 720 hardwood trees selected throughout the state of West Virginia. Only the centroid method consistently gave volume estimates that did not differ significantly from those made by importance sampling, although some taper equations did well for most...
Pre-sale patterns of railway tickets based on the moving average method
Institute of Scientific and Technical Information of China (English)
刘彦麟; 吕晓艳; 王洪业
2016-01-01
This article studies and analyzes the pre-sale pattern for a 60-day pre-sale period. The number of seats sold for a train is related to the number of days remaining before departure, and the peak pre-sale days differ with the travel date. Outside holiday peaks, the pre-sale volume is largest on the day before travel and on the day of travel. This article uses the moving average method to study this pattern and to predict the daily pre-sale situation, focusing on the forecast results for the day before travel and the day of travel. The forecasts agree well with the actual passenger flow.
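A minimal sketch of the forecasting step follows; the 7-day window is an assumption, since the abstract does not state the paper's smoothing parameters.

```python
def moving_average_forecast(series, window=7):
    """One-step-ahead forecast of daily pre-sale volume with a simple
    moving average: the prediction for the next day is the mean of the
    previous `window` observations."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    recent = series[-window:]
    return sum(recent) / window
```

Applied day by day over the 60-day pre-sale period, such a forecast tracks the slowly varying baseline; the peaks on the day before travel and the day of travel are then analyzed against this baseline.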
Mignone, A
2014-01-01
High-order reconstruction schemes for the solution of hyperbolic conservation laws in orthogonal curvilinear coordinates are revised in the finite volume approach. The formulation employs a piecewise polynomial approximation to the zone-average values to reconstruct left and right interface states from within a computational zone to arbitrary order of accuracy by inverting a Vandermonde-like linear system of equations with spatially varying coefficients. The approach is general and can be used on uniform and non-uniform meshes although explicit expressions are derived for polynomials from second to fifth degree in cylindrical and spherical geometries with uniform grid spacing. It is shown that, in regions of large curvature, the resulting expressions differ considerably from their Cartesian counterparts and that the lack of such corrections can severely degrade the accuracy of the solution close to the coordinate origin. Limiting techniques and monotonicity constraints are revised for conventional reconstruct...
Volume of Fluid (VOF) type advection methods in two-phase flow: a comparative study
Aniszewski, Wojciech; Marek, Maciej
2014-01-01
In this paper, four distinct approaches to the Volume of Fluid (VOF) computational method are compared. Two of the methods are 'simplified' VOF formulations, in that they do not require geometrical interface reconstruction. The assessment is made possible by implementing all four approaches in the same code as switchable options. This allows us to rule out possible influence of other parts of the numerical scheme, be it the discretisation of the Navier-Stokes equations or the chosen approximation of curvature, so that we are left with conclusive arguments because only one factor differs between the compared methods. The comparison is done in the framework of CLSVOF (Coupled Level Set Volume of Fluid), so that all four methods are coupled with a Level Set interface, which is used to compute the pressure jump via the GFM (Ghost Fluid Method). Results presented include static advections and full N-S solutions in laminar and turbulent flows. The paper is aimed at research groups who are implementing VOF methods in their computations or inte...
Evaluating curvature for the volume of fluid method via interface reconstruction
Evrard, Fabien; Denner, Fabian; van Wachem, Berend
2016-11-01
The volume of fluid method (VOF) is widely adopted for the simulation of interfacial flows. A critical step in VOF modelling is to evaluate the local mean curvature of the fluid interface for the computation of surface tension. Most existing curvature evaluation techniques exhibit errors due to the discrete nature of the field they are dealing with, and potentially to the smoothing of this field that the method might require. This leads to the production of inaccurate or unphysical results. We present a curvature evaluation method which aims at greatly reducing these errors. The interface is reconstructed from the volume fraction field and the curvature is evaluated by fitting local quadric patches onto the resulting triangulation. The patch that best fits the triangulated interface can be found by solving a local minimisation problem. Combined with a partition of unity strategy with compactly supported radial basis functions, the method provides a semi-global implicit expression for the interface from which curvature can be exactly derived. The local mean curvature is then integrated back on the Eulerian mesh. We show a detailed analysis of the associated errors and comparisons with existing methods. The method can be extended to unstructured meshes. Financial support from Petrobras is gratefully acknowledged.
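The fitting step can be sketched as a standalone least-squares quadric fit in a local frame. The partition-of-unity blending and the integration back onto the Eulerian mesh described above are omitted, and the point set is assumed to be expressed in coordinates where the patch is a graph z = f(x, y); the function name is illustrative.

```python
import numpy as np

def mean_curvature_from_patch(points):
    """Least-squares fit of a quadric patch z = a x^2 + b x y + c y^2
    + d x + e y + f to interface points, returning the mean curvature
    at (x, y) = (0, 0) from the fitted derivatives."""
    P = np.asarray(points, dtype=float)
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    fx, fy, fxx, fxy, fyy = d, e, 2.0 * a, b, 2.0 * c
    # mean curvature of a graph z = f(x, y), evaluated at the origin
    return (((1.0 + fy**2) * fxx - 2.0 * fx * fy * fxy + (1.0 + fx**2) * fyy)
            / (2.0 * (1.0 + fx**2 + fy**2) ** 1.5))
```

Fitting a smooth analytic patch to the triangulated interface, rather than differencing the volume fraction field directly, is what suppresses the discretisation noise the abstract identifies as the main source of curvature error.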
A large volume striped bass egg incubation chamber: design and comparison with a traditional method
Harper, C.J.
2009-01-01
I conducted a comparative study of a new jar design (experimental chamber) with a standard egg incubation vessel (McDonald jar). Experimental chambers measured 0.4 m in diameter by 1.3 m in height and had a volume of 200 L. McDonald hatching jars measured 16 cm in diameter by 45 cm in height and had a volume of 6 L. Post-hatch survival was estimated at 48, 96 and 144 h. Stocking rates resulted in an average egg density of 21.9 eggs ml-1 (range = 21.6 – 22.1) for McDonald jars and 10.9 eggs ml-1 (range = 7.0 – 16.8) for experimental chambers. I was unable to detect an effect of container type on survival to 48, 96 or 144 h. At 144 h striped bass fry survival averaged 37.3% for McDonald jars and 34.2% for experimental chambers. Survival among replicates was significantly different. Survival of striped bass significantly decreased between 96 and 144 h. Mean survival among replicates ranged from 12.4 to 57.3%. I was unable to detect an effect of initial stocking density on survival. Experimental chambers allow for incubation of a larger number of eggs in a much smaller space. As hatchery production is often limited by space or water supply, experimental chambers offer an alternative to extending spawning activities, thereby reducing manpower and cost. However, the increase in the number of eggs per rearing container does increase the risk associated with catastrophic loss of a production unit. I conclude the experimental chamber is suitable for striped bass egg incubation.
Monai, Toshiharu; Takano, Ichiro; Nishikawa, Hisao; Sawada, Yoshio
In this paper, a modified Euler-type Moving Average Prediction (EMAP) model is proposed for operating a dispersed power supply system using new energy in autonomous mode. Furthermore, the EMAP model is applied to the operation of a new type of dispersed power supply system consisting of a large-scale photovoltaic system (PV), a fuel cell (FC), and a small-scale superconducting magnetic energy storage system (SMES). This distributed power supply system can meet the multi-quality electric power requirements of customers, and ensures voltage stability and UPS (Uninterruptible Power Supply) function as well. Each sub-system contributes to the above-mentioned system performance with its own excellent characteristics. The response characteristics of the system are confirmed by simulation with the software PSIM, and, under the collaborative operation method based on the EMAP model, the SMES capacity required to compensate the fluctuation of both PV output and load demand is examined by simulation using MATLAB/Simulink.
Lattice QCD studies on baryon interactions from Lüscher's finite volume method and HAL QCD method
Iritani, Takumi
2015-01-01
A comparative study between Lüscher's finite volume method and the time-dependent HAL QCD method is given for the $\Xi\Xi$($^1\mathrm{S}_0$) interaction as an illustrative example. By employing the smeared source and the wall source for the interpolating operators, we show that the effective energy shifts $\Delta E_{\rm eff}(t)$ in Lüscher's method do not agree between the different sources, yet both exhibit fake plateaux. On the other hand, the interaction kernels $V(\vec{r})$ obtained from the two sources in the HAL QCD method agree with each other already for modest values of $t$. We show that the energy eigenvalues $\Delta E(L)$ in finite lattice volumes ($L^3$) calculated from $V(\vec{r})$ indicate that there is no bound state in the $\Xi\Xi(^1\mathrm{S}_0)$ channel at $m_{\pi}=0.51$ GeV in 2+1 flavor QCD.
Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods
Davis, A. D.
2015-12-01
The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity
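The calibration step can be sketched with the generic random-walk Metropolis building block on a toy scalar inverse problem. All numbers are illustrative assumptions, and the paper's own sampler is a more sophisticated online MCMC construction; this shows only the basic posterior-sampling mechanism it builds on.

```python
import numpy as np

def metropolis(logpost, x0, n=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a scalar parameter: propose a
    Gaussian perturbation, accept with probability min(1, ratio of
    posterior densities)."""
    rng = np.random.default_rng(seed)
    x, lp = float(x0), logpost(x0)
    out = np.empty(n)
    for i in range(n):
        xp = x + step * rng.standard_normal()
        lpp = logpost(xp)
        if np.log(rng.random()) < lpp - lp:   # Metropolis accept/reject
            x, lp = xp, lpp
        out[i] = x
    return out

# Toy inverse problem (assumed numbers): one noisy observation
# y = theta + noise with sigma = 0.5, and a broad N(0, 10^2) prior.
y_obs, sigma = 1.0, 0.5

def logpost(theta):
    return -0.5 * ((y_obs - theta) / sigma) ** 2 - 0.5 * (theta / 10.0) ** 2

samples = metropolis(logpost, x0=0.0)
```

In the paper's setting, each posterior sample of the parameters would be pushed through the predictive model to yield a sample of future grounded ice volume, so the chain's histogram becomes the predictive distribution of the QoI.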
Alternating Direction Finite Volume Element Methods for Three-Dimensional Parabolic Equations
Institute of Scientific and Technical Information of China (English)
Tongke
2010-01-01
This paper presents alternating direction finite volume element methods for three-dimensional parabolic partial differential equations and gives four computational schemes: one is analogous to the Douglas finite difference scheme with second-order splitting error, two others have third-order splitting error, and the last is an extended LOD scheme. The L2 norm and H1 semi-norm error estimates are obtained for the first and second schemes, respectively. Finally, two numerical examples are provided to illustrate the efficiency and accuracy of the methods.
Capillary method for measuring near-infrared spectra of microlitre volume liquids
Institute of Scientific and Technical Information of China (English)
YUAN Bo; MURAYAMA Koichi
2007-01-01
The present study theoretically explored the feasibility of the capillary method, proposed in our previous studies, for measuring near-infrared (NIR) spectra of liquid or solution samples of microlitre volume. The Lambert-Beer absorbance law was applied to establish a model for the integral absorbance of the capillary, which was then used in numerical analyses of the effects of the capillary on various spectral features and on the dynamic range of absorption measurement. The theoretical results indicated that the capillary method can be used in NIR spectroscopy, which was further supported by empirical data from our experiments comparing capillary NIR spectra of several organic solvents with cuvette cell NIR spectra.
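The integral-absorbance model can be sketched numerically. This is an illustrative implementation of the Lambert-Beer chord-length argument only: a ray crossing the bore at lateral offset x traverses a chord of length 2·sqrt(r² − x²), and the detector averages the transmittance over all offsets. The parameter eps_c (lumping molar absorptivity times concentration) is an assumption, and wall refraction and scattering are deliberately ignored.

```python
import numpy as np

def capillary_absorbance(eps_c, r, n=2001):
    """Integral absorbance of a liquid-filled cylindrical capillary of
    inner radius r under the Lambert-Beer model: average the transmitted
    fraction 10**(-eps_c * chord) over lateral offsets, then convert the
    averaged transmittance back to absorbance."""
    x = np.linspace(-r, r, n)
    path = 2.0 * np.sqrt(np.maximum(r * r - x * x, 0.0))   # chord lengths
    transmittance = float(np.mean(10.0 ** (-eps_c * path)))
    return -np.log10(transmittance)
```

Because the chords are shorter toward the walls, the integral absorbance is always below the single-path absorbance eps_c·2r across the diameter, which is one of the capillary effects on dynamic range the study analyzes.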
Institute of Scientific and Technical Information of China (English)
GAO Wei; DUAN Ya-li; LIU Ru-xun
2009-01-01
In this article a finite volume method is proposed to solve viscous incompressible Navier-Stokes equations in two-dimensional regions with corners and curved boundaries. A hybrid collocated-grid variable arrangement is adopted, in which the velocity and pressure are stored at the centroid and the circumcenters of the triangular control cell, respectively. The cell flux is defined at the mid-point of the cell face. Second-order implicit time integration schemes are used for convection and diffusion terms. The second-order upwind scheme is used for convection fluxes. The present method is validated by results of several viscous flows.
Waszak, M. R.; Schmidt, D. S.
1985-01-01
As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first type of analysis is an open loop model analysis technique. This method considers the effects of modal residue magnitudes on determining vehicle handling qualities. The second method is a pilot in the loop analysis procedure that considers several closed loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.
Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.
2001-01-16
Methods and computer executable instructions are disclosed for developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time", which especially enhances clinical use for in vivo applications. Real-time performance is achieved because of the novel geometric model constructed for the planned treatment volume, which in turn allows rapid calculations to be performed for simulated movements of particles along particle tracks through it. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume as the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element. The particle movement along the particle track is effectuated in integer-based increments along the primary direction of movement until a position of intersection occurs, representing a condition where the anatomical material of the next element is substantially different from that of the starting element. This position of intersection indicates whether a neutron has been captured, scattered or has exited the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of
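The integer-increment march until a material change can be sketched as follows. This is a toy version on a labelled voxel grid; real BNCT transport additionally scores capture and scattering along the track, and the function name and interface are illustrative.

```python
import numpy as np

def march_to_interface(materials, start, direction):
    """Step through a uniform voxel model in integer increments along the
    particle's primary direction until the anatomical material changes.
    Returns the index of the first voxel whose material differs from the
    starting voxel's, or None if the track exits the model."""
    materials = np.asarray(materials)
    pos = np.array(start, dtype=int)
    step = np.sign(np.asarray(direction)).astype(int)  # integer increments
    mat0 = materials[tuple(pos)]
    while True:
        pos = pos + step
        if np.any(pos < 0) or np.any(pos >= materials.shape):
            return None                   # particle exited the geometry
        if materials[tuple(pos)] != mat0:
            return tuple(pos)             # position of intersection
```

Keeping the traversal in integer voxel indices, rather than intersecting rays with arbitrary surfaces, is the source of the speed-up the patent claims for real-time planning.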
Hybrid Spectral Difference/Embedded Finite Volume Method for Conservation Laws
Choi, Jung J
2014-01-01
A novel hybrid spectral difference/embedded finite volume method is introduced in order to apply a discontinuous high-order method to large-scale engineering applications involving discontinuities in flows with complex geometries. In the proposed hybrid approach, structured finite volume (FV) cells are embedded in hexahedral SD elements containing discontinuities, and an FV-based high-order shock-capturing scheme is employed to overcome the Gibbs phenomenon. Thus, discontinuities are captured at the resolution of the embedded FV cells within an SD element. In smooth flow regions, the SD method is chosen for its low numerical dissipation and computational efficiency, preserving spectral-like solutions. The coupling between the SD elements and the elements with embedded FV cells is achieved by the mortar method. In this paper, the 5th-order WENO scheme with characteristic decomposition is employed as the shock-capturing scheme in the embedded FV cells, and the 5th-order SD method is used in the smooth flow field. The ord...
Hanford environmental analytical methods: Methods as of March 1990. Volume 3, Appendix A2-I
Energy Technology Data Exchange (ETDEWEB)
Goheen, S.C.; McCulloch, M.; Daniel, J.L.
1993-05-01
This paper from the analytical laboratories at Hanford describes the method used to measure pH of single-shell tank core samples. Sludge or solid samples are mixed with deionized water. The pH electrode used combines both a sensor and reference electrode in one unit. The meter amplifies the input signal from the electrode and displays the pH visually.
Stenroos, M; Mäntynen, V; Nenonen, J
2007-12-01
The boundary element method (BEM) is commonly used in the modeling of bioelectromagnetic phenomena. The Matlab language is increasingly popular among students and researchers, but there is no free, easy-to-use Matlab library for boundary element computations. We present a hands-on, freely available Matlab BEM source code for solving bioelectromagnetic volume conduction problems and any (quasi-)static potential problems that obey the Laplace equation. The basic principle of the BEM is presented and discretization of the surface integral equation for electric potential is worked through in detail. Contents and design of the library are described, and results of example computations in spherical volume conductors are validated against analytical solutions. Three application examples are also presented. Further information, source code for application examples, and information on obtaining the library are available in the WWW-page of the library: (http://biomed.tkk.fi/BEM).
A Monte Carlo method for critical systems in infinite volume: the planar Ising model
Herdeiro, Victor
2016-01-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
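The observables checked above (magnetization and spin correlation functions on the planar Ising lattice) can be illustrated with a plain Metropolis sampler. This is a minimal sketch of the underlying model, not the paper's holographic-boundary algorithm; the lattice size, temperature, and update schedule are arbitrary choices for the example.

```python
import math
import random

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model with periodic boundaries."""
    L = len(spins)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

def magnetization(spins):
    """Mean spin per site."""
    return sum(map(sum, spins)) / len(spins) ** 2

if __name__ == "__main__":
    rng = random.Random(1)
    L = 8
    spins = [[1] * L for _ in range(L)]
    for _ in range(50):
        metropolis_sweep(spins, 10.0, rng)  # beta = 10: deep in the ordered phase
    print(magnetization(spins))             # ordered start stays fully magnetized
```

On a finite lattice this sampler suffers exactly the boundary effects the paper corrects; near criticality its correlation functions deviate from the infinite-plane prediction.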
Monte Carlo method for critical systems in infinite volume: The planar Ising model.
Herdeiro, Victor; Doyon, Benjamin
2016-10-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
The enhanced volume source boundary point method for the calculation of acoustic radiation problem
Institute of Scientific and Technical Information of China (English)
WANG Xiufeng; CHEN Xinzhao; WANG Youcheng
2003-01-01
The Volume Source Boundary Point Method (VSBPM) is substantially improved in order to speed up its solution of the acoustic radiation problem caused by a vibrating body. The fundamental solution provided by the Helmholtz equation is enforced in a weighted residual sense over a tetrahedron located on the normal line of the boundary node, replacing the coefficient matrices of the system equation. Analysis of various examples with the enhanced method, including the sound field of a vibrating rectangular box in a semi-anechoic chamber, reveals that the enhanced VSBPM (EVSBPM) computes more than 10 times faster than the VSBPM, while matching it in calculating precision and stability, adaptation to the geometric shape of the vibrating body, and ability to overcome the non-uniqueness problem.
A Mixed Finite Volume Element Method for Flow Calculations in Porous Media
Jones, Jim E.
1996-01-01
A key ingredient in the simulation of flow in porous media is the accurate determination of the velocities that drive the flow. The large scale irregularities of the geology, such as faults, fractures, and layers suggest the use of irregular grids in the simulation. Work has been done in applying the finite volume element (FVE) methodology as developed by McCormick in conjunction with mixed methods which were developed by Raviart and Thomas. The resulting mixed finite volume element discretization scheme has the potential to generate more accurate solutions than standard approaches. The focus of this paper is on a multilevel algorithm for solving the discrete mixed FVE equations. The algorithm uses a standard cell centered finite difference scheme as the 'coarse' level and the more accurate mixed FVE scheme as the 'fine' level. The algorithm appears to have potential as a fast solver for large size simulations of flow in porous media.
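The "coarse" level referred to above is a standard cell-centered finite difference scheme. As a much-reduced illustration of what such a cell-centered discretization looks like, the sketch below solves -u'' = f on (0, 1) with zero boundary values, using two-point fluxes on interior faces and half-cell links at the boundary faces. The problem data are invented for the example, and the scheme is far simpler than the mixed FVE method of the paper.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm."""
    n = len(rhs)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def solve_cell_centered(n, f):
    """Cell-centered finite volume scheme for -u'' = f, u(0) = u(1) = 0.
    Unknowns live at cell centers x_i = (i + 0.5) h; the boundary faces use
    a half-cell two-point flux, which gives the 3/h^2 diagonal in end cells."""
    h = 1.0 / n
    sub = [-1.0 / h**2] * (n - 1)
    sup = [-1.0 / h**2] * (n - 1)
    diag = [2.0 / h**2] * n
    diag[0] = diag[-1] = 3.0 / h**2
    return thomas(sub, diag, sup, [f] * n), h

if __name__ == "__main__":
    n = 40
    u, h = solve_cell_centered(n, 1.0)       # f = 1, exact u = x(1 - x)/2
    exact = [((i + 0.5) * h) * (1.0 - (i + 0.5) * h) / 2.0 for i in range(n)]
    print(max(abs(a - b) for a, b in zip(u, exact)))
```

In the multilevel algorithm of the paper, a scheme of roughly this kind plays the role of the cheap coarse level under the more accurate mixed FVE fine level.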
Finite-volume Hamiltonian method for coupled channel interactions in lattice QCD
Wu, Jia-Jun; Thomas, A W; Young, R D
2014-01-01
Within a multi-channel formulation of $\pi\pi$ scattering, we investigate the use of the finite-volume Hamiltonian approach to relate lattice QCD spectra to scattering observables. The equivalence of the Hamiltonian approach and the coupled-channel extension of the well-known Lüscher formalism is established. Unlike the single channel system, the spectra at a single lattice volume in the coupled channel case do not uniquely determine the scattering parameters. We investigate the use of the Hamiltonian framework as a method to directly fit the lattice spectra and thereby extract the scattering phase shifts and inelasticities. We find that with a modest amount of lattice data, the scattering parameters can be reproduced rather well, with only a minor degree of model dependence.
Česenek, Jan
The article is concerned with the numerical simulation of compressible turbulent flow in time dependent domains. The mathematical model of the flow is represented by the system of non-stationary Reynolds-Averaged Navier-Stokes (RANS) equations. The motion of the domain occupied by the fluid is taken into account with the aid of the ALE (Arbitrary Lagrangian-Eulerian) formulation of the RANS equations. This RANS system is equipped with the two-equation k-ω turbulence model. These two systems of equations are solved separately. Discretization of the RANS system is carried out by the space-time discontinuous Galerkin method, which is based on piecewise polynomial discontinuous approximation of the sought solution in space and in time. Discretization of the two-equation k-ω turbulence model is carried out by the implicit finite volume method, which is based on piecewise constant approximation of the sought solution. We present some numerical experiments to demonstrate the applicability of the method using an in-house code.
Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru
2016-10-11
An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image - mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated as the percentage change in the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
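The percentage-change definition quoted above is simple enough to state as code. The sketch below implements it for one ROI across subjects scanned twice each; taking the absolute value before averaging is an assumption (the abstract reports positive percentages but does not state this step), and the example volumes are invented.

```python
def average_percentage_change(pairs):
    """pairs: (first-scan volume, second-scan volume) for one ROI, per subject.
    Percentage change = 100 * (first-scan volume - subject mean) / subject mean,
    averaged over subjects.  Absolute value is an assumption of this sketch."""
    changes = []
    for v1, v2 in pairs:
        mean = (v1 + v2) / 2.0
        changes.append(abs(100.0 * (v1 - mean) / mean))
    return sum(changes) / len(changes)

if __name__ == "__main__":
    # Invented gray-matter volumes (mL) for three subjects, two scans each.
    gm = [(640.0, 642.5), (598.2, 595.1), (701.4, 700.0)]
    print(round(average_percentage_change(gm), 3))
```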
1984-09-17
[Abstract not legible in the scanned source. Recoverable fragments mention structural lugs, servo-loop load monitoring with fail-safe functions and a sine-wave function generator for load commands, and fracture-analysis procedures including a simple compounding solution, 2-D and 3-D cracked finite element procedures, and a Green's function method.]
Institute of Scientific and Technical Information of China (English)
Sutthisak Phongthanapanich; Pramote Dechaumphai
2011-01-01
Level set methods are widely used for predicting evolutions of complex free surface topologies, such as crystal and crack growth, bubble and droplet deformation, spilling and breaking waves, and two-phase flow phenomena. This paper presents a characteristic level set equation which is derived from the two-dimensional level set equation by using the characteristic-based scheme. An explicit finite volume element method is developed to discretize the equation on triangular grids. Several examples are presented to demonstrate the performance of the proposed method for calculating interface evolutions in time. The proposed level set method is also coupled with the Navier-Stokes equations for two-phase immiscible incompressible flow analysis with surface tension. The Rayleigh-Taylor instability problem is used to test and evaluate the effectiveness of the proposed scheme.
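The paper's characteristic-based finite volume element discretization works on triangular grids; as a much-reduced illustration of the underlying idea (transporting a level set function and reading the interface off its zero crossing), the sketch below advects a 1D signed-distance function with a first-order upwind scheme. The grid size, speed, and initial interface position are arbitrary choices for the example.

```python
def upwind_advect(phi0, u, h, dt, steps):
    """First-order upwind advection of a level set function (assumes u > 0)."""
    phi = phi0[:]
    for _ in range(steps):
        new = phi[:]
        new[0] = phi[0] - u * dt  # inflow boundary: exact for a linear profile
        for i in range(1, len(phi)):
            new[i] = phi[i] - u * dt / h * (phi[i] - phi[i - 1])
        phi = new
    return phi

def zero_crossing(phi, h):
    """Locate the interface (zero level set) by linear interpolation."""
    for i in range(len(phi) - 1):
        if phi[i] <= 0.0 < phi[i + 1]:
            return i * h + h * (-phi[i]) / (phi[i + 1] - phi[i])
    return None

if __name__ == "__main__":
    n, h, u = 100, 0.01, 1.0
    phi = [i * h - 0.3 for i in range(n + 1)]        # interface starts at x = 0.3
    phi = upwind_advect(phi, u, h, dt=0.002, steps=100)  # advance to t = 0.2
    print(zero_crossing(phi, h))                     # interface near x = 0.5
```

For a linear signed-distance profile and constant speed, the upwind update transports the interface exactly; curvature and reinitialization, which the paper must handle, are absent from this toy.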
Susanti, D.; Hartini, E.; Permana, A.
2017-01-01
Growing sales competition among companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One way to support such planning is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. One method that can be used to obtain an accurate forecast is Adaptive Spline Threshold Autoregression (ASTAR). This discussion therefore focuses on the use of the ASTAR method for forecasting the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produces reasonably accurate values.
Second-order accurate finite volume method for well-driven flows
Dotlić, Milan; Pokorni, Boris; Pušić, Milenko; Dimkić, Milan
2013-01-01
We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at computational cost. In this article we propose two methods to address well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman correction. Coupling this correction with a second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still not even first order accurate on coarse grids. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
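For context, the classical Peaceman model that the first correction above is "related to" ties the well-block pressure to the bottom-hole pressure through a logarithmic profile and an equivalent block radius (approximately 0.2 Δx for an isotropic square block). The sketch below is that textbook model, not the authors' corrected flux schemes, and all physical values in the example are invented.

```python
import math

def peaceman_rate(k, thickness, mu, dx, r_w, p_block, p_bh):
    """Volumetric well rate from the classical Peaceman well index.
    r_eq ~ 0.2 * dx is the equivalent radius at which the logarithmic
    near-well pressure profile equals the computed well-block pressure."""
    r_eq = 0.2 * dx
    well_index = 2.0 * math.pi * k * thickness / (mu * math.log(r_eq / r_w))
    return well_index * (p_block - p_bh)

if __name__ == "__main__":
    # Invented data: unit permeability, thickness and viscosity, a 10 m grid
    # block, a 0.1 m wellbore radius, and one unit of pressure drawdown.
    print(peaceman_rate(1.0, 1.0, 1.0, 10.0, 0.1, 2.0, 1.0))
```

The singular logarithmic profile near the well is exactly why, as the abstract notes, uncorrected two-point schemes get the total well flux badly wrong on coarse grids.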
DEFF Research Database (Denmark)
Thorborg, Jesper
of the method has been focused on high temperature processes such as casting and welding and the interest of using nonlinear constitutive stress-strain relations has grown to extend the applicability of the method. The work of implementing classical plasticity into the control volume formulation has been based...... on the $J_2$ flow theory describing an isotropic hardening material with a temperature dependent yield stress. This work has successfully been verified by comparing results to analytical solutions. Due to the comprehensive implementation in the staggered grid an alternative constitutive stress......-strain relation has been suggested. The intention of this method is to provide fast numerical results with reasonable accuracy in relation to the first order effects of the presented classical plasticity model. Application of the $J_2$ flow theory and the alternative method have shown some agreement...
Energy Technology Data Exchange (ETDEWEB)
Schuhbaeck, Annika; Achenbach, Stephan [University of Erlangen, Department of Cardiology, Erlangen (Germany); Dey, Damini [Cedars-Sinai Medical Center, Biomedical Imaging Research Institute, Los Angeles (United States); Otaki, Yuka; Slomka, Piotr; Berman, Daniel S. [Cedars-Sinai Medical Center, Department of Imaging and Medicine, Los Angeles (United States); Kral, Brian G.; Lai, Shenghan [Johns Hopkins University, Department of Medicine, Division of Cardiology, Baltimore (United States); Fishman, Elliott K.; Lai, Hong [Johns Hopkins University, Department of Medicine, Division of Cardiology, Baltimore (United States); Johns Hopkins University, Department of Radiology, Baltimore (United States)
2014-09-15
Quantitative measurements of coronary plaque volume may play a role in serial studies to determine disease progression or regression. Our aim was to evaluate the interscan reproducibility of quantitative measurements of coronary plaque volumes using a standardized automated method. Coronary dual source computed tomography angiography (CTA) was performed twice in 20 consecutive patients with known coronary artery disease within a maximum time difference of 100 days. The total plaque volume (TP), the volume of non-calcified plaque (NCP) and calcified plaque (CP) as well as the maximal remodelling index (RI) were determined using automated software. Mean TP volume was 382.3 ± 236.9 mm³ for the first and 399.0 ± 247.3 mm³ for the second examination (p = 0.47). There were also no significant differences for NCP volumes, CP volumes or RI. Interscan correlation of the plaque volumes was very good (Pearson's correlation coefficients: r = 0.92, r = 0.90 and r = 0.96 for TP, NCP and CP volumes, respectively). Automated software is a time-saving method that allows accurate assessment of coronary atherosclerotic plaque volumes in coronary CTA with high reproducibility. With this approach, serial studies appear to be possible. (orig.)
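The interscan agreement quoted above rests on Pearson correlation of paired measurements. A minimal sketch of that computation, with invented plaque-volume pairs rather than the study's data:

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

if __name__ == "__main__":
    # Invented total-plaque volumes (mm^3) from two scans of five patients.
    scan1 = [382.0, 251.0, 510.0, 120.0, 640.0]
    scan2 = [395.0, 240.0, 530.0, 131.0, 628.0]
    print(pearson(scan1, scan2))  # close to 1 for reproducible measurements
```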
Casanova, Ramon; Espeland, Mark A; Goveas, Joseph S; Davatzikos, Christos; Gaussoin, Sarah A; Maldjian, Joseph A; Brunner, Robert L; Kuller, Lewis H; Johnson, Karen C; Mysiw, W Jerry; Wagner, Benjamin; Resnick, Susan M
2011-05-01
Use of conjugated equine estrogens (CEE) has been linked to smaller regional brain volumes in women aged ≥65 years; however, it is unknown whether this results in a broad-based characteristic pattern of effects. Structural magnetic resonance imaging was used to assess regional volumes of normal tissue and ischemic lesions among 513 women who had been enrolled in a randomized clinical trial of CEE therapy for an average of 6.6 years, beginning at ages 65-80 years. A multivariate pattern analysis, based on a machine learning technique that combined Random Forest and logistic regression with L(1) penalty, was applied to identify patterns among regional volumes associated with therapy and whether patterns discriminate between treatment groups. The multivariate pattern analysis detected smaller regional volumes of normal tissue within the limbic and temporal lobes among women that had been assigned to CEE therapy. Mean decrements ranged as high as 7% in the left entorhinal cortex and 5% in the left perirhinal cortex, which exceeded the effect sizes reported previously in frontal lobe and hippocampus. Overall accuracy of classification based on these patterns, however, was projected to be only 54.5%. Prescription of CEE therapy for an average of 6.6 years is associated with lower regional brain volumes, but it does not induce a characteristic spatial pattern of changes in brain volumes of sufficient magnitude to discriminate users and nonusers. Copyright © 2011 Elsevier Inc. All rights reserved.
Modeling of electrical impedance tomography to detect breast cancer by finite volume methods
Ain, K.; Wibowo, R. A.; Soelistiono, S.
2017-05-01
The electrical impedance properties of tissue are an interesting subject of study, because changes in the electrical impedance of organs are related to physiological and pathological conditions. Both physiological and pathological properties are strongly associated with disease information. Several experiments have shown that breast cancer has a lower impedance than normal breast tissue. Thus, impedance-based imaging can be used as alternative equipment to detect breast cancer. This research is carried out by modelling electrical impedance tomography to detect breast cancer with finite volume methods. The research includes development of a mathematical model of the electric potential field by a 2D finite volume method, solving the forward problem, and solving the inverse problem by a linear reconstruction method. The scanning is done by a 16-channel electrode array with the neighbours method to collect data. The scanning is performed at frequencies of 10 kHz and 100 kHz with three numerical objects: an anomaly at the surface, an anomaly at depth, and anomalies at both the surface and at depth. The simulation successfully reconstructs images of functional anomalies of breast cancer at the surface position, the depth position, or a combination of surface and depth.
Simulation of Jetting in Injection Molding Using a Finite Volume Method
Directory of Open Access Journals (Sweden)
Shaozhen Hua
2016-05-01
In order to predict the jetting and the subsequent buckling flow more accurately, a three-dimensional melt flow model was established for a viscous, incompressible, and non-isothermal fluid, and a control-volume-based finite volume method was employed to discretize the governing equations. A two-fold iterative method was proposed to decouple the dependence among pressure, velocity, and temperature so as to reduce the computation and improve the numerical stability. Based on the proposed theoretical model and numerical method, a program code was developed to simulate melt front progress and flow fields. Numerical simulations for different injection speeds, melt temperatures, and gate locations were carried out to explore the jetting mechanism. The results indicate that the filling pattern depends on the competition between inertial and viscous forces. When the inertial force exceeds the viscous force, jetting occurs; the jet then changes to a buckling flow as the viscous force overcomes the inertial force. Once the melt contacts the mold wall, filling switches to the conventional sequential filling mode. Numerical results also indicate that jetting length increases with injection speed but changes little with melt temperature. The reasonable agreement between simulated and experimental jetting length and buckling frequency implies that the proposed method is valid for jetting simulation.
Evaluation of bias-correction methods for ensemble streamflow volume forecasts
Directory of Open Access Journals (Sweden)
T. Hashino
2007-01-01
Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived with simulated and observed flows from a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.
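One widely used transformation of the kind described (derived from paired simulated and observed historical flows) is empirical quantile mapping. The abstract does not name its three methods, so the sketch below should be read as a generic example, not the study's procedure: each raw ensemble trace value is passed through the empirical CDF of the historical simulation and then the inverse empirical CDF of the observations.

```python
def _interp(x, xs, ys):
    """Piecewise-linear interpolation on sorted xs, constant beyond the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for k in range(len(xs) - 1):
        if xs[k] <= x <= xs[k + 1]:
            t = (x - xs[k]) / (xs[k + 1] - xs[k])
            return ys[k] + t * (ys[k + 1] - ys[k])

def quantile_map(x, sim_hist, obs_hist):
    """Empirical quantile mapping: push x through the simulated climatology's
    CDF, then through the inverse CDF of the observed climatology."""
    sim, obs = sorted(sim_hist), sorted(obs_hist)
    n = len(sim)
    ps = [k / (n - 1) for k in range(n)]
    p = _interp(x, sim, ps)
    return _interp(p, ps, obs)

if __name__ == "__main__":
    # Invented monthly flow volumes: the model runs systematically high.
    sim = [110.0, 220.0, 330.0, 440.0]
    obs = [100.0, 200.0, 300.0, 400.0]
    print(quantile_map(220.0, sim, obs))  # a raw trace value of 220 maps to 200
```

Applying this to every trace removes the unconditional bias, which is exactly the improvement the verification study attributes to all three methods.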
A high volume extraction and purification method for recovering DNA from human bone.
Marshall, Pamela L; Stoljarova, Monika; Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce
2014-09-01
DNA recovery, purity and overall extraction efficiency of a protocol employing a novel silica-based column, Hi-Flow® (Generon Ltd., Maidenhead, UK), were compared with those of a standard organic DNA extraction methodology. The quantities of DNA recovered by each method were compared by real-time PCR and the quality of DNA by STR typing using the PowerPlex® ESI 17 Pro System (Promega Corporation, Madison, WI) on DNA from 10 human bone samples. Overall, the Hi-Flow method recovered comparable quantities of DNA, ranging from 0.8 ± 1 ng to 900 ± 159 ng, compared with the organic method, ranging from 0.5 ± 0.9 ng to 855 ± 156 ng. Complete profiles (17/17 loci tested) were obtained for at least one of three replicates for 3/10 samples using the Hi-Flow method and for 2/10 samples with the organic method. All remaining bone samples yielded partial profiles for all replicates with both methods. Compared with a standard organic DNA isolation method, the results indicated that the Hi-Flow method provided equal or improved recovery and quality of DNA without the harmful effects of organic extraction. Moreover, larger extraction volumes (up to 20 mL) can be employed with the Hi-Flow method, which enabled more bone sample to be extracted at one time. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Evaluation of two-phase flow solvers using Level Set and Volume of Fluid methods
Bilger, C.; Aboukhedr, M.; Vogiatzaki, K.; Cant, R. S.
2017-09-01
Two principal methods have been used to simulate the evolution of two-phase immiscible flows of liquid and gas separated by an interface. These are the Level-Set (LS) method and the Volume of Fluid (VoF) method. Both methods attempt to represent the very sharp interface between the phases and to deal with the large jumps in physical properties associated with it. Both methods have their own strengths and weaknesses. For example, the VoF method is known to be prone to excessive numerical diffusion, while the basic LS method has some difficulty in conserving mass. Major progress has been made in remedying these deficiencies, and both methods have now reached a high level of physical accuracy. Nevertheless, there remains an issue, in that each of these methods has been developed by different research groups, using different codes and most importantly the implementations have been fine tuned to tackle different applications. Thus, it remains unclear what are the remaining advantages and drawbacks of each method relative to the other, and what might be the optimal way to unify them. In this paper, we address this gap by performing a direct comparison of two current state-of-the-art variations of these methods (LS: RCLSFoam and VoF: interPore) and implemented in the same code (OpenFoam). We subject both methods to a pair of benchmark test cases while using the same numerical meshes to examine a) the accuracy of curvature representation, b) the effect of tuning parameters, c) the ability to minimise spurious velocities and d) the ability to tackle fluids with very different densities. For each method, one of the test cases is chosen to be fairly benign while the other test case is expected to present a greater challenge. The results indicate that both methods can be made to work well on both test cases, while displaying different sensitivity to the relevant parameters.
Energy Technology Data Exchange (ETDEWEB)
Mueller, Kathryn S. [The Ohio State University College of Medicine, Columbus, OH (United States); Long, Frederick R. [Nationwide Children's Hospital, The Children's Radiological Institute, Columbus, OH (United States); Flucke, Robert L. [Nationwide Children's Hospital, Department of Pulmonary Medicine, Columbus, OH (United States); Castile, Robert G. [The Research Institute at Nationwide Children's Hospital, Center for Perinatal Research, Columbus, OH (United States)
2010-10-15
Lung inflation and respiratory motion during chest CT affect diagnostic accuracy and reproducibility. To describe a simple volume-monitored (VM) method for performing reproducible, motion-free full inspiratory and end expiratory chest CT examinations in children. Fifty-two children with cystic fibrosis (mean age 8.8 ± 2.2 years) underwent pulmonary function tests and inspiratory and expiratory VM-CT scans (1.25-mm slices, 80-120 kVp, 16-40 mAs) according to an IRB-approved protocol. The VM-CT technique utilizes instruction from a respiratory therapist, a portable spirometer and real-time documentation of lung volume on a computer. CT image quality was evaluated for achievement of targeted lung-volume levels and for respiratory motion. Children achieved 95% of vital capacity during full inspiratory imaging. For end expiratory scans, 92% were at or below the child's end expiratory level. Two expiratory exams were judged to be at suboptimal volumes. Two inspiratory (4%) and three expiratory (6%) exams showed respiratory motion. Overall, 94% of scans were performed at optimal volumes without respiratory motion. The VM-CT technique is a simple, feasible method in children as young as 4 years to achieve reproducible high-quality full inspiratory and end expiratory lung CT images. (orig.)
Finite-volume Hamiltonian method for $\pi\pi$ scattering in lattice QCD
Wu, Jia-Jun; Leinweber, Derek B; Thomas, A W; Young, Ross D
2015-01-01
Within a formulation of $\pi\pi$ scattering, we investigate the use of the finite-volume Hamiltonian approach to resolving scattering observables from lattice QCD spectra. We consider spectra in the centre-of-mass and moving frames for both S- and P-wave cases. Furthermore, we investigate the multi-channel case. Here we study the use of the Hamiltonian framework as a parametrization that can be fit directly to lattice spectra. Through this method, the hadron properties, such as mass, width and coupling, can be directly extracted from the lattice spectra.
Simulation of viscous flows using a multigrid-control volume finite element method
Energy Technology Data Exchange (ETDEWEB)
Hookey, N.A. [Memorial Univ., Newfoundland (Canada)
1994-12-31
This paper discusses a multigrid control volume finite element method (MG CVFEM) for the simulation of viscous fluid flows. The CVFEM is an equal-order primitive variables formulation that avoids spurious solution fields by incorporating an appropriate pressure gradient in the velocity interpolation functions. The resulting set of discretized equations is solved using a coupled equation line solver (CELS) that solves the discretized momentum and continuity equations simultaneously along lines in the calculation domain. The CVFEM has been implemented in the context of both FMV- and V-cycle multigrid algorithms, and preliminary results indicate a five to ten fold reduction in execution times.
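The FMV- and V-cycle algorithms mentioned above combine smoothing on the fine discretization with corrections computed on coarser ones. As a generic, much-simplified sketch of that idea (a single two-grid V-cycle for the 1D Poisson problem with weighted-Jacobi smoothing and an exact coarse solve, not the CVFEM/CELS machinery of the paper):

```python
import math

def residual(u, f, h):
    """r = f - A u for 1D Poisson with zero Dirichlet ends."""
    n = len(u)
    r = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2.0 * u[i] - left - right) / h**2
    return r

def weighted_jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Damped Jacobi smoothing, in place."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < len(u) - 1 else 0.0
            new[i] = (1.0 - w) * u[i] + w * 0.5 * (left + right + h * h * f[i])
        u[:] = new

def coarse_solve(rhs, H):
    """Direct tridiagonal (Thomas) solve of the coarse Poisson system."""
    n = len(rhs)
    a, b = -1.0 / H**2, 2.0 / H**2
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = a / b, rhs[0] / b
    for i in range(1, n):
        m = b - a * c[i - 1]
        c[i] = a / m
        d[i] = (rhs[i] - a * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def two_grid_cycle(u, f, h):
    """Pre-smooth, coarse-grid correction (full weighting restriction,
    linear interpolation), post-smooth.  Fine grid: n = 2m + 1 interior pts."""
    weighted_jacobi(u, f, h, 2)
    r = residual(u, f, h)
    m = (len(u) - 1) // 2
    rc = [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
          for j in range(m)]
    ec = coarse_solve(rc, 2.0 * h)
    for j in range(m):                     # correction at coincident nodes
        u[2 * j + 1] += ec[j]
    for i in range(0, len(u), 2):          # linear interpolation in between
        left = ec[i // 2 - 1] if i // 2 - 1 >= 0 else 0.0
        right = ec[i // 2] if i // 2 < m else 0.0
        u[i] += 0.5 * (left + right)
    weighted_jacobi(u, f, h, 2)
```

The smoother removes oscillatory error cheaply while the coarse solve removes the smooth error that relaxation alone stalls on; the multilevel CVFEM algorithm exploits the same division of labour, with the cell-centered scheme as the coarse level.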
Applying the dynamic cone penetrometer (DCP) design method to low volume roads
CSIR Research Space (South Africa)
Paige-Green, P
2011-07-01
...in one hand and assessing the "cohesion". At OMC (damp) the material can be squeezed into a "sausage" that remains intact. In the very dry state (less than about 25% of OMC), the material is dusty and loose and has absolutely no cohesion. In the dry state (about 50% of OMC), the material will have no cohesion when squeezed into a sausage, whereas in the moist state (about 75% of OMC), the material may just...
A control-volume method for analysis of unsteady thrust augmenting ejector flows
Drummond, Colin K.
1988-01-01
A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control volume discretization of the mixing region to capture transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption of viscous-dominated phenomena in the latter. The inlet and diffuser analyses are simplified by a quasi-steady treatment, justified by the assumption that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.
Viscous liquid sloshing damping in cylindrical container using a volume of fluid method
Institute of Scientific and Technical Information of China (English)
YANG Wei; LIU ShuHong; LIN Hong
2009-01-01
Liquid sloshing is a kind of very complicated free surface flow and exists widely in many fields. In order to calculate liquid sloshing damping precisely, a volume of fluid method based on a finite volume scheme is used to simulate free surface flows in partly filled cylindrical containers. A numerical method is presented to simulate the movement of the free surface flow, in which a piecewise linear interface construction scheme and an unsplit Lagrangian advection scheme, instead of an Eulerian advection scheme, are used. The damping performance of liquid sloshing in cylindrical containers under the fundamental sloshing mode is investigated. There are four factors determining the surface-wave damping: free surface, boundary layer, interior fluid and contact line. In order to study the different contributions from these four factors to the whole damping, several examples are simulated. No-slip and slip wall boundary conditions on both the side wall and bottom wall of the cylindrical containers are studied to compare with published results obtained by solving the Stokes equations. In the present method the first three main factors can be considered. The simulation results show that the boundary-layer damping contribution increases while the interior fluid damping contribution decreases with increasing Reynolds number.
Flux-splitting finite volume method for turbine flow and heat transfer analysis
Xu, C.; Amano, R. S.
A novel numerical method was developed to deal with the flow and heat transfer in a turbine cascade at both design and off-design conditions. The Navier-Stokes equations are discretized and integrated in a coupled manner. In the present method a time-marching scheme was employed along with the time-integration approach. The flux terms are discretized based on a cell finite volume formulation as well as a flux-difference splitting. The flux-difference splitting gives the scheme rapid convergence, and the finite volume formulation enforces conservation of mass, momentum and energy. A hybrid difference scheme for the quasi-three-dimensional procedure, based on the discretized and integrated Navier-Stokes equations, was incorporated in the code. The numerical method combines the positive features of explicit and implicit algorithms, providing rapid convergence with a less restrictive stability constraint. The computed results were compared with other numerical studies and experimental data. The comparisons showed fairly good agreement with experiments.
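The flux-difference splitting idea can be sketched on the simplest case, linear advection, where the interface flux is a central average plus an upwind dissipation term; this toy first-order scheme only illustrates the ingredient, not the paper's Navier-Stokes discretization:

```python
import numpy as np

def step(u, a, dx, dt):
    # flux-difference split interface flux:
    # F = 0.5*(F_L + F_R) - 0.5*|a|*(u_R - u_L)
    uL, uR = u[:-1], u[1:]
    flux = 0.5 * a * (uL + uR) - 0.5 * abs(a) * (uR - uL)
    un = u.copy()
    un[1:-1] -= dt / dx * (flux[1:] - flux[:-1])   # finite volume update
    return un

n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.3) ** 2)                # initial pulse at x = 0.3
total0 = u.sum()                                   # conserved by the update
dt = 0.4 * dx / a                                  # CFL = 0.4
for _ in range(100):
    u = step(u, a, dx, dt)                         # pulse advects to x ~ 0.5
```

The finite volume flux differencing telescopes, so the discrete total is conserved up to the (negligible) boundary fluxes, which is the conservation property the abstract emphasises.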
Ling, Lei; Chung, Pei-Lun; Youker, Amanda; Stepinski, Dominique C; Vandegrift, George F; Wang, Nien-Hwa Linda
2013-09-27
Molybdenum-99 (Mo-99), generated from the fission of Uranium-235 (U-235), is the radioactive parent of the most widely used medical isotope, technetium-99m (Tc-99m). An efficient, robust, low-pressure process is developed for recovering Mo-99 from uranyl sulfate solutions. The minimum column volume and the maximum column length for required yield, pressure limit, and loading time are determined using a new graphical method. The method is based on dimensionless groups and intrinsic adsorption and diffusion parameters, which are estimated using a small number of experiments and simulations. The design is tested with bench-scale experiments with titania columns. The results show a high capture yield and a high stripping yield (95±5%). The design can be adapted to changes in design constraints or the variations in feed concentration, feed volume, or material properties. The graph shows clearly how the column utilization is affected by the required yield, loading time, and pressure limit. The cost effectiveness of various sorbent candidates can be evaluated based on the intrinsic parameters. This method can be used more generally for designing other capture chromatography processes. Published by Elsevier B.V.
D.M.K.S. Kaulesar Sukul (D. M K S); P.Th. den Hoed (Pieter); T. Johannes (Tanja); R. van Dolder (R.); E. Benda (Eric)
1993-01-01
Volume changes can be measured either directly by water-displacement volumetry or by various indirect methods in which calculation of the volume is based on circumference measurements. The aim of the present study was to determine the most appropriate indirect method for lower leg volume
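One common indirect method of the kind this abstract compares treats each leg segment between two circumference measurements as a conical frustum; a hedged sketch, with made-up measurements:

```python
import math

def frustum_volume(c1, c2, h):
    """Volume of a conical frustum of height h with end circumferences c1, c2."""
    return h * (c1**2 + c1 * c2 + c2**2) / (12.0 * math.pi)

def leg_volume(circumferences, segment_height):
    # sum the frustum segments between successive circumference measurements
    return sum(frustum_volume(a, b, segment_height)
               for a, b in zip(circumferences, circumferences[1:]))

# circumferences (cm) measured every 4 cm up the leg; illustrative numbers
vol = leg_volume([22.0, 24.0, 28.0, 33.0, 35.0], 4.0)   # cm^3
```

For equal end circumferences the formula reduces to the cylinder volume, which is a quick sanity check on the geometry.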
Beutler, Gerhard
2005-01-01
G. Beutler's Methods of Celestial Mechanics is a coherent textbook for students as well as an excellent reference for practitioners. Volume II is devoted to the applications and to the presentation of the program system CelestialMechanics. Three major areas of applications are covered: (1) Orbital and rotational motion of extended celestial bodies. The properties of the Earth-Moon system are developed from the simplest case (rigid bodies) to more general cases, including the rotation of an elastic Earth, the rotation of an Earth partly covered by oceans and surrounded by an atmosphere, and the rotation of an Earth composed of a liquid core and a rigid shell (Poincaré model). (2) Artificial Earth Satellites. The oblateness perturbation acting on a satellite and the exploitation of its properties in practice is discussed using simulation methods (CelestialMechanics) and (simplified) first order perturbation methods. The perturbations due to the higher-order terms of the Earth's gravitational potential and reso...
Methods to Increase the Robustness of Finite-Volume Flow Models in Thermodynamic Systems
Directory of Open Access Journals (Sweden)
Sylvain Quoilin
2014-03-01
Full Text Available This paper addresses the issues linked to simulation failures during integration in finite-volume flow models, especially those involving a two-phase state. This kind of model is particularly useful when modeling 1D heat exchangers or piping, e.g., in thermodynamic cycles involving a phase change. Issues, such as chattering or stiff systems, can lead to low simulation speed, instabilities and simulation failures. In the particular case of two-phase flow models, they are usually linked to a discontinuity in the density derivative between the liquid and two-phase zones. In this work, several methods to tackle numerical problems are developed, described, implemented and compared. In addition, methods available in the literature are also implemented and compared to the proposed approaches. Results suggest that the robustness of the models can be significantly increased with these different methods, at the price of a small increase of the error in the mass and energy balances.
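One family of remedies for the discontinuous density derivative at the liquid/two-phase boundary blends the two property branches over a narrow interval; this is a hedged sketch of the idea only (the blending width, property curves and function names are illustrative, not the paper's implementation):

```python
def smoothstep(t):
    # C1 blending function: 0 -> 1 with zero slope at both ends
    return 3.0 * t**2 - 2.0 * t**3

def blended(h, f_liquid, f_twophase, h_sat, delta):
    # property value with the kink at h_sat smoothed over [h_sat-delta, h_sat+delta]
    if h <= h_sat - delta:
        return f_liquid(h)
    if h >= h_sat + delta:
        return f_twophase(h)
    t = (h - (h_sat - delta)) / (2.0 * delta)
    w = smoothstep(t)
    return (1.0 - w) * f_liquid(h) + w * f_twophase(h)

# toy density branches with a derivative kink at h_sat = 100 (illustrative)
rho_liq = lambda h: 900.0 - 0.5 * h
rho_tp = lambda h: 850.0 - 5.0 * (h - 100.0)
rho = lambda h: blended(h, rho_liq, rho_tp, 100.0, 2.0)
```

Because the blend weight has zero slope at both ends, the combined curve matches each branch and its derivative at the edges of the interval, which removes the discontinuity that causes chattering.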
Norman, Michael L; So, Geoffrey C; Harkness, Robert P
2013-01-01
We describe an extension of the Enzo code to enable the direct numerical simulation of inhomogeneous reionization in large cosmological volumes. By direct we mean all dynamical, radiative, and chemical properties are solved self-consistently on the same mesh, as opposed to a postprocessing approach which coarse-grains the radiative transfer. We do, however, employ a simple subgrid model for star formation, which we calibrate to observations. The numerical method presented is a modification of an earlier method presented in Reynolds et al. Radiation transport is done in the grey flux-limited diffusion (FLD) approximation, which is solved by implicit time integration split off from the gas energy and ionization equations, which are solved separately. This results in a faster and more robust scheme for cosmological applications compared to the earlier method. The FLD equation is solved using the hypre optimally scalable geometric multigrid solver from LLNL. By treating the ionizing radiation as a gri...
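The implicit, split-off treatment of the radiation diffusion operator can be sketched with a backward-Euler step for 1D diffusion; a dense linear solve stands in here for the hypre multigrid solver, and all values are illustrative:

```python
import numpy as np

def implicit_step(E, D, dx, dt):
    # backward Euler: (I - dt*D*Laplacian) E_new = E_old, Dirichlet ends
    n = E.size
    r = D * dt / dx**2
    A = np.eye(n) * (1 + 2 * r)
    A += np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0          # fixed boundary values
    return np.linalg.solve(A, E)

n = 101
x = np.linspace(0.0, 1.0, n)
E = np.sin(np.pi * x)                  # decays like exp(-pi^2 * D * t)
for _ in range(10):
    E = implicit_step(E, D=1.0, dx=x[1] - x[0], dt=0.01)
```

The implicit step is unconditionally stable, so the radiation field can be advanced with time steps set by accuracy rather than by the stiff diffusion limit, which is the robustness gain the abstract describes.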
Reduction of blurring in broadband volume holographic imaging using a deconvolution method
Lv, Yanlu; Zhang, Xuanxuan; Zhang, Dong; Zhang, Lin; Luo, Yuan; Luo, Jianwen
2016-01-01
Volume holographic imaging (VHI) is a promising biomedical imaging tool that can simultaneously provide multi-depth or multispectral information. When a VHI system is probed with a broadband source, the intensity spreads in the horizontal direction, causing degradation of the image contrast. We theoretically analyzed the cause of the horizontal intensity spread, and the analysis was validated by the simulation and experimental results of the broadband impulse response of the VHI system. We proposed a deconvolution method to reduce the horizontal intensity spread and increase the image contrast. Imaging experiments with three different objects, including a bright-field-illuminated USAF test target, a lung tissue specimen, and fluorescent beads, were carried out to test the performance of the proposed method. The results demonstrated that the proposed method can significantly improve the horizontal contrast of images acquired by a broadband VHI system. PMID:27570703
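The deconvolution idea can be sketched with a 1D Wiener filter and an assumed Gaussian blur kernel; the paper's actual system response and regularisation are not reproduced here, and the scene is synthetic:

```python
import numpy as np

n = 256
x = np.arange(n)
truth = np.zeros(n)
truth[[60, 66, 150]] = [1.0, 0.8, 0.5]        # point-like features
kernel = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)
kernel /= kernel.sum()                         # assumed Gaussian system blur

H = np.fft.fft(np.fft.ifftshift(kernel))       # transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * H))

eps = 1e-6                                     # regulariser; in practice chosen
W = np.conj(H) / (np.abs(H) ** 2 + eps)        # from the measured noise level
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * W))
```

The restored trace has a sharper, taller peak than the blurred one, which is the contrast improvement the paper reports for its (measured, not assumed) system response.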
Directory of Open Access Journals (Sweden)
M Safaei
2016-09-01
Full Text Available In the present study, turbulent natural convection and then laminar mixed convection of air flow in a room were first solved, and the calculated outcomes were compared with the results of other researchers. After validating the calculations, the aforementioned flow was solved as a turbulent mixed convection flow using the established turbulence models Standard k-ε, RNG k-ε and RSM. The governing differential equations for this flow were solved with the finite volume method, a specific case of the weighted residual method. The results show that at high Richardson numbers the flow is rather stationary at the center of the enclosure. Moreover, the maximum local Nusselt number decreases as the Richardson number increases; that is, a lower Richardson number corresponds to a higher rate of heat transfer.
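The regime boundary discussed here is set by the Richardson number, Ri = Gr/Re^2 = g*beta*dT*L/U^2; a small illustrative computation (the values are made up, not the paper's cases):

```python
def richardson(g, beta, dT, L, U):
    """Ri = Gr / Re^2 = g * beta * dT * L / U**2."""
    return g * beta * dT * L / U**2

# air at ~300 K in a 1 m enclosure with a 10 K temperature difference
ri_forced = richardson(9.81, 1.0 / 300.0, 10.0, 1.0, U=2.0)    # forced dominated
ri_natural = richardson(9.81, 1.0 / 300.0, 10.0, 1.0, U=0.2)   # buoyancy dominated
```

Ri well below 1 indicates forced convection dominates (higher Nusselt number), while Ri well above 1 indicates buoyancy dominates, matching the trend in the abstract.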
Gaussian moving averages and semimartingales
DEFF Research Database (Denmark)
Basse-O'Connor, Andreas
2008-01-01
In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results are constructive, meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...
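A moving average X_t = ∫ φ(t − s) dW_s can be simulated by convolving Wiener increments with a sampled kernel; with the illustrative kernel φ(u) = exp(−u) (an Ornstein-Uhlenbeck-type example, not one from the paper) the stationary variance should approach ∫ φ(u)² du = 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
dW = rng.normal(0.0, np.sqrt(dt), n)      # Wiener increments
u = np.arange(0.0, 10.0, dt)              # truncated kernel support
phi = np.exp(-u)                          # deterministic kernel
X = np.convolve(dW, phi)[:n]              # X_t ~ sum_j phi(t - s_j) dW_j
var = X[phi.size:].var()                  # stationary sample variance ~ 1/2
```

The sample variance approaches the L2 norm of the kernel, illustrating how the kernel alone determines the law of the stationary moving average.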
An experimental study of rill sediment delivery in purple soil, using the volume-replacement method.
Huang, Yuhan; Chen, Xiaoyan; Luo, Banglin; Ding, Linqiao; Gong, Chunming
2015-01-01
Experimental studies provide a basis for understanding the mechanisms of rill erosion and can provide estimates for parameter values in physical models simulating the erosion process. In this study, we investigated sediment delivery during rill erosion in purple soil. We used the volume-replacement method to measure the volume of eroded soil and hence estimate the mass of eroded soil. A 12 m artificial rill was divided into the following sections: 0-0.5 m, 0.5-1 m, 1-2 m, 2-3 m, 3-4 m, 4-5 m, 5-6 m, 6-7 m, 7-8 m, 8-10 m, and 10-12 m. Erosion trials were conducted with three flow rates (2 L/min, 4 L/min, and 8 L/min) and five slope gradients (5°, 10°, 15°, 20°, and 25°). The eroded rill sections were refilled with water to measure the eroded volume in each section and subsequently calculate the eroded sediment mass. The cumulative sediment mass was used to compute the sediment concentration along the length of the rill. The results show that purple soil sediment concentration increases with rill length before eventually reaching a maximal value; that is, the rate of increase in sediment concentration is greatest at the rill inlet and then gradually slows. Steeper slopes and higher flow rates result in sediment concentration increasing more rapidly along the rill length and the maximum sediment concentration being reached at an earlier location in the rill. Slope gradient and flow rate both result in an increase in maximal sediment concentration and accumulated eroded amount. However, slope gradient has a greater influence on rill erosion than flow rate. The results and experimental method in this study may provide a reference for future rill-erosion experiments.
Institute of Scientific and Technical Information of China (English)
Fan Yuxin; Xia Jian
2014-01-01
A fluid–structure interaction method combining a nonlinear finite element algorithm with a preconditioning finite volume method is proposed in this paper to simulate parachute transient dynamics. This method uses a three-dimensional membrane–cable fabric model to represent a parachute system at a highly folded configuration. The large shape change during parachute inflation is computed by the nonlinear Newton–Raphson iteration and the linear system equation is solved by the generalized minimal residual (GMRES) method. A membrane wrinkling algorithm is also utilized to evaluate the special uniaxial tension state of membrane elements on the parachute canopy. In order to avoid large time expenses during structural nonlinear iteration, the implicit Hilber–Hughes–Taylor (HHT) time integration method is employed. For the fluid dynamic simulations, the Roe and HLLC (Harten–Lax–van Leer contact) scheme has been modified and extended to compute flow problems at all speeds. The lower–upper symmetric Gauss–Seidel (LU-SGS) approximate factorization is applied to accelerate the numerical convergence speed. Finally, the test model of a highly folded C-9 parachute is simulated at a prescribed speed and the results show similar characteristics compared with experimental results and previous literature.
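The Newton–Raphson/GMRES pairing named here can be sketched on a toy nonlinear system; the residual below merely stands in for the structural equations, and scipy's GMRES is used for the inner linear solves:

```python
import numpy as np
from scipy.sparse.linalg import gmres

n = 50
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # toy "stiffness" matrix
b = np.ones(n)

def F(u):
    # toy nonlinear residual standing in for the structural equations
    return A @ u + u**3 - b

def J(u):
    # Jacobian of F
    return A + np.diag(3.0 * u**2)

u = np.zeros(n)
for _ in range(20):                            # Newton-Raphson outer loop
    du, info = gmres(J(u), -F(u), atol=1e-12)  # GMRES inner linear solve
    u += du
    if np.linalg.norm(F(u)) < 1e-10:
        break
```

Each Newton step linearises the residual and hands the resulting system to a Krylov solver, which is attractive when the Jacobian is large and only matrix-vector products are cheap.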
Gritti, Fabrice; Kazakevich, Yuri; Guiochon, Georges
2007-08-17
The hold-up volumes, V(M), of two series of RPLC adsorbents were measured using three different approaches. The first method is based on the difference between the volumes of the empty column tube (150 × 4.6 mm) and of the material packed inside the column. It is considered to give the correct value of V(M). This method combines the results of the BET characterization of the adsorbent before packing (giving the specific pore volume), of carbon element analysis (giving the mass fraction of silica and alkyl bonded chains), of helium pycnometry (providing silica density), and of inverse size exclusion chromatography (ISEC) performed on the packed column (yielding the interparticle volume). The second method is static pycnometry, which consists of weighing the masses of the chromatographic column filled with two distinct solvents of different densities. The last method is based on the thermodynamic definition of the hold-up volume and uses the dynamic minor disturbance method (MDM) with binary eluents. The experimental results of these three non-destructive methods are compared. They exhibit significant, systematic differences. Pycnometry underestimates V(M) by a few percent for adsorbents having a high carbon content. The results of the MDM method depend strongly on the choice of the binary solution used and may underestimate or overestimate V(M). The hold-up volume V(M) of the RPLC adsorbents tested is best measured by the MDM method using a mixture of ethanol and water.
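Static pycnometry, the second method described, reduces to V_M = (m1 − m2)/(ρ1 − ρ2) once the dry column mass cancels; a sketch with illustrative masses and solvent densities:

```python
def holdup_volume(m1, m2, rho1, rho2):
    """Hold-up volume from column masses m1, m2 when filled with
    solvents of density rho1, rho2 (the dry column mass cancels)."""
    return (m1 - m2) / (rho1 - rho2)

# illustrative fillings: dichloromethane (~1.325 g/mL) vs methanol (~0.792 g/mL)
v_m = holdup_volume(m1=25.96, m2=24.90, rho1=1.325, rho2=0.792)   # ~2 mL
```

The larger the density contrast between the two solvents, the less the result is degraded by weighing error, which is why solvent pairs with well-separated densities are preferred.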
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Geophysical methods to investigate and survey unstable volumes along a cliff
Levy, Clara; Baillet, Laurent; Jongmans, Denis; Mourot, Philippe; Hantz, Didier
2010-05-01
We successively instrumented two unstable sites along the 300 m high Urgonian cliff of the southern Vercors massif, French Alps. The first site, a rock column of 21000 m³, collapsed in November 2007, 5 months after the beginning of measurements. The experiment showed that information contained in seismic noise can be used for hazard assessment when considering the potential failure of an overhanging rock column. Indeed, the study of seismic noise recorded prior to the rock fall revealed that low resonance frequencies follow a precursory pattern, as they decrease significantly, from 3.4 Hz to 2.6 Hz, before the collapse. We successfully reproduced this phenomenon with 2D numerical modelling of rock falls. Numerical simulation results pointed out that this decrease depends on the column-to-mass contact stiffness, which is controlled by the remaining rock bridges. Impulsive signals, which could be attributed to rock fracturing, have also been studied. P and S waves were identified for 40 events, allowing wave polarisation analysis and preliminary event location. Seismic sources able to trigger the vibration of the rock column were located along the broken plane and probably resulted from micro-cracks along rock bridges. From this first site study, we tried to closely follow the evolution of the natural frequencies at the second site, which also consists of a rock column decoupling from the mass with an open fracture in the rear. The value of the first eigenfrequency (about 7.6 Hz in June 2008) shows that the unstable volume is probably much smaller than for the first site. This evaluation is consistent with the estimated volume using a DEM derived from LIDAR scans (about 1000 m³). A detailed investigation of the first eigenfrequency shows that its variation is also correlated with temperature and frost. After roughly one year of stability, the average value of the first eigenfrequency clearly shows a drift with the temperature variation pattern and an irreversible decrease of
A GIS-based method to determine the volume of lahars: Popocatépetl volcano, Mexico
Muñoz-Salinas, E.; Renschler, C. S.; Palacios, D.
2009-10-01
Lahars are flows composed of water and volcanic sediment which are often dangerous for people living near volcanoes. Therefore, a reliable estimation of lahar volume is needed to effectively assess the risk. This paper proposes a new method to calculate the volume of lahar sediments found in channels of volcanic landscapes. The method requires surveys of several cross-sections along a gorge, a Digital Elevation Model of the study area and measurements of the thickness of the lahar deposits. With these data and a Geographical Information System (GIS), the volume is calculated for the erosive section, where deposit volume is divided into oblique parallelepipeds, and the sedimentary section, where deposit volume is divided into polyhedrons. This new method was applied to the 1997 and 2001 lahars that occurred in the channel of a gorge at Popocatépetl volcano, Mexico. The estimated volumes are 1.85 × 10⁵ and 1.6 × 10⁵ m³, respectively, which is about 40% less than those obtained by the traditional method that multiplies lahar flow-path length, sediment width and sediment depth. This observation suggests that the traditional method tends to overestimate volumes.
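The contrast between a cross-section-based estimate and the traditional length × width × depth product can be illustrated with made-up survey data (the actual method partitions the deposit into oblique parallelepipeds and polyhedrons in a GIS, which is not reproduced here):

```python
import numpy as np

# cross-section deposit areas (m^2) surveyed at stations along the channel
s = np.array([0.0, 50.0, 120.0, 200.0, 300.0, 420.0])   # distance along path (m)
area = np.array([8.0, 12.0, 15.0, 11.0, 9.0, 5.0])      # deposit area (m^2)

# integrate area along the flow path (trapezoidal rule between stations)
v_sections = float(np.sum(0.5 * (area[1:] + area[:-1]) * np.diff(s)))

# traditional estimate: path length x maximum width x maximum depth
width, depth = 6.0, 3.0                                  # m, illustrative
v_traditional = s[-1] * width * depth
```

Because the traditional product assumes the maximum width and depth persist along the whole path, it systematically exceeds the section-integrated volume, consistent with the roughly 40% overestimate reported.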
Institute of Scientific and Technical Information of China (English)
王同科
2002-01-01
In this paper, a high accuracy finite volume element method is presented for the two-point boundary value problem of a second order ordinary differential equation, which differs from the high order generalized difference methods. It is proved that the method has an optimal order error estimate O(h³) in the H1 norm. Finally, two examples show that the method is effective.
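For context, a basic vertex-centred finite volume scheme for −u″ = f is second order; the empirical check below sketches how such convergence orders are measured (the paper's high-accuracy variant, proved to reach O(h³) in the H1 norm, is not reproduced):

```python
import numpy as np

def max_error(n):
    # vertex-centred finite volume scheme for -u'' = f, u(0) = u(1) = 0:
    # on a uniform mesh the flux balance over each control volume reduces
    # to the familiar three-point stencil
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]       # interior nodes
    f = np.pi**2 * np.sin(np.pi * x)             # exact solution sin(pi x)
    m = n - 1
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e_coarse, e_fine = max_error(40), max_error(80)
order = np.log2(e_coarse / e_fine)               # ~2 for this basic scheme
```

Halving h and taking log2 of the error ratio gives the observed order, which is the standard way the paper's O(h³) claim would be verified numerically.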
Qin, J. J.; Jones, M.; Shiota, T.; Greenberg, N. L.; Firstenberg, M. S.; Tsujino, H.; Zetts, A. D.; Sun, J. P.; Cardon, L. A.; Odabashian, J. A.; Flamm, S. D.; White, R. D.; Panza, J. A.; Thomas, J. D.
2000-01-01
AIM: The aim of this study was to investigate the feasibility and accuracy of using symmetrically rotated apical long axis planes for the determination of left ventricular (LV) volumes with real-time three-dimensional echocardiography (3DE). METHODS AND RESULTS: Real-time 3DE was performed in six sheep during 24 haemodynamic conditions with electromagnetic flow measurements (EM), and in 29 patients with magnetic resonance imaging measurements (MRI). LV volumes were calculated by Simpson's rule with five 3DE methods (i.e. apical biplane, four-plane, six-plane, nine-plane (in which the angle between each long axis plane was 90 degrees, 45 degrees, 30 degrees or 20 degrees, respectively) and standard short axis views (SAX)). Real-time 3DE correlated well with EM for LV stroke volumes in animals (r=0.68-0.95) and with MRI for absolute volumes in patients (r-values=0.93-0.98). However, agreement between MRI and the apical nine-plane, six-plane, and SAX methods in patients was better than with the apical four-plane and biplane methods (mean difference = -15, -18, -13, vs. -31 and -48 ml for end-diastolic volume, respectively). Measurement methods of real-time 3DE correlated well with reference standards for calculating LV volumes. Balancing accuracy and the time required for these LV volume measurements, the apical six-plane method is recommended for clinical use.
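Simpson's rule over rotated planes can be sketched with the biplane method of discs, in which each short-axis slice is an ellipse whose two diameters come from two rotated apical planes; the synthetic check below uses a sphere, for which the true volume is known (this is an illustration of the principle, not the clinical protocol):

```python
import math

def biplane_simpson(d1, d2, length):
    """Sum elliptical discs with diameters d1[i], d2[i] along the long axis."""
    dz = length / len(d1)
    return sum(math.pi * a * b / 4.0 * dz for a, b in zip(d1, d2))

# synthetic check: a sphere of radius 4 (cm) sliced into 100 discs
R, n = 4.0, 100
zs = [(i + 0.5) * 2.0 * R / n - R for i in range(n)]       # disc mid-heights
d = [2.0 * math.sqrt(max(R * R - z * z, 0.0)) for z in zs]  # disc diameters
vol = biplane_simpson(d, d, 2.0 * R)                        # ~ (4/3) pi R^3
```

Adding more rotated planes refines the elliptical approximation of each slice, which is why the multi-plane methods agreed better with MRI than the biplane method.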
Kim, Euitae; Shidahara, Miho; Tsoumpas, Charalampos; McGinnity, Colm J; Kwon, Jun Soo; Howes, Oliver D; Turkheimer, Federico E
2013-06-01
We validated the use of a novel image-based method for partial volume correction (PVC), structural-functional synergistic resolution recovery (SFS-RR) for the accurate quantification of dopamine synthesis capacity measured using [(18)F]DOPA positron emission tomography. The bias and reliability of SFS-RR were compared with the geometric transfer matrix (GTM) method. Both methodologies were applied to the parametric maps of [(18)F]DOPA utilization rates (ki(cer)). Validation was first performed by measuring repeatability on test-retest scans. The precision of the methodologies instead was quantified using simulated [(18)F]DOPA images. The sensitivity to the misspecification of the full-width-half-maximum (FWHM) of the scanner point-spread-function on both approaches was also assessed. In the in-vivo data, the ki(cer) was significantly increased by application of both PVC procedures while the reliability remained high (intraclass correlation coefficients >0.85). The variability was not significantly affected by either PVC approach (<10% variability in both cases). The corrected ki(cer) was significantly influenced by the FWHM applied in both the acquired and simulated data. This study shows that SFS-RR can effectively correct for partial volume effects to a comparable degree to GTM but with the added advantage that it enables voxelwise analyses, and that the FWHM used can affect the PVC result indicating the importance of accurately calibrating the FWHM used in the recovery model.
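The geometric transfer matrix (GTM) comparison method models observed regional means as point-spread-function-mixed true values, o = W t, and corrects by inverting W; a toy 2×2 illustration with made-up mixing fractions:

```python
import numpy as np

# toy mixing matrix: each row gives the fractions of true regional signal
# that the scanner PSF deposits into that observed region (illustrative)
W = np.array([[0.80, 0.20],
              [0.15, 0.85]])
t_true = np.array([2.0, 1.0])                # true regional values
observed = W @ t_true                        # partial-volume-degraded means
corrected = np.linalg.solve(W, observed)     # GTM-corrected regional values
```

GTM recovers region-level means exactly when W is known, but it cannot produce voxelwise maps, which is the advantage SFS-RR adds; both methods inherit sensitivity to the FWHM used to build the mixing model.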
Modelling of Evaporator in Waste Heat Recovery System using Finite Volume Method and Fuzzy Technique
Directory of Open Access Journals (Sweden)
Jahedul Islam Chowdhury
2015-12-01
Full Text Available The evaporator is an important component in the Organic Rankine Cycle (ORC)-based Waste Heat Recovery (WHR) system, since the effectiveness of its heat transfer is reflected in the efficiency of the system. When the WHR system operates under supercritical conditions, the heat transfer mechanism in the evaporator is unpredictable due to the change of the thermo-physical properties of the fluid with temperature. Although the conventional finite volume model can successfully capture those changes in the evaporator of the WHR process, the computation time for this method is high. To reduce the computation time, this paper develops a new fuzzy-based evaporator model and compares its performance with the finite volume method. The results show that the fuzzy technique can be applied to predict the output of the supercritical evaporator in the waste heat recovery system and can significantly reduce the required computation time. The proposed model, therefore, has the potential to be used in real-time control applications.
Measuring tree height and preparation volume table using an innovative method.
Lotfalian, Majid; Nouri, Zahra; Kooch, Yahya; Zobeiri, Mahmoud
2007-10-15
Zarbin (Cupressus sempervirens var. horizontalis), with its unique characteristics, is one of the most valuable species found in the central Alborz area in northern Iran, especially in the Roodbar-Manjil area and the Chaloos-Hassanabad valley, and it extends from the Zarringol area to Gorgan. Although the distribution areas of this species are protected, these forests have been invaded by villagers who use this valuable wood; for this reason, trees with DBH > 30 cm are extremely rare in the Roodbar area. To quantify the stand, the current research calculates a volume table for the species in the Roodbar area, to serve as the basis for any calculation of stand volume in the region. For this purpose, trees were sampled using the line sampling method, and after estimating the form factor, a tarif table was prepared. In this study, a new method for measuring tree height is also presented: instead of measuring the slope distance from the observer to the tree (which is difficult in young conifers because of branches at lower heights), the distance between the eye level of the observer and the tree butt is measured. This is easier to do, decreases field time, and increases the accuracy of measurement and calculation.
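The proposed measurement can be sketched trigonometrically: with the taped eye-to-butt distance L, the elevation angle to the top and the depression angle to the butt together give the height. The geometry below is an assumed reading of the method, with synthetic numbers:

```python
import math

def tree_height(L, alpha_deg, beta_deg):
    """Height from eye-to-butt distance L, elevation angle to the top
    (alpha) and depression angle to the butt (beta), both in degrees."""
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    horiz = L * math.cos(b)                  # horizontal distance to the tree
    return horiz * math.tan(a) + L * math.sin(b)

# synthetic check: eye 2 m above the butt, 20 m horizontal, 15 m tree
L = math.hypot(20.0, 2.0)
alpha = math.degrees(math.atan2(13.0, 20.0))  # top is 13 m above eye level
beta = math.degrees(math.atan2(2.0, 20.0))
h = tree_height(L, alpha, beta)               # recovers 15 m
```

Taping to the butt avoids sighting through low branches, which is the practical advantage the abstract claims.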
Comparing volume of fluid and level set methods for evaporating liquid-gas flows
Palmore, John; Desjardins, Olivier
2016-11-01
This presentation demonstrates three numerical strategies for simulating liquid-gas flows undergoing evaporation. The practical aim of this work is to choose a framework capable of simulating the combustion of liquid fuels in an internal combustion engine. Each framework is analyzed with respect to its accuracy and computational cost. All simulations are performed using a conservative, finite volume code for simulating reacting, multiphase flows under the low-Mach assumption. The strategies used in this study correspond to different methods for tracking the liquid-gas interface and handling the transport of the discontinuous momentum and vapor mass fractions fields. The first two strategies are based on conservative, geometric volume of fluid schemes using directionally split and un-split advection, respectively. The third strategy is the accurate conservative level set method. For all strategies, special attention is given to ensuring the consistency between the fluxes of mass, momentum, and vapor fractions. The study performs three-dimensional simulations of an isolated droplet of a single component fuel evaporating into air. Evaporation rates and vapor mass fractions are compared to analytical results.
An implicit control-volume finite element method for well-reservoir modelling
Pavlidis, Dimitrios; Salinas, Pablo; Xie, Zhihua; Pain, Christopher; Matar, Omar
2016-11-01
Here a novel implicit approach (embodied within IC-Ferst) is presented for modelling wells, potentially with a large number of laterals, within reservoirs. IC-Ferst is a conservative and consistent control-volume finite element method (CV-FEM) model that uses fully unstructured, geology-conforming meshes with anisotropic mesh adaptivity. As far as the wells are concerned, a multi-phase/multi-well approach is taken here, in which well systems are represented as phases. Phase volume fraction conservation equations are solved for in both the reservoir and the wells; in addition, the flow field within the wells is also solved for. A second novel aspect of the work is the combination of modelling and resolving of the motherbore and laterals, in which case wells do not have to be explicitly discretised in space. This combination proves to be accurate (in many situations) as well as computationally efficient. The method is applied to a number of multi-phase reservoir problems in order to gain insight into the effectiveness, in terms of production rate, of perforated laterals. Model results are compared with semi-analytical solutions for simple cases and with industry-standard codes for more complicated cases. EPSRC UK Programme Grant MEMPHIS (EP/K003976/1).
Frohlich, Clifford A.
1992-11-01
When seismic events occur in spatially compact clusters, the volume and geometric characteristics of these clusters often provide information about the relative effectiveness of different location methods, or about physical processes occurring within the hypocentral region. This report defines and explains how to determine the convex polyhedron of minimum volume (CPMV) surrounding a set of points. We evaluate both single-event and joint hypocenter determination (JHD) relocations for three rather different clusters of seismic events: (1) nuclear explosions from Mururoa relocated using P and PKP phases reported by the ISC; (2) intermediate-depth earthquakes near Bucaramanga, Colombia, relocated using P and PKP phases reported by the ISC; and (3) shallow earthquakes near Vanuatu (formerly the New Hebrides), relocated using P and S phases from a local station network. This analysis demonstrates that different location methods markedly affect the volume of the CPMV; however, volumes for JHD relocations are not always smaller than volumes for single-event relocations.
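The CPMV of a point cluster is the volume of its 3-D convex hull. A minimal sketch (assuming SciPy's Qhull wrapper is available, with made-up coordinates rather than the report's hypocenters) might look like:

```python
import numpy as np
from scipy.spatial import ConvexHull

def cpmv_volume(points):
    """Volume of the convex polyhedron of minimum volume (CPMV)
    enclosing a set of 3-D points (e.g. hypocenters in km)."""
    hull = ConvexHull(np.asarray(points, dtype=float))
    return hull.volume

# Synthetic cluster: corners of a 2 km cube, so the CPMV volume is 8 km^3
cube = [(x, y, z) for x in (0.0, 2.0) for y in (0.0, 2.0) for z in (0.0, 2.0)]
print(cpmv_volume(cube))
```

Comparing this volume across single-event and JHD relocations of the same event set is then a one-line exercise.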
A finite-volume numerical method to calculate fluid forces and rotordynamic coefficients in seals
Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.
1992-01-01
A numerical method to calculate rotordynamic coefficients of seals is presented. The flow in a seal is solved by using a finite-volume formulation of the full Navier-Stokes equations with appropriate turbulence models. The seal rotor is perturbed along a diameter such that the position of the rotor is a sinusoidal function of time. The resulting flow domain changes with time, and the time-dependent flow in the seal is solved using a space conserving moving grid formulation. The time-varying fluid pressure reaction forces are then linked with the rotor center displacement, velocity and acceleration to yield the rotordynamic coefficients. Results for an annular seal are presented, and compared with experimental data and other more simplified numerical methods.
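The final identification step described above, linking the pressure reaction forces to rotor displacement, velocity, and acceleration, amounts to a linear least-squares fit of F = -Kx - Cx' - Mx''. A sketch with synthetic forces and hypothetical coefficient values (not the paper's seal data); note that forces at two whirl frequencies are needed, since a single frequency cannot separate K from Mw^2:

```python
import numpy as np

# Hypothetical ground-truth seal coefficients: stiffness K [N/m],
# damping C [N s/m], added mass M [kg]
K, C, M = 5.0e5, 2.0e3, 1.5

def reaction_force(t, a, w):
    """Fluid reaction force for a sinusoidal rotor whirl x = a sin(w t):
    F = -K x - C x' - M x''."""
    x = a * np.sin(w * t)
    v = a * w * np.cos(w * t)
    acc = -a * w**2 * np.sin(w * t)
    return -K * x - C * v - M * acc

a = 1e-4                      # whirl amplitude [m]
rows, rhs = [], []
for w in (100.0, 300.0):      # two whirl frequencies [rad/s]
    t = np.linspace(0.0, 2 * np.pi / w, 50, endpoint=False)
    x = a * np.sin(w * t)
    v = a * w * np.cos(w * t)
    acc = -a * w**2 * np.sin(w * t)
    rows.append(np.column_stack([-x, -v, -acc]))
    rhs.append(reaction_force(t, a, w))

# Least-squares fit recovers the rotordynamic coefficients
A = np.vstack(rows)
b = np.concatenate(rhs)
K_fit, C_fit, M_fit = np.linalg.lstsq(A, b, rcond=None)[0]
print(K_fit, C_fit, M_fit)
```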
Technical Report: Modeling of Composite Piezoelectric Structures with the Finite Volume Method
Bolborici, Valentin; Pugh, Mary C
2011-01-01
Piezoelectric devices, such as piezoelectric traveling wave rotary ultrasonic motors, have composite piezoelectric structures. A composite piezoelectric structure consists of a combination of two or more bonded materials, where at least one of them is a piezoelectric transducer. Numerical modeling of piezoelectric structures has been done in the past mainly with the finite element method. Alternatively, a finite volume based approach offers the following advantages: (a) the ordinary differential equations resulting from the discretization process can be interpreted directly as corresponding circuits and (b) phenomena occurring at boundaries can be treated exactly. This report extends the work of IEEE Transactions on UFFC 57(2010)7:1673-1691 by presenting a method for implementing the boundary conditions between the bonded materials in composite piezoelectric structures. The report concludes with one modeling example of a unimorph structure.
Gas permeation measurement under defined humidity via constant volume/variable pressure method
Jan Roman, Pauls
2012-02-01
Many industrial gas separations in which membrane processes are feasible entail high water vapour contents, as in CO2 separation from flue gas in carbon capture and storage (CCS), or in biogas/natural gas processing. Studying the effect of water vapour on gas permeability through polymeric membranes is essential for materials design and optimization of these membrane applications. In particular, for amine-based CO2-selective facilitated transport membranes, water vapour is necessary for carrier-complex formation (Matsuyama et al., 1996; Deng and Hägg, 2010; Liu et al., 2008; Shishatskiy et al., 2010) [1-4]. But also conventional polymeric membrane materials can vary their permeation behaviour due to water-induced swelling (Potreck, 2009) [5]. Here we describe a simple approach to gas permeability measurement in the presence of water vapour, in the form of a modified constant volume/variable pressure method (pressure increase method). © 2011 Elsevier B.V.
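In the constant volume/variable pressure method, permeability follows from the steady-state pressure rise in a fixed downstream volume. The sketch below applies the standard evaluation formula P = (V_d · l)/(A · R · T · p_feed) · dp/dt with entirely hypothetical rig parameters; it is not taken from the paper:

```python
R = 8.314  # universal gas constant [J/(mol K)]

def permeability(dpdt, V_d, l, A, T, p_feed):
    """Gas permeability in mol m / (m^2 s Pa) from the steady-state
    permeate pressure rise dp/dt [Pa/s], downstream volume V_d [m^3],
    membrane thickness l [m], area A [m^2], temperature T [K], and
    feed pressure p_feed [Pa]."""
    return (V_d * l) / (A * R * T * p_feed) * dpdt

# Hypothetical rig: 30 cm^3 downstream volume, 100 um membrane,
# 10 cm^2 area, 308 K, 2 bar feed, 0.5 Pa/s pressure rise
P = permeability(dpdt=0.5, V_d=30e-6, l=100e-6,
                 A=10e-4, T=308.0, p_feed=2e5)

BARRER = 3.348e-16  # mol m/(m^2 s Pa) per barrer
print(P / BARRER)   # permeability expressed in barrer
```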
Precision of a new bedside method for estimation of the circulating blood volume
DEFF Research Database (Denmark)
Christensen, P; Eriksen, B; Henneberg, S W
1993-01-01
The present study is a theoretical and experimental evaluation of a modification of the carbon monoxide method for estimation of the circulating blood volume (CBV) with respect to the precision of the method. The CBV was determined from measurements of the CO-saturation of hemoglobin before...... and after ventilation with a gas mixture containing 20-50 ml of CO for a period of 10-15 min. A special Water's to and fro system was designed in order to avoid any leakage when measuring during intermittent positive pressure ventilation (IPPV). Blood samples were taken before and immediately after......, determination of CBV can be performed with an amount of CO that gives rise to a harmless increase in the carboxyhemoglobin concentration.(ABSTRACT TRUNCATED AT 250 WORDS)...
Energy Technology Data Exchange (ETDEWEB)
Smith, R.J.; Karp, J.S. [Univ. of Pennsylvania, Philadelphia, PA (United States). Dept. of Radiology
1996-06-01
Randoms subtraction in a volume imaging PET scanner is a significant problem due to the high singles count rates experienced. The delayed coincidence method requires double counting of randoms events and results in a lowered count rate capability. Calculations based on detector singles count rates require complex corrections for count-rate-dependent livetime and event acceptance, owing to the camera coincidence processing between the detector and rebinned randoms count rates. The profile distribution method has been used to estimate and subtract both the scatter and randoms backgrounds, but this method is a compromise and couples these two sources of background together. In order to avoid these problems and provide accurate subtraction of both the distribution and the magnitude of randoms contamination in the scan data, the authors have developed an alternative singles-based method. The singles distributions are measured across the detectors and are used to construct a randoms distribution sinogram. This distribution is scaled to the appropriate rebinned randoms count rate by means of a lookup table of randoms count rate vs. detector singles count rate, generated from phantom calibrations. The advantages of performing randoms subtraction by this method are: (1) there is no increase in camera deadtime; (2) the method compensates for nonuniformities in randoms distributions due to both the activity distribution and the nonuniform geometric response of the camera for on and off bank pairs; and (3) it deals with randoms subtraction independently of scatter, so that different scatter correction routines may then be applied to the data.
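The singles-based estimate can be sketched as an outer product of the measured singles profiles (since the randoms rate for a detector pair is proportional to the product of its singles rates), scaled to a total randoms rate from the calibration lookup. The numbers below are invented for illustration:

```python
import numpy as np

def randoms_sinogram(singles, total_randoms_rate):
    """Pairwise randoms estimate: distribution from the outer product of
    the singles profile, scaled to a given total randoms rate (which in
    practice would come from a calibration lookup table)."""
    dist = np.outer(singles, singles).astype(float)
    np.fill_diagonal(dist, 0.0)   # no coincidences within a single detector
    return dist * (total_randoms_rate / dist.sum())

# Hypothetical singles rates per detector bank [counts/s]
singles = np.array([1.0e5, 1.2e5, 0.9e5, 1.1e5])
r = randoms_sinogram(singles, total_randoms_rate=5.0e3)
print(r.sum())   # ~5000, the total randoms rate it was scaled to
```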
Gross, M. R.; Manda, A. K.
2004-12-01
Karst limestones are characterized by solution-enhanced macropores and conduits that lead to exceptional heterogeneity at the aquifer scale. The interconnected network of solution cavities often results in a conduit flow regime that bypasses the less permeable rock matrix. Efforts to manage and protect karst aquifers, which are vital water resources in many parts of the world, will benefit from meaningful characterizations of the heterogeneity inherent in these formations. To this end, we propose a new method to estimate the representative elementary volume (REV) for macroporosity within karst aquifers using techniques borrowed from remote sensing and geospatial analysis. The REV represents a sampling window in which numerous measurements of a highly-variable property (e.g., porosity, hydraulic conductivity) can be averaged into a single representative value of statistical and physical significance. High-resolution borehole images are classified into binary images consisting of pixels designated as either rock matrix or pore space. A two-dimensional porosity is calculated by summing the total area occupied by pores within a rectangular sampling window placed over the binary image. Small sampling windows quantify the heterogeneous nature of porosity distribution in the aquifer, whereas large windows provide an estimate of overall porosity. Applying this procedure to imagery taken from the Biscayne aquifer of south Florida yields a macroporosity of ~40%, considerably higher than the ~28% porosity measured from recovered core samples. Geospatial analysis may provide the more reliable estimate because it incorporates large solution cavities and conduits captured by the borehole image. The REV is estimated by varying the size of sampling windows around prominent conduits and evaluating the change in porosity as a function of window size. Average porosities decrease systematically with increasing sampling size, eventually converging to a constant value and thus
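The sampling-window procedure can be sketched as follows, using a synthetic binary image in place of a classified borehole log; the window sizes and the ~40% target porosity are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic classified borehole image: 1 = pore space, 0 = rock matrix,
# drawn so that the overall macroporosity is ~40%
img = (rng.random((400, 400)) < 0.4).astype(int)

def window_porosity(image, half, center):
    """2-D porosity inside a square sampling window of half-width `half`
    pixels centered at `center` (clipped at the image edges)."""
    r, c = center
    win = image[max(r - half, 0):r + half + 1,
                max(c - half, 0):c + half + 1]
    return win.mean()

# Grow the window around one point and watch porosity converge: the
# window size at which the value stabilizes is an estimate of the REV.
center = (200, 200)
for half in (2, 5, 10, 25, 50, 100):
    print(half, window_porosity(img, half, center))
```

On real imagery the small windows would fluctuate strongly near conduits before converging, which is exactly the behavior used to pick the REV.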
The event-driven constant volume method for particle coagulation dynamics
Institute of Scientific and Technical Information of China (English)
ZHAO HaiBo; ZHENG ChuGuang
2008-01-01
The Monte Carlo (MC) method, which tracks small numbers of dispersed simulation particles and then describes the dynamic evolution of large numbers of real particles, constitutes an important class of methods for the numerical solution of population balance modeling. Particle coagulation dynamics is a complex task for MC. Event-driven MC exhibits higher accuracy and efficiency than time-driven MC on the whole. However, the available event-driven MCs track an "equally weighted simulation particle population" and maintain the number of simulated particles within bounds at the cost of "regulating" the computational domain, which results in some constraints and drawbacks. This study designed the procedure of a "differently weighted fictitious particle population" and the corresponding coagulation rule for differently weighted fictitious particles. A new event-driven MC method was then proposed to describe the coagulation dynamics between differently weighted fictitious particles, in which a "constant number scheme" and a "stepwise constant number scheme" were developed to maintain the number of fictitious particles within bounds as well as a constant computational domain. The MC is named the event-driven constant volume (EDCV) method. A quantitative comparison among several popular MCs shows that the EDCV method has advantages in computational precision and computational efficiency over other available MCs.
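A much-simplified sketch of the constant-number idea that the EDCV method builds on: after each coagulation event a randomly chosen survivor is duplicated so the simulated population size stays fixed. This toy version uses equally weighted particles and a constant kernel; the paper's actual contribution is the generalization to differently weighted fictitious particles, which is not reproduced here.

```python
import random

def constant_number_coagulation_step(volumes, kernel):
    """One coagulation event under a constant-number scheme (sketch):
    a pair is accepted with probability proportional to the coagulation
    kernel, the pair merges, and a random survivor is copied so the
    simulated particle count stays constant."""
    n = len(volumes)
    kmax = max(kernel(a, b) for a in volumes for b in volumes)
    while True:  # acceptance-rejection selection of the coagulating pair
        i, j = random.sample(range(n), 2)
        if random.random() < kernel(volumes[i], volumes[j]) / kmax:
            break
    volumes[i] = volumes[i] + volumes[j]        # merge particle j into i
    volumes[j] = volumes[random.randrange(n)]   # duplicate to keep N fixed
    return volumes

random.seed(1)
v = [1.0] * 100                                 # monodisperse start
for _ in range(50):
    constant_number_coagulation_step(v, kernel=lambda a, b: 1.0)
print(sum(v) / len(v))   # mean particle volume grows as coagulation proceeds
```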
Turco, Dario; Busutti, Marco; Mignani, Renzo; Magistroni, Riccardo; Corsi, Cristiana
2017-01-01
In recent times, the scientific community has shown increasing interest in treatments aimed at slowing the progression of autosomal dominant polycystic kidney disease (ADPKD). Therefore, in this paper, we test and evaluate the performance of several available methods for total kidney volume (TKV) computation in ADPKD patients, from echography to MRI, in order to optimize patient classification. Two methods based on geometric assumptions (mid-slice [MS] and ellipsoid [EL]) and a third based on true contour detection were tested on 40 ADPKD patients at different disease stages using MRI. The EL method was also tested using ultrasound images in a subset of 14 patients. Their performance was compared against TKVs derived from reference manual segmentation of MR images. Patient clinical classification was also performed based on the computed volumes. Kidney volumes derived from echography significantly underestimated the reference volumes. The geometry-based methods applied to MR images had similarly acceptable results. The highly automated method showed better performance; volume assessment was accurate and reproducible. Importantly, classification resulted in 79, 13, 10, and 2.5% misclassification using kidney volumes obtained from echo and from MRI applying the EL, the MS, and the highly automated method, respectively. Considering that the image-based technique is the only approach providing a 3D patient-specific kidney model and allowing further analysis, including cyst volume computation and monitoring of disease progression, we suggest that geometric assumptions (e.g., the EL method) should be avoided. The contour-detection approach should be used for a reproducible and precise morphologic classification of the renal volume of ADPKD patients. © 2017 S. Karger AG, Basel.
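The EL method reduces TKV to a single geometric formula, V = π/6 · L · W · D, which is exactly why it cannot capture the irregular shape of a polycystic kidney. A sketch with invented axis measurements:

```python
import math

def ellipsoid_kidney_volume(length, width, depth):
    """Ellipsoid (EL) method: TKV estimate from three orthogonal kidney
    axis measurements, V = pi/6 * L * W * D. Purely geometric; the paper
    argues contour detection should be preferred for classification."""
    return math.pi / 6.0 * length * width * depth

# Hypothetical enlarged ADPKD kidney: 18 x 10 x 9 cm axis measurements
v = ellipsoid_kidney_volume(18.0, 10.0, 9.0)   # volume in cm^3
print(round(v, 1))  # -> 848.2
```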
Modification of averaging process in GR: Case study flat LTB
Khosravi, Shahram; Mansouri, Reza
2007-01-01
We study the volume averaging of inhomogeneous metrics within GR and discuss its shortcomings, such as gauge dependence, singular behavior as a result of caustics, and causality violations. To remedy these shortcomings, we suggest some modifications to this method. As a case study, we focus on the inhomogeneous model of structured FRW based on a flat LTB metric. The effect of averaging is then studied in terms of an effective backreaction fluid. This backreaction fluid turns out to behave like a dark matter component, instead of dark energy as claimed in the literature.
Institute of Scientific and Technical Information of China (English)
莫则尧
2001-01-01
A Multilevel Averaging Weight (MAW) dynamic load balancing method, suitable for both homogeneous and heterogeneous parallel computing environments, is presented in this paper to solve the one-dimensional dynamic load imbalance problems arising from the parallel Lagrangian numerical simulation of multi-material unsteady fluid dynamics. First, a one-dimensional load imbalance model is designed to simplify the theoretical analysis of the robustness of the MAW method. In this model, the domain is uniformly divided into grid cells, and each grid cell is assumed to require a different CPU time to process. Given P processors, the task is to find an efficient domain decomposition strategy that keeps the loads balanced among the subdomains assigned to individual processors. Second, we present a load balancing method, the Averaging Weight (AW) method. The theoretical analysis shows that, when the number of processors equals 2, the AW method efficiently adjusts the system from a very imbalanced state to a very balanced state in 2-4 iterations. Unfortunately, this conclusion cannot be generalized to larger numbers of processors. Building on the idea of the AW method, we therefore designed another load balancing method, the Multilevel Averaging Weight method. A similar theoretical analysis shows that this method can balance the load in C·log P iterations for any number of processors, where P is the number of processors and C is the number of iterations required by the AW method when P = 2. This is usually sufficient to efficiently track fluctuations in the load imbalance as the parallel numerical simulation progresses. Moreover, both the AW and MAW methods are suitable for both homogeneous and heterogeneous parallel computing environments. Third, we organize numerical experiments for three types of load balancing models and obtain conclusions that coincide with those of the theoretical analysis. At last, we
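For P = 2, the boundary adjustment can be sketched as moving the 1-D domain split toward the half-total prefix sum of per-cell costs. The cost values below are hypothetical, and this one-shot prefix-sum version is only a stand-in for the paper's iterative AW scheme; the MAW method applies such pairwise balancing across ~log P levels:

```python
def aw_balance(costs, boundary):
    """One balancing step for two processors on a 1-D cell array:
    shift the split point to where the prefix sum of per-cell costs
    first reaches half of the total cost."""
    total = sum(costs)
    prefix = 0.0
    for i, c in enumerate(costs):
        prefix += c
        if prefix >= total / 2.0:
            return i + 1     # new boundary: cells [0:i+1] vs [i+1:]
    return boundary

# Hypothetical imbalanced load: right half of the domain is 4x as expensive
costs = [1.0] * 50 + [4.0] * 50
b = 50                       # naive even split -> loads 50 vs 200
b = aw_balance(costs, b)
left, right = sum(costs[:b]), sum(costs[b:])
print(b, left, right)        # -> 69 126.0 124.0 (balanced to within one cell)
```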
Barak, C; Leviatan, Y; Inbar, G F; Hoekstein, K N
1992-09-01
To investigate stroke volume estimation using the electrical impedance measurement technique, three models of the ventricle were simulated. A four-electrode impedance catheter was used: two electrodes to set up an electric field in the model and the other two to measure the potential difference. A new approach, itself an application of the quasi-static case of a method used to solve electromagnetic field problems, was used to solve the electric field in the model. The behaviour of the estimation is examined with respect to the electrode configuration on the catheter and to the catheter location with respect to the ventricle walls. Cardiac stroke volume estimation was found to be robust to catheter location, generating a 10 per cent error for an offset of the catheter by 40 per cent from the chamber axis and a rotation of 20 degrees with respect to the axis. The electrode configuration has a dominant effect on the sensitivity and accuracy of the estimation. Certain configurations gave high accuracy, whereas in others high sensitivity was found with lower accuracy. This led to the conclusion that the electrode configuration should be carefully chosen according to the desired criteria.
A new method for volume segmentation of PET images, based on possibility theory.
Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Lopes, Renaud; Huglo, Damien; Stute, Simon; Vermandel, Maximilien
2011-02-01
18F-fluorodeoxyglucose positron emission tomography (18FDG PET) has become an essential technique in oncology. Accurate segmentation and uptake quantification are crucial in order to enable objective follow-up, the optimization of radiotherapy planning, and therapeutic evaluation. We have designed and evaluated a new, nearly automatic and operator-independent segmentation approach. This incorporated possibility theory, in order to take into account the uncertainty and inaccuracy inherent in the image. The approach remained independent of PET facilities since it did not require any preliminary calibration. Good results were obtained from phantom images [percent error = 18.38% (mean) ± 9.72% (standard deviation)]. Results on simulated and anatomopathological data sets were quantified using different similarity measures and showed the method was efficient (simulated images: Dice index = 82.18% ± 13.53% for SUV = 2.5). The approach could, therefore, be an efficient and robust tool for uptake volume segmentation, and lead to new indicators for measuring volume of interest activity.
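The Dice index used for the quantitative evaluation above is straightforward to compute; a sketch on two toy binary masks (not the paper's data):

```python
import numpy as np

def dice_index(seg, ref):
    """Dice similarity index between two binary volumes:
    2|A∩B| / (|A| + |B|), in [0, 1]."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Two 16-voxel squares offset by one voxel in each direction
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(dice_index(a, b))   # -> 0.5625  (overlap of 9 voxels)
```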
Advanced Methods for Robot-Environment Interaction towards an Industrial Robot Aware of Its Volume
Directory of Open Access Journals (Sweden)
Fabrizio Romanelli
2011-01-01
A fundamental aspect of robot-environment interaction in industrial environments is the capability of the control system to model structured and unstructured environment features. Industrial robots have to perform complex tasks at high speed and have to satisfy hard cycle times while keeping operations extremely precise. The capability of the robot to perceive the presence of environmental objects is still missing in real industrial contexts. Although anthropomorphic robot producers have faced problems related to the interaction between a robot and its environment, there is no exhaustive study on the capability of the robot to be aware of its own volume and of the tools mounted on its flange. In this paper, a solution is shown for modelling the robot's environment so that it can perceive and avoid collisions with objects in its surroundings. Furthermore, the model is extended to also take into account the volume of the robot tool, in order to extend the perception capabilities of the entire system. Test results are shown in order to validate the method, proving that the system is able to cope with complex real surroundings.
SU-E-J-35: Using CBCT as the Alternative Method of Assessing ITV Volume
Energy Technology Data Exchange (ETDEWEB)
Liao, Y; Turian, J; Templeton, A; Redler, G; Chu, J [Rush University Medical Center, Chicago, IL (United States)
2015-06-15
Purpose: To study the accuracy of internal target volumes (ITVs) created on cone beam CT (CBCT) by comparing the visible target volume on CBCT to volumes (GTV, ITV, and PTV) outlined on free breathing (FB) CT and 4DCT. Methods: A Quasar cylindrical motion phantom with a 3 cm diameter ball (14.14 cc) embedded within a cork insert was set up to simulate respiratory motion with a period of 4 seconds and amplitudes of 2 cm superoinferiorly and 1 cm anteroposteriorly. FBCT and 4DCT images were acquired. A PTV-4D was created on the 4DCT by applying a uniform margin of 5 mm to the ITV-CT. PTV-FB was created by applying a margin of the motion range plus 5 mm, i.e. a total of 1.5 cm laterally and 2.5 cm superoinferiorly, to the GTV outlined on the FBCT. A dynamic conformal arc was planned to treat the PTV-FB with a 1 mm margin. A CBCT was acquired before the treatment, on which the target was delineated. During the treatment, the position of the target was monitored using the EPID in cine mode. Results: ITV-CBCT and ITV-CT were measured to be 56.6 and 62.7 cc, respectively, with a Dice coefficient (DC) of 0.94 and a disagreement in center of mass (COM) of 0.59 mm. On the other hand, GTV-FB was 11.47 cc, 19% less than the known volume of the ball. PTV-FB and PTV-4D were 149 and 116 cc, with a DC of 0.71. Part of the ITV-CT was not enclosed by the PTV-FB despite the large margin. The cine EPID images confirmed geometric misses of the target. Similar under-coverage was observed in one clinical case and captured by the CBCT, where the implanted fiducials moved outside the PTV-FB. Conclusion: ITV-CBCT is in good agreement with ITV-CT. When 4DCT is not available, CBCT can be an effective alternative for determining and verifying the PTV margin.
Pathak, Ashish; Raessi, Mehdi
2016-02-01
We introduce a piecewise-linear, volume-of-fluid method for reconstructing and advecting three-dimensional interfaces and contact lines formed by three materials. The new method employs a set of geometric constructs that can be used in conjunction with any volume-tracking scheme. In this work, we used the mass-conserving scheme of Youngs to handle two-material cells, perform interface reconstruction in three-material cells, and resolve the contact line. The only information required by the method is the available volume fraction field. Although the proposed method is order dependent and requires a priori information on material ordering, it is suitable for typical contact line applications, where the material representing the contact surface is always known. Following the reconstruction of the contact surface, to compute the interface orientation in a three-material cell, the proposed method minimizes an error function that is based on volume fraction distribution around that cell. As an option, the minimization procedure also allows the user to impose a contact angle. Performance of the proposed method is assessed via both static and advection test cases. The tests show that the new method preserves the accuracy and mass-conserving property of the Youngs method in volume-tracking three materials.
Young, Vershawn Ashanti
2004-01-01
"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…
Coupling of Smoothed Particle Hydrodynamics with Finite Volume method for free-surface flows
Marrone, S.; Di Mascio, A.; Le Touzé, D.
2016-04-01
A new algorithm for the solution of free surface flows with large front deformation and fragmentation is presented. The algorithm is obtained by coupling a classical Finite Volume (FV) approach, that discretizes the Navier-Stokes equations on a block structured Eulerian grid, with an approach based on the Smoothed Particle Hydrodynamics (SPH) method, implemented in a Lagrangian framework. The coupling procedure is formulated in such a way that each solver is applied in the region where its intrinsic characteristics can be exploited in the most efficient and accurate way: the FV solver is used to resolve the bulk flow and the wall regions, whereas the SPH solver is implemented in the free surface region to capture details of the front evolution. The reported results clearly prove that the combined use of the two solvers is convenient from the point of view of both accuracy and computing time.
A solution of two-dimensional magnetohydrodynamic flow using the finite volume method
Directory of Open Access Journals (Sweden)
Naceur Sonia
2014-01-01
This paper presents two-dimensional numerical modeling of the coupled electromagnetic-hydrodynamic phenomena in a conduction MHD pump using the finite volume method. Magnetohydrodynamic problems are thus interdisciplinary and coupled, since the effect of the velocity field appears in the magnetic transport equations, and the interaction between the electric current and the magnetic field appears in the momentum transport equations. The resolution of the Maxwell and Navier-Stokes equations is obtained by introducing the magnetic vector potential A, the vorticity ζ, and the stream function ψ. The flux density, the electromagnetic force, and the velocity are presented graphically. The simulation results also agree with those obtained with Ansys Workbench Fluent software.
Cerroni, D.; Fancellu, L.; Manservisi, S.; Menghini, F.
2016-06-01
In this work we propose to study the behavior of a solid elastic object that interacts with a multiphase flow. Fluid structure interaction and multiphase problems are of great interest in engineering and science because of many potential applications. The study of this interaction by coupling a fluid structure interaction (FSI) solver with a multiphase problem could open a large range of possibilities in the investigation of realistic problems. We use a FSI solver based on a monolithic approach, while the two-phase interface advection and reconstruction is computed in the framework of a Volume of Fluid method which is one of the more popular algorithms for two-phase flow problems. The coupling between the FSI and VOF algorithm is efficiently handled with the use of MEDMEM libraries implemented in the computational platform Salome. The numerical results of a dam break problem over a deformable solid are reported in order to show the robustness and stability of this numerical approach.
Directory of Open Access Journals (Sweden)
Carlos Salinas
2011-05-01
The work was aimed at simulating two-dimensional wood drying stress using the control-volume finite element method (CVFEM). Stress/strain was modeled by moisture content gradients regarding shrinkage and mechanical sorption in a cross-section of wood. CVFEM was implemented with triangular finite elements and linear interpolation of the independent variable, programmed in Fortran 90. The model was validated by contrasting results with similar ones available in the specialised literature. The present model's results came from isothermal (20ºC) drying of quaking aspen (Populus tremuloides): two-dimensional distributions of stress/strain and water content at 40, 80, 130, 190 and 260 hours of drying time, and the evolution of normal stress (-2.5 < σ < 1.2 MPa) from the interior to the exterior of the wood.
Rakhmangulov, Aleksandr; Muravev, Dmitri; Mishkurov, Pavel
2016-11-01
The issue of operative data reception on the location and movement of railcars is significant given the constantly growing requirements for timely and safe transportation. A technical solution for improving the efficiency of data collection on rail rolling stock is the implementation of an identification system. Nowadays, there are several such systems, differing in working principle. In the authors' opinion, the most promising for rail transportation is RFID technology, which proposes equipping the railway tracks with stationary data-reading points (RFID readers) that read the onboard sensors on the railcars. However, regardless of the specific type and manufacturer of these systems, their implementation entails significant financing costs for large industrial rail transport systems that own extensive networks of special railway tracks with many stations and loading areas. To reduce the investment costs of creating an identification system for rolling stock on the special railway tracks of industrial enterprises, the authors developed a method based on the idea of priority installation of RFID readers on railway hauls where rail traffic volumes are uneven in structure and power and whose parameters are difficult or impossible to predict on the basis of the data existing in an information system. To select the optimal locations of RFID readers, a mathematical model of the staged installation of such readers was developed, depending on the non-uniformity of the rail traffic volumes passing through specific railway hauls. As a result of this approach, installation of numerous RFID readers at all station tracks and loading areas of industrial railway stations might not be necessary, which reduces the total cost of rolling stock identification and of implementing the method for optimal management of the transportation process.
Comparison of loess and purple rill erosions measured with volume replacement method
Chen, Xiao-yan; Huang, Yu-han; Zhao, Yu; Mo, Bin; Mi, Hong-xing
2015-11-01
Rills are commonly found on sloping farm fields in both the loess and the purple soil regions of China. A comparative study of rill erosion between the two soils is important for increasing research knowledge and exchanging application experience. Rill erosion processes of loess and purple soils were determined through laboratory experiments with the volume replacement method: water was used to refill the eroded rill segments to measure the eroded volume, and the sediment concentration distribution along the rill was then computed using the soil bulk density, flow rate, and water flow duration. The experimental loess soil materials were from the Loess Plateau and the purple soil from Chongqing City in southwestern China. A laboratory experimental platform was used to construct flumes simulating rills 12.0 m long, 0.1 m wide, and 0.3 m deep. Soil materials were packed into the flumes at a bulk density of 1.2 g cm-3 to a depth of 20 cm to form rills for experiments on five slope gradients (5°, 10°, 15°, 20°, and 25°) and three flow rates (2, 4, and 8 L/min). After each experimental run under the given slope gradient and flow rate, the rill segments from the upper slope between 0-0.5, 0.5-1, 1-2, 2-3, …, 7-8, 8-10, and 10-12 m were lined with plastic sheets and refilled with water to determine sediment concentration after the eroded volumes were measured. Rill erosion differed between the two soils. Because purple soil started to erode at a higher erosive force than loess soil, it appears more resistant to water erosion. The subsequent erosion process in the eroding purple rill was similar to that in the loess rill; however, the total erosion in the eroding loess rill was greater than that in the eroding purple rill. The maximum sediment concentration transported by the eroding purple rills was significantly lower, approximately 55% of that transported by the loess rills under the same flow rate and slope gradient. Hence, less purple sediments can
Institute of Scientific and Technical Information of China (English)
无
2010-01-01
In this paper, the feasibility of measuring the gas volume fraction in a mixed gas-liquid flow by using an acoustic resonant spectroscopy (ARS) method in a transient way is studied theoretically and experimentally. Firstly, the effects of the size and location of a single air bubble in a cylindrical cavity with two open ends on the resonant frequencies are investigated numerically. Then, a transient measurement system for ARS is established, and the trends of the resonant frequencies (RFs) and resonant amplitudes (RAs) in the cylindrical cavity with gas flux inside are investigated experimentally. The measurement results of the proposed transient method are compared with steady-state measurements and with numerical results. The numerical results show that the RFs of the cavity are highly sensitive to the volume of the single air bubble: a tiny bubble volume perturbation may cause a prominent RF shift even when the bubble volume is smaller than 0.1% of the cavity volume. When the small air bubble moves, the RF shift changes and reaches its maximum value when the bubble is located at the middle of the cavity. When the gas volume fraction of the two-phase flow is low, both the RFs and RAs from the measurement results decrease dramatically with increasing gas volume, and this decreasing trend gradually levels off as the gas volume fraction increases further. These experimental results agree qualitatively with the theoretical ones. In addition, the transient ARS method is more suitable than the steady-state one for measuring a gas volume fraction that varies randomly and instantaneously, because the steady-state method cannot capture the random, instantaneous characteristics of the mixed fluid owing to the time consumed by frequency sweeping. This study will play an important role in the quantitative measurement of the gas volume fraction of multiphase flows.
Energy Technology Data Exchange (ETDEWEB)
Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)
2016-05-15
The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and
Directory of Open Access Journals (Sweden)
Carl Wolfgang S Pintzka
2015-07-01
To date, there is no consensus whether sexual dimorphism in the size of neuroanatomical structures exists, or whether such differences are caused by the choice of intracranial volume (ICV) correction method. When investigating volume differences in neuroanatomical structures, corrections for variation in ICV are used. Commonly applied methods are ICV-proportions, ICV-residuals, and ICV as a covariate of no interest (ANCOVA). However, these different methods give contradictory results with regard to the presence of sex differences. Our aims were to investigate the presence of sexual dimorphism in 18 neuroanatomical volumes unrelated to ICV differences by using a large ICV-matched subsample of 304 men and women from the HUNT-MRI general population study, and further to demonstrate, in the entire sample of 966 healthy subjects, which of the ICV-correction methods gave results similar to the ICV-matched subsample. In addition, sex-specific subsamples were created to investigate whether differences were an effect of head size or sex. Most sex differences were related to volume scaling with ICV, independent of sex. Sex differences were detected in a few structures; the amygdala, cerebellar cortex, and 3rd ventricle were larger in men, but the effect sizes were small. The residuals and ANCOVA methods were most effective at removing the effects of ICV. The proportions method suffered from systematic errors due to the lack of proportionality between ICV and neuroanatomical volumes, leading to systematic mis-assignment of structures as either larger or smaller than their actual size. Adding additional sexually dimorphic covariates to the ANCOVA gave results opposite to those obtained in the ICV-matched subsample or with the residuals method. The findings in the current study explain some of the considerable variation in the literature on sexual dimorphisms in neuroanatomical volumes. In conclusion, sex plays a minor role for neuroanatomical volume differences; most differences are
Cell-centered nonlinear finite-volume methods for the heterogeneous anisotropic diffusion problem
Terekhov, Kirill M.; Mallison, Bradley T.; Tchelepi, Hamdi A.
2017-02-01
We present two new cell-centered nonlinear finite-volume methods for the heterogeneous, anisotropic diffusion problem. The schemes split the interfacial flux into harmonic and transversal components. Specifically, linear combinations of the transversal vector and the co-normal are used that lead to significant improvements in terms of the mesh-locking effects. The harmonic component of the flux is represented using a conventional monotone two-point flux approximation; the component along the parameterized direction is treated nonlinearly to satisfy either positivity of the solution as in [29], or the discrete maximum principle as in [9]. In order to make the method purely cell-centered, we derive a homogenization function that allows for seamless interpolation in the presence of heterogeneity following a strategy similar to [46]. The performance of the new schemes is compared with existing multi-point flux approximation methods [3,5]. The robustness of the scheme with respect to the mesh-locking problem is demonstrated using several challenging test cases.
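The harmonic flux component described above is handled with a conventional monotone two-point flux approximation (TPFA). As an illustrative sketch (not the authors' implementation, and ignoring the nonlinear transversal treatment), the harmonic-average transmissibility between two cells sharing a face can be computed as:

```python
def tpfa_flux(k1, k2, d1, d2, area, p1, p2):
    """Two-point flux approximation: harmonic-average transmissibility
    between two cells with permeabilities k1, k2, whose centers lie at
    distances d1, d2 from the shared face of the given area."""
    t1, t2 = k1 / d1, k2 / d2          # half-cell transmissibilities
    t = area * t1 * t2 / (t1 + t2)     # harmonic average across the face
    return t * (p1 - p2)               # flux from cell 1 into cell 2
```

The harmonic average ensures the flux is controlled by the less permeable side, which is what makes the two-point part of the scheme monotone.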
Directory of Open Access Journals (Sweden)
Ye. S. Sherina
2014-01-01
This research has been aimed at studying the peculiarities that arise in numerical simulation of the electrical impedance tomography (EIT) problem. Static EIT image reconstruction is sensitive to measurement noise and approximation error. Special consideration has been given to reducing the approximation error, which originates from drawbacks of the numerical implementation. This paper presents in detail two numerical approaches for solving the EIT forward problem. The finite volume method (FVM) on an unstructured triangular mesh is introduced. For comparison, a forward solver based on the finite element method (FEM), which has gained the most popularity among researchers, was also implemented. The potential distribution calculated with the assumed initial conductivity distribution has been compared to the analytical solution of a test Neumann boundary problem and to the results of simulating the problem with the ANSYS FLUENT commercial software. Two approaches to linearized EIT image reconstruction are discussed. Reconstruction of the conductivity distribution is an ill-posed problem, typically requiring a large amount of computation and resolved by minimization techniques. The objective function to be minimized is constructed from the measured voltage and the calculated boundary voltage on the electrodes. A classical modified Newton-type iterative method and the stochastic differential evolution method are employed. A software package has been developed for the problem under investigation. Numerical tests were conducted on simulated data. The obtained results could be helpful to researchers tackling the hardware and software issues of medical applications of EIT.
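The linearized, Newton-type reconstruction step described above can be sketched as a Tikhonov-regularized update. The sensitivity matrix `J` (Jacobian of boundary voltages with respect to conductivity) and the regularization weight `lam` are illustrative assumptions here, not the paper's actual solver:

```python
import numpy as np

def linearized_eit_step(J, dU, lam):
    """One regularized Gauss-Newton update for linearized EIT:
    solve (J^T J + lam*I) dsigma = J^T dU, where dU is the
    measured-minus-computed boundary-voltage residual.
    The regularization lam > 0 counteracts the ill-posedness."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ dU)
```

In an iterative scheme, the conductivity update `dsigma` is added to the current estimate, the forward problem is re-solved, and the step is repeated until the residual stops decreasing.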
Energy Technology Data Exchange (ETDEWEB)
Talukdar, P.; Steven, M.; Issendorff, F.V.; Trimis, D. [Institute of Fluid Mechanics (LSTM), University of Erlangen-Nuremberg, Cauerstrasse 4, D 91058 Erlangen (Germany)
2005-10-01
The finite volume method of radiation is implemented for complex 3-D problems in order to use it for combined heat transfer problems in connection with CFD codes. The method is applied to a 3-D block-structured grid in a radiatively participating medium. The method is implemented in non-orthogonal curvilinear coordinates so that it can handle irregular structures with a body-fitted structured grid. The multiblocking is performed with overlapping blocks to exchange information between the blocks. Five test problems are considered in this work. In the first problem, the present work is validated against results from the literature. In the second problem, to check the accuracy of the multiblocking, a single block is divided into four blocks and the results are validated against those of the single block simulated alone. Complicated geometries are considered to show the applicability of the present procedure in the last three problems. Both radiative and non-radiative equilibrium situations are considered, along with an absorbing, emitting and scattering medium. (author)
Blast load estimation using Finite Volume Method and linear heat transfer
Directory of Open Access Journals (Sweden)
Lidner Michał
2016-01-01
From the point of view of the safety of people and buildings, one of the main destructive factors is the blast load. Rational estimation of its effects should be preceded by knowledge of the complex wave field distribution in time and space; from this, one can estimate the blast load distribution in time. Under the conditions considered, the values of the blast load are estimated using empirical functions of the overpressure distribution in time, Δp(t). The Δp(t) functions are monotonic and are an approximation of reality; they are often linearized to simplify estimation of the blast response of structural elements. The article presents a method for numerical analysis of the phenomenon of air shock wave propagation. The main aim of this paper is to make the Δp(t) functions more realistic. An explicit in-house solution using the Finite Volume Method was used; this method accounts for changes in energy due to heat transfer, assuming linear heat transfer. For validation, the results of the numerical analysis were compared with literature reports. The values of impulse, pressure, and its duration were studied.
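The empirical overpressure functions Δp(t) mentioned above can be illustrated with the classic Friedlander waveform; this is an assumed example, since the abstract does not name the specific functions used, together with a simple trapezoidal computation of the positive-phase impulse:

```python
import math

def friedlander(t, p_max, t_pos, b=1.0):
    """Classic Friedlander overpressure history, one common empirical
    dp(t): dp(t) = p_max * (1 - t/t+) * exp(-b * t/t+), where p_max is
    the peak overpressure, t+ the positive-phase duration, b a decay
    coefficient."""
    return p_max * (1.0 - t / t_pos) * math.exp(-b * t / t_pos)

def impulse(p_max, t_pos, b=1.0, n=10000):
    """Positive-phase specific impulse: trapezoidal integration of dp(t)
    over [0, t+]."""
    dt = t_pos / n
    ps = [friedlander(i * dt, p_max, t_pos, b) for i in range(n + 1)]
    return dt * (0.5 * ps[0] + sum(ps[1:-1]) + 0.5 * ps[-1])
```

Linearizing Δp(t), as the abstract notes is common practice, amounts to replacing this decaying exponential with a triangle of the same peak and duration, which overestimates the impulse.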
Energy Technology Data Exchange (ETDEWEB)
Majander, E.O.J.; Manninen, M.T. [VTT Energy, Espoo (Finland)
1996-12-31
The flow induced by a pitched blade turbine was simulated using the sliding mesh technique. The detailed geometry of the turbine was modelled in a computational mesh rotating with the turbine and the geometry of the reactor including baffles was modelled in a stationary co-ordinate system. Effects of grid density were investigated. Turbulence was modelled by using the standard k-{epsilon} model. Results were compared to experimental observations. Velocity components were found to be in good agreement with the measured values throughout the tank. Averaged source terms were calculated from the sliding mesh simulations in order to investigate the reliability of the source term approach. The flow field in the tank was then simulated in a simple grid using these source terms. Agreement with the results of the sliding mesh simulations was good. Commercial CFD-code FLUENT was used in all simulations. (author)
Ribeiro Guevara, S
2001-01-01
determine, by this method, the production cross section for the ground state even in those cases where the assumption that one of the states has decayed completely cannot be warranted. In cases where the ground-state half-life is much longer than that of the isomeric state, direct methods can only determine the sum of the cross sections for the production of both states; therefore, if the metastable production cross section is known, the ground-state production cross section can be determined by subtraction. When using the straight-line method, both cross sections can be determined separately. The main limitation of the method is the need to measure the γ emission associated with the ground-state decay at different time intervals. A parametric analysis of the equations associated with the method shows that under certain conditions it is not possible to apply the method, while under other conditions the method delivers optimum results. The method was applied to the study of four ...
Yu, Ting-To
2013-04-01
It is important to acquire the volume of a landslide within a short period of time for hazard mitigation and emergency response, yet the traditional approach takes much longer than desired: owing to weather limits, traffic accessibility, and many legal regulations, it can take months before field work is actually carried out. Remote sensing imagery can be acquired as soon as visibility allows, which may be only a few days after the event, but traditional photogrammetry requires stereo image pairs to produce a post-event DEM for calculating the volume change. Gathering such data usually takes weeks or even months, and LiDAR or ground GPS measurement can take even longer at much higher cost. In this study we use one post-event satellite image and a pre-event DTM, and compare their similarity while altering the DTM with genetic algorithms. Each candidate solution from the GA adds or removes height at each location; the modified DTM is converted into a shaded-relief view and compared with the satellite image, and the search stops once a similarity threshold is met. The entire task takes only a few hours. The computed accuracy is around 70% compared with a high-resolution LiDAR survey of a landslide in southern Taiwan; with extra GCPs the accuracy improves to 85%, still within a few hours of receiving the satellite image. The data for this demonstration case are a 5 m DTM from 2005, a 2 m resolution FormoSat optical image from 2009, and 5 m LiDAR from 2010. The GA and image-similarity code was developed in Matlab on a Windows PC.
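The GA search described above can be sketched as follows. This is a heavily simplified toy (the study used Matlab and a true shadow-relief rendering): here the "shaded relief" is just the x-gradient of the DEM, and the GA evolves a grid of height corrections so the rendered relief matches the post-event image.

```python
import numpy as np

rng = np.random.default_rng(0)

def hillshade(dem):
    """Crude shaded-relief proxy: brightness from the local x-gradient
    (a stand-in for the shadow-relief rendering used in the study)."""
    return np.gradient(dem, axis=1)

def similarity(dem, image):
    """Negative mean squared difference between rendered relief and image."""
    return -np.mean((hillshade(dem) - image) ** 2)

def ga_fit_dem(pre_dem, image, pop=12, gens=40, sigma=1.0):
    """Toy genetic algorithm: evolve height corrections to pre_dem so that
    its rendered relief matches the post-event image. Elitism keeps the
    best third; children are mutated copies of elite members."""
    shape = pre_dem.shape
    population = [np.zeros(shape)] + [rng.normal(0, sigma, shape)
                                      for _ in range(pop - 1)]
    for _ in range(gens):
        scored = sorted(population,
                        key=lambda d: -similarity(pre_dem + d, image))
        elite = scored[: pop // 3]
        population = elite + [
            elite[rng.integers(len(elite))] + rng.normal(0, sigma * 0.1, shape)
            for _ in range(pop - len(elite))
        ]
    best = max(population, key=lambda d: similarity(pre_dem + d, image))
    # estimated post-event DEM; volume change ~ cell area * best.sum()
    return pre_dem + best
```

Because only one post-event image is needed instead of a stereo pair, the whole estimate can be produced within hours of image acquisition, as the abstract reports.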
Energy Technology Data Exchange (ETDEWEB)
Marcondes, Francisco [Federal University of Ceara, Fortaleza (Brazil). Dept. of Metallurgical Engineering and Material Science], e-mail: marcondes@ufc.br; Varavei, Abdoljalil; Sepehrnoori, Kamy [The University of Texas at Austin (United States). Petroleum and Geosystems Engineering Dept.], e-mails: varavei@mail.utexas.edu, kamys@mail.utexas.edu
2010-07-01
An element-based finite-volume approach in conjunction with unstructured grids for naturally fractured compositional reservoir simulation is presented. In this approach, both the discrete fracture and the matrix mass balances are taken into account without any additional models to couple the matrix and discrete fractures. The mesh, for two dimensional domains, can be built of triangles, quadrilaterals, or a mix of these elements. However, due to the available mesh generator to handle both matrix and discrete fractures, only results using triangular elements will be presented. The discrete fractures are located along the edges of each element. To obtain the approximated matrix equation, each element is divided into three sub-elements and then the mass balance equations for each component are integrated along each interface of the sub-elements. The finite-volume conservation equations are assembled from the contribution of all the elements that share a vertex, creating a cell vertex approach. The discrete fracture equations are discretized only along the edges of each element and then summed up with the matrix equations in order to obtain a conservative equation for both matrix and discrete fractures. In order to mimic real field simulations, the capillary pressure is included in both matrix and discrete fracture media. In the implemented model, the saturation field in the matrix and discrete fractures can be different, but the potential of each phase in the matrix and discrete fracture interface needs to be the same. The results for several naturally fractured reservoirs are presented to demonstrate the applicability of the method. (author)
Stochastic averaging of quasi-Hamiltonian systems
Institute of Scientific and Technical Information of China (English)
朱位秋
1996-01-01
A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.
Ashwin, T. R.; McGordon, A.; Widanage, W. D.; Jennings, P. A.
2017-02-01
The Pseudo Two Dimensional (P2D) porous electrode model, despite its superior accuracy, is less preferred for real-time calculations due to its high computational expense and the complexity of obtaining the wide range of electrochemical parameters it requires. This paper presents a finite volume based method for re-parametrising the P2D model for any cell chemistry when precise electrochemical parameters cannot be determined with certainty. The re-parametrisation is achieved by solving a quadratic form of the Butler-Volmer equation and modifying the anode open circuit voltage based on experimental values. The only experimental result needed to re-parametrise the cell thus reduces to a measurement of discharge voltage at any C-rate. The proposed method is validated against 1C discharge data and an actual drive cycle of an NCR18650BD battery with NCA chemistry, driven in an urban environment with frequent acceleration and regenerative braking events. The error limit of the present model is compared with the electrochemical prediction of a LiyCoO2 battery and found to be superior in accuracy to the model presented in the literature.
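A "quadratic form of the Butler-Volmer equation" can be illustrated for the symmetric case (charge-transfer coefficients of 0.5, an assumption here; the paper's exact formulation may differ). Substituting x = exp(F*eta/(2RT)) turns j = j0*(x - 1/x) into the quadratic j0*x^2 - j*x - j0 = 0, whose positive root gives the overpotential in closed form:

```python
import math

F, R = 96485.0, 8.314  # Faraday constant (C/mol), gas constant (J/(mol K))

def overpotential(j, j0, T=298.15):
    """Surface overpotential eta from the symmetric (alpha = 0.5)
    Butler-Volmer equation j = j0*(exp(F*eta/2RT) - exp(-F*eta/2RT)),
    solved via its quadratic form in x = exp(F*eta/(2RT)):
    j0*x^2 - j*x - j0 = 0, taking the positive root."""
    x = (j + math.sqrt(j * j + 4.0 * j0 * j0)) / (2.0 * j0)
    return 2.0 * R * T / F * math.log(x)
```

The quadratic route avoids iterative root-finding for eta at each control volume, which is what makes this attractive for real-time evaluation.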
A volume of fluid method for simulating fluid/fluid interfaces in contact with solid boundaries
Mahady, Kyle; Kondic, Lou
2014-01-01
In this paper, we present a novel approach to model the fluid/solid interaction forces of a general van der Waals type in a direct solver of the Navier-Stokes equations based on the volume of fluid interface tracking method. The key ingredient of the model is the explicit inclusion of the fluid/solid interaction forces into the governing equations. We show that the interaction forces lead to a partial wetting condition and in particular to a natural definition of an equilibrium contact angle. We present two numerical approaches for the discretization of the interaction forces that enter the model. These two approaches are found to be complementary in terms of convergence properties and complexity. To validate the computational framework, we consider the application of these models to simulate two-dimensional drops at equilibrium, as well as drop spreading. We find that the proposed methods can accurately describe the physics of the considered problems. In general, the model allows for the accurate treatment o...
FINITE VOLUME METHODS AND ADAPTIVE REFINEMENT FOR GLOBAL TSUNAMI PROPAGATION AND LOCAL INUNDATION
Directory of Open Access Journals (Sweden)
David L. George
2006-01-01
The shallow water equations are a commonly accepted approximation governing tsunami propagation. Numerically capturing certain features of local tsunami inundation requires solving these equations in their physically relevant conservative form, as integral conservation laws for depth and momentum. This form of the equations presents challenges when trying to numerically model global tsunami propagation, so often the best numerical methods for the local inundation regime are not suitable for the global propagation regime. The different regimes of tsunami flow belong to different spatial scales as well, and require correspondingly different grid resolutions. The long wavelength of deep ocean tsunamis requires a large global scale computing domain, yet near the shore the propagating energy is compressed and focused by bathymetry in unpredictable ways. This can lead to large variations in energy and run-up even over small localized regions. We have developed a finite volume method to deal with the diverse flow regimes of tsunamis. These methods are well suited for the inundation regime: they are robust in the presence of bores and steep gradients, or drying regions, and can capture the inundating shoreline and run-up features. Additionally, these methods are well-balanced, meaning that they can appropriately model global propagation. To deal with the disparate spatial scales, we have used adaptive refinement algorithms originally developed for gas dynamics, where often steep variation is highly localized at a given time, but moves throughout the domain. These algorithms allow evolving Cartesian sub-grids that can move with the propagating waves and highly resolve local inundation of impacted areas in a single global scale computation. Because the dry regions are part of the computing domain, simple rectangular Cartesian grids eliminate the need for complex shoreline-fitted mesh generation.
Cen, Wei; Hoppe, Ralph; Lu, Rongbo; Cai, Zhaoquan; Gu, Ning
2017-08-01
In this paper, the relationship between electromagnetic power absorption and temperature distributions inside highly heterogeneous biological samples was accurately determined using the finite volume method. An in-vitro study of the pineal gland, which is responsible for physiological activities, was simulated for the first time to illustrate the effectiveness of the proposed method.
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
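The non-parametric side of the sampling-uncertainty estimate can be sketched by subsampling a continuous rain-rate record at a candidate revisit interval and examining the spread of the resulting estimates over all possible sampling phases. This is a simplified illustration, not the study's exact procedure:

```python
import numpy as np

def sampling_uncertainty(rain, step):
    """Non-parametric estimate of the sampling error: compare the mean of
    the full (e.g. hourly) rain-rate series with the means of series
    subsampled every `step` hours at each possible phase offset; return
    the relative RMS deviation from the full-record mean."""
    truth = rain.mean()
    sampled = np.array([rain[k::step].mean() for k in range(step)])
    return np.sqrt(np.mean((sampled - truth) ** 2)) / truth
```

Repeating this for each space/time domain and sampling interval yields the kind of scaling relationship the abstract describes, with intermittent rainfall producing much larger relative uncertainties than steady rainfall.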
The Method of Average Fairness Degree for Seats Distribution%席位分配的平均公平度方法
Institute of Scientific and Technical Information of China (English)
丁会; 李波
2013-01-01
The fairness degree of an individual relative to the whole is defined as the ratio of the individual fairness degree to the overall absolute fairness degree; when this ratio tends to 1, the allocation scheme satisfies that individual. Using the concept of variance, the average fairness degree is defined so that the gap between individual and overall fairness degrees is minimized, which is equivalent to every individual fairness degree being close to the others and tending to 1. Each individual's fairness degree is then maximized, and the seat allocation is fairest.
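The variance-based average fairness degree described above can be sketched as follows (the function names and exact normalization are illustrative assumptions):

```python
def fairness_ratios(pops, seats):
    """Individual fairness degree relative to the whole: the ratio of a
    group's per-capita representation to the overall per-capita
    representation. A ratio near 1 means the group is treated fairly."""
    overall = sum(seats) / sum(pops)
    return [(s / p) / overall for p, s in zip(pops, seats)]

def average_fairness(pops, seats):
    """Variance of the individual ratios around 1: smaller is fairer,
    and zero means exact proportionality for every group."""
    r = fairness_ratios(pops, seats)
    return sum((x - 1.0) ** 2 for x in r) / len(r)
```

Comparing candidate seat allocations by `average_fairness` selects the one whose individual fairness degrees cluster most tightly around 1.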
Negative Average Preference Utilitarianism
Directory of Open Access Journals (Sweden)
Roger Chao
2012-03-01
For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).
Variant of a volume-of-fluid method for surface tension-dominant two-phase flows
Indian Academy of Sciences (India)
G Biswas
2013-12-01
The capabilities of the volume-of-fluid method for the calculation of surface tension-dominant two-phase flows are explained. Accurate calculation of the interface remains a problem for the volume-of-fluid method when the density ratio between the phases is high. Simulations of bubble growth are performed in water at near-critical pressure for different degrees of superheat using the combined level-set and volume-of-fluid (CLSVOF) method. The effect of superheat on the frequency of bubble formation was analyzed. A deviation from periodic bubble release is observed for a superheat of 20 K in water, where a vapor-jet-like columnar structure appears. The effect of heat flux on the slender vapor column is also explained.
Hejazialhosseini, Babak; Rossinelli, Diego; Bergdorf, Michael; Koumoutsakos, Petros
2010-11-01
We present a space-time adaptive solver for single- and multi-phase compressible flows that couples average interpolating wavelets with high-order finite volume schemes. The solver introduces the concept of wavelet blocks, handles large jumps in resolution, and employs local time-stepping for efficient time integration. We demonstrate that the inherently sequential wavelet-based adaptivity can be implemented efficiently on multicore computer architectures using task-based parallelism over wavelet blocks. We validate our computational method on a number of benchmark problems and present simulations of shock-bubble interaction at different Mach numbers, demonstrating the accuracy and computational performance of the method.
Institute of Scientific and Technical Information of China (English)
任留成; 吕泗洲
2013-01-01
A new kind of map projection, called multi-level combined projection, was designed in this paper; it is suitable for the geographic grid system of China, a hierarchical grid system partitioned by latitude at intervals of 1°, 10°, etc. The basic idea is to divide the ellipsoid evenly into several latitude layers according to the theory of differential geometry and then to establish a projection model for each layer, yielding a new kind of map projection. This projection can be subdivided according to different grid intervals and thus developed into a dynamic map projection appropriate for multi-resolution grid models. Distortion computations show that the projection is conformal and that its area and length distortions are small; especially in high-latitude areas, the distortions are markedly smaller than those of the Mercator projection.
Andriani, Tri; Irawan, Mohammad Isa
2017-08-01
Ebola Virus Disease (EVD) is a disease caused by a virus of the genus Ebolavirus (EBOV), family Filoviridae. Ebola virus is classified into five types, namely Zaire ebolavirus (ZEBOV), Sudan ebolavirus (SEBOV), Bundibugyo ebolavirus (BEBOV), Tai Forest ebolavirus, also known as Cote d'Ivoire ebolavirus (CIEBOV), and Reston ebolavirus (REBOV). The kinship of Ebola virus types can be identified using phylogenetic trees. In this study, the phylogenetic tree was constructed by the UPGMA method, with multiple alignment performed using the progressive method. The resulting tree shows that Tai Forest ebolavirus is close to Bundibugyo ebolavirus, even though the locations of their epidemic outbreaks lie far apart; the genetic distance between Bundibugyo ebolavirus and Tai Forest ebolavirus is 0.3725. The similarity of Tai Forest ebolavirus to Bundibugyo ebolavirus is thus not influenced by the geographic proximity of the areas where the Ebola epidemics spread.
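The UPGMA procedure used above can be sketched in a few lines: repeatedly merge the two closest clusters and recompute distances as size-weighted averages. This is a minimal illustration; the distance values below are illustrative stand-ins except for the 0.3725 BEBOV-CIEBOV distance quoted in the abstract.

```python
def upgma(dist, labels):
    """UPGMA tree building. dist maps index pairs (i, j) with i < j to a
    distance; returns a nested tuple (left, right, height)."""
    clusters = {i: (labels[i], 1) for i in range(len(labels))}  # id -> (subtree, size)
    d = dict(dist)
    next_id = len(labels)
    while len(clusters) > 1:
        # Find the closest pair of live clusters.
        (a, b), dmin = min(((k, v) for k, v in d.items()
                            if k[0] in clusters and k[1] in clusters),
                           key=lambda kv: kv[1])
        tree_a, size_a = clusters.pop(a)
        tree_b, size_b = clusters.pop(b)
        for c in clusters:  # average linkage: size-weighted mean distance
            d_ac = d[(min(a, c), max(a, c))]
            d_bc = d[(min(b, c), max(b, c))]
            d[(min(next_id, c), max(next_id, c))] = \
                (size_a * d_ac + size_b * d_bc) / (size_a + size_b)
        # Ultrametric height of the new node is half the merge distance.
        clusters[next_id] = ((tree_a, tree_b, dmin / 2), size_a + size_b)
        next_id += 1
    (tree, _), = clusters.values()
    return tree
```

With the quoted 0.3725 distance as the smallest entry, BEBOV and CIEBOV are merged first, reproducing the grouping the abstract describes.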
Energy Technology Data Exchange (ETDEWEB)
Śpiewak, Mateusz, E-mail: mspiewak@ikard.pl [Department of Coronary Artery Disease and Structural Heart Diseases, Institute of Cardiology, Warsaw (Poland); Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Małek, Łukasz A., E-mail: lmalek@ikard.pl [Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Department of Interventional Cardiology and Angiology, Institute of Cardiology, Warsaw (Poland); Petryka, Joanna, E-mail: joannapetryka@hotmail.com [Department of Coronary Artery Disease and Structural Heart Diseases, Institute of Cardiology, Warsaw (Poland); Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Mazurkiewicz, Łukasz, E-mail: lmazurkiewicz@ikard.pl [Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Department of Cardiomyopathy, Institute of Cardiology, Warsaw (Poland); Miłosz, Barbara, E-mail: barbara-milosz@o2.pl [Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Department of Radiology, Institute of Cardiology, Warsaw (Poland); Biernacka, Elżbieta K., E-mail: kbiernacka@ikard.pl [Department of Congenital Heart Diseases, Institute of Cardiology, Warsaw (Poland); Kowalski, Mirosław, E-mail: mkowalski@ikard.pl [Department of Congenital Heart Diseases, Institute of Cardiology, Warsaw (Poland); Hoffman, Piotr, E-mail: phoffman@ikard.pl [Department of Congenital Heart Diseases, Institute of Cardiology, Warsaw (Poland); Demkow, Marcin, E-mail: mdemkow@ikard.pl [Department of Coronary Artery Disease and Structural Heart Diseases, Institute of Cardiology, Warsaw (Poland); Miśko, Jolanta, E-mail: jmisko@wp.pl [Cardiac Magnetic Resonance Unit, Institute of Cardiology, Warsaw (Poland); Department of Radiology, Institute of Cardiology, Warsaw (Poland); Rużyłło, Witold, E-mail: wruzyllo@ikard.pl [Institute of Cardiology, Warsaw (Poland)
2012-10-15
Background: Previous studies have advocated quantifying pulmonary regurgitation (PR) by using PR volume (PRV) instead of the commonly used PR fraction (PRF). However, physicians are not familiar with the use of PRV in clinical practice. The ratio of right ventricle (RV) volume to left ventricle volume (RV/LV) may better reflect the impact of PR on the heart than RV end-diastolic volume (RVEDV) alone. We aimed to compare the impact of PRV and PRF on RV size expressed as either the RV/LV ratio or RVEDV (mL/m{sup 2}). Methods: Consecutive patients with repaired tetralogy of Fallot were included (n = 53). PRV, PRF and ventricular volumes were measured with the use of cardiac magnetic resonance. Results: RVEDV was more closely correlated with PRV than with PRF (r = 0.686, p < 0.0001, and r = 0.430, p = 0.0014, respectively). On the other hand, both PRV and PRF showed a good correlation with the RV/LV ratio (r = 0.691, p < 0.0001, and r = 0.685, p < 0.0001, respectively). Receiver operating characteristic analysis showed that both measures of PR had similar ability to predict severe RV dilatation when the RV/LV ratio-based criterion was used, namely the RV/LV ratio > 2.0 [area under the curve (AUC){sub PRV} = 0.770 vs AUC{sub PRF} = 0.777, p = 0.86]. Conversely, with the use of the RVEDV-based criterion (>170 mL/m{sup 2}), PRV proved to be superior to PRF (AUC{sub PRV} = 0.770 vs AUC{sub PRF} = 0.656, p = 0.0028). Conclusions: PRV and PRF have similar significance as measures of PR when the RV/LV ratio is used instead of RVEDV. The RV/LV ratio is a universal marker of RV dilatation independent of the method of PR quantification applied (PRF vs PRV).
Energy Technology Data Exchange (ETDEWEB)
Tae, Woo Suk; Lee, Kang Uk; Nam, Eui-Cheol; Kim, Keun Woo [Kangwon National University College of Medicine, Neuroscience Research Institute, Kangwon (Korea); Kim, Sam Soo [Kangwon National University College of Medicine, Neuroscience Research Institute, Kangwon (Korea); Kangwon National University Hospital, Department of Radiology, Kangwon-do (Korea)
2008-07-15
To validate the usefulness of the packages available for automated hippocampal volumetry, we measured hippocampal volumes using one manual and two recently developed automated volumetric methods. The study included T1-weighted magnetic resonance imaging (MRI) of 21 patients with chronic major depressive disorder (MDD) and 20 normal controls. Using coronal turbo field echo (TFE) MRI with a slice thickness of 1.3 mm, the hippocampal volumes were measured using three methods: manual volumetry, surface-based parcellation using FreeSurfer, and individual atlas-based volumetry using IBASPM. In addition, the intracranial cavity volume (ICV) was measured manually. The absolute left hippocampal volume of the patients with MDD measured using all three methods was significantly smaller than the left hippocampal volume of the normal controls (manual P=0.029, FreeSurfer P=0.035, IBASPM P=0.018). After controlling for the ICV, except for the right hippocampal volume measured using FreeSurfer, both measured hippocampal volumes of the patients with MDD were significantly smaller than those of the normal controls (right manual P=0.019, IBASPM P=0.012; left manual P=0.003, FreeSurfer P=0.010, IBASPM P=0.002). In the intrarater reliability test, the intraclass correlation coefficients (ICCs) were all excellent (manual right 0.947, left 0.934; FreeSurfer right 1.000, left 1.000; IBASPM right 1.000, left 1.000). In the test of agreement between the volumetric methods, the ICCs were right 0.846 and left 0.848 (manual and FreeSurfer), and right 0.654 and left 0.717 (manual and IBASPM). The automated hippocampal volumetric methods showed good agreement with manual hippocampal volumetry, but the volume measured using FreeSurfer was 35% larger, and the agreement with IBASPM was questionable. Although the automated methods could detect hippocampal atrophy in the patients with MDD, the results indicate that manual hippocampal volumetry is still the gold standard.
Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 2, Sample preparation methods
Energy Technology Data Exchange (ETDEWEB)
1993-08-01
This volume contains the interim change notice for sample preparation methods. Covered are: acid digestion for metals analysis, fusion of Hanford tank waste solids, water leach of sludges/soils/other solids, extraction procedure toxicity (simulate leach in landfill), sample preparation for gamma spectroscopy, acid digestion for radiochemical analysis, leach preparation of solids for free cyanide analysis, aqueous leach of solids for anion analysis, microwave digestion of glasses and slurries for ICP/MS, toxicity characteristic leaching extraction for inorganics, leach/dissolution of activated metal for radiochemical analysis, extraction of single-shell tank (SST) samples for semi-VOC analysis, preparation and cleanup of hydrocarbon- containing samples for VOC and semi-VOC analysis, receiving of waste tank samples in onsite transfer cask, receipt and inspection of SST samples, receipt and extrusion of core samples at 325A shielded facility, cleaning and shipping of waste tank samplers, homogenization of solutions/slurries/sludges, and test sample preparation for bioassay quality control program.
Impact erosion prediction using the finite volume particle method with improved constitutive models
Leguizamón, Sebastián; Jahanbakhsh, Ebrahim; Maertens, Audrey; Vessaz, Christian; Alimirzazadeh, Siamak; Avellan, François
2016-11-01
Erosion damage in hydraulic turbines is a common problem caused by the high-velocity impact of small particles entrained in the fluid. In this investigation, the Finite Volume Particle Method is used to simulate the three-dimensional impact of rigid spherical particles on a metallic surface. Three different constitutive models are compared: the linear strain-hardening (L-H), Cowper-Symonds (C-S) and Johnson-Cook (J-C) models. They are assessed in terms of the predicted erosion rate and its dependence on impact angle and velocity, as compared to experimental data. It is shown that a model accounting for strain rate is necessary, since the response of the material is significantly tougher in the very high strain rate regime caused by impacts. A high sensitivity to the friction coefficient, which models the cutting wear mechanism, has been noticed. The J-C damage model also shows a high sensitivity to the parameter related to triaxiality, whose calibration appears to be scale-dependent rather than exclusively material-determined. After calibration, the J-C model is capable of capturing the material's erosion response to both impact velocity and angle, whereas both C-S and L-H fail.
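The Johnson-Cook flow stress named above has a standard closed form, sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m), where T* is the homologous temperature. A minimal sketch follows; the parameter values are illustrative assumptions, not the paper's calibrated constants.

```python
import math

def johnson_cook_stress(strain, strain_rate, temperature,
                        A, B, n, C, m,
                        rate0=1.0, t_room=293.0, t_melt=1793.0):
    """Johnson-Cook flow stress [Pa]:
    (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m)."""
    t_star = (temperature - t_room) / (t_melt - t_room)  # homologous temperature
    hardening = A + B * strain ** n
    rate_term = 1.0 + C * math.log(strain_rate / rate0)
    thermal = 1.0 - t_star ** m
    return hardening * rate_term * thermal

# Strain-rate sensitivity: the same strain gives a tougher response at
# impact-like rates, which is the behaviour the erosion study relies on.
slow = johnson_cook_stress(0.1, 1.0, 293.0, A=350e6, B=275e6, n=0.36, C=0.022, m=1.0)
fast = johnson_cook_stress(0.1, 1e5, 293.0, A=350e6, B=275e6, n=0.36, C=0.022, m=1.0)
```

At room temperature the thermal term equals 1, and with these illustrative parameters the rate term raises the flow stress by roughly 25% at 1e5 s^-1.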
Institute of Scientific and Technical Information of China (English)
Min LIU; Keqi WU
2008-01-01
Based on the immersed boundary method (IBM) and the finite volume optimized pre-factored compact (FVOPC) scheme, a numerical simulation of noise propagation inside and outside the casing of a cross flow fan is established. The unsteady linearized Euler equations are solved to directly simulate the aero-acoustic field. In order to validate the FVOPC scheme, a one-dimensional linear wave propagation problem is simulated using the FVOPC, DRP and HOC schemes. The FVOPC result is in good agreement with the analytic solution and is better than the results of the DRP and HOC schemes, exhibiting less dispersion and dissipation. Numerical simulation of noise propagation problems is then performed: the noise field of 36 compact rotating noise sources is obtained at a rotating speed of 1000 r/min. The PML absorbing boundary condition is applied at the far-field sound boundary to suppress numerical reflection, and a wall boundary condition is applied to the casing. The results show reflections on the casing wall and sound wave interference in the field. The FVOPC with the IBM is suitable for noise propagation problems in complex geometries, suppressing dispersion and dissipation while keeping high-order precision.
Soltanmoradi, Elmira; Shokri, Babak
2017-05-01
In this article, the electromagnetic wave scattering from plasma columns with an inhomogeneous electron density distribution is studied by the Green's function volume integral equation method. Given the ready production of such plasmas in the laboratory and their practical application in various technological fields, this study examines the effects of plasma parameters such as the electron density, radius, and pressure on the scattering cross-section of a plasma column. The influence of the incident wave frequency on the scattering pattern is also demonstrated. Furthermore, the scattering cross-section of a plasma column with an inhomogeneous collision frequency profile is calculated, and the effect of this inhomogeneity is discussed for the first time in this article. These results are especially useful for determining the appropriate conditions for radar cross-section reduction. It is shown that the radar cross-section of a plasma column is reduced more for a larger collision frequency, a relatively lower plasma frequency, and a smaller radius, and that the effect of the electron density on the scattering cross-section is more pronounced than that of the other plasma parameters. Also, a plasma column with a homogeneous collision frequency serves as a better shield than its inhomogeneous counterpart.
Institute of Scientific and Technical Information of China (English)
Xin Wei; Bing Sun
2011-01-01
Fluid-structure interaction may occur in space launch vehicles, where it can degrade vehicle performance, damage onboard equipment, or even affect astronauts' health. In this paper, an analysis of the dynamic behavior of the liquid oxygen (LOX) feeding pipe system in a large-scale launch vehicle is performed, with the effect of fluid-structure interaction (FSI) taken into consideration. The pipe system is simplified as a planar FSI model with Poisson coupling and junction coupling. Numerical tests on pipes between the tank and the pump are solved by the finite volume method. Results show that restrictions weaken the interaction between axial and lateral vibrations. The reasonable results regarding frequencies and modes indicate that the FSI substantially affects the dynamic analysis, and thus highlight the usefulness of the proposed model. This study provides a reference for pipe tests and facilitates further studies on oscillation suppression.
NUMERICAL RESEARCH ON WATER GUIDE BEARING OF HYDRO-GENERATOR UNIT USING FINITE VOLUME METHOD
Institute of Scientific and Technical Information of China (English)
[No author listed]
2007-01-01
Taking into consideration the geometry of a tilting-pad journal bearing, a new form of the Reynolds equation was derived in this article; the film thickness, the squeeze motion of the journal, and the rotation motion of the pad are explicitly contained in the equation. Based on this equation, together with the equilibrium equation of the pad pivot, the water guide bearing used in the Gezhouba 10 F hydro-generator unit was numerically investigated. The new Reynolds equation for the lubricating film was discretized with the Finite Volume (FV) method and solved with the Successive Over-Relaxation (SOR) iteration method, implemented in C++ code. From the numerical solution, the stability of the film and the influences of the film thickness, the journal squeeze effect, and the pad rotation effect on the film force were discussed. The results indicate that the squeeze effect cannot be neglected, whereas the rotation effect is negligible for both low-speed and high-speed bearings, so the computing time can be greatly reduced.
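The SOR iteration mentioned above can be sketched on a generic Poisson-type grid equation, a stand-in for the discretized film pressure equation rather than the article's full Reynolds equation; the grid size, relaxation factor, and tolerance below are illustrative assumptions.

```python
def sor_solve(p, source, omega=1.7, tol=1e-6, max_iter=10000):
    """Successive Over-Relaxation on a uniform 2-D grid for a Poisson-type
    equation. 'source' holds the right-hand side already multiplied by dx^2;
    boundary values of p are held fixed (Dirichlet)."""
    ny, nx = len(p), len(p[0])
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                # Gauss-Seidel value, then over-relax towards it.
                gauss_seidel = 0.25 * (p[i + 1][j] + p[i - 1][j]
                                       + p[i][j + 1] + p[i][j - 1]
                                       - source[i][j])
                new = p[i][j] + omega * (gauss_seidel - p[i][j])
                max_change = max(max_change, abs(new - p[i][j]))
                p[i][j] = new
        if max_change < tol:
            break
    return p
```

With 1 < omega < 2, SOR converges considerably faster than plain Gauss-Seidel on such diagonally dominant systems, which is why it is a common choice for lubrication-film solvers.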
Two-dimensional thermal analysis of a fuel rod by finite volume method
Energy Technology Data Exchange (ETDEWEB)
Costa, Rhayanne Y.N.; Silva, Mario A.B. da; Lira, Carlos A.B. de O., E-mail: ryncosta@gmail.com, E-mail: mabs500@gmail.com, E-mail: cabol@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamaento de Energia Nuclear
2015-07-01
In a nuclear reactor, the amount of power generation is limited by thermal and physical limitations rather than by nuclear parameters. The operation of a reactor core, even with the best heat removal system, must take into account the fact that the temperatures of fuel and cladding shall not exceed safety limits anywhere in the core. If these considerations are neglected, damage to the fuel element may release large quantities of radioactive material into the coolant, or even lead to core meltdown. Thermal analyses of fuel rods are often accomplished by considering the one-dimensional heat diffusion equation. The aim of this study is to verify the temperature distribution for a two-dimensional heat transfer problem in an advanced reactor. The methodology is based on the Finite Volume Method (FVM), which considers a balance of the property of interest over each control volume. The methodology is validated by comparing numerical and analytical solutions. For the two-dimensional analysis, the results indicate that the temperature profile agrees with the expected physical behavior, providing quantitative information for the development of advanced reactors. (author)
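The finite-volume balance idea can be sketched in its simplest setting: steady one-dimensional conduction in a rod with a uniform heat source, a reduction of the paper's two-dimensional problem. All values are illustrative; the tridiagonal system is solved with the Thomas algorithm.

```python
def fvm_rod_temperature(n, length, k, q, t_left, t_right):
    """Steady 1-D finite-volume conduction with uniform volumetric source q:
    d/dx(k dT/dx) + q = 0 with Dirichlet wall temperatures. Each cell
    balances its face fluxes against the source term."""
    dx = length / n
    a = [0.0] * n  # sub-diagonal
    b = [0.0] * n  # diagonal
    c = [0.0] * n  # super-diagonal
    d = [0.0] * n  # right-hand side
    for i in range(n):
        aw = k / dx if i > 0 else 2 * k / dx      # half-cell at the left wall
        ae = k / dx if i < n - 1 else 2 * k / dx  # half-cell at the right wall
        a[i], c[i], b[i], d[i] = -aw, -ae, aw + ae, q * dx
        if i == 0:
            a[i] = 0.0
            d[i] += aw * t_left
        if i == n - 1:
            c[i] = 0.0
            d[i] += ae * t_right
    for i in range(1, n):  # Thomas algorithm: forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    temp = [0.0] * n
    temp[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        temp[i] = (d[i] - c[i] * temp[i + 1]) / b[i]
    return temp
```

With equal wall temperatures the analytic profile is T(x) = T_wall + q x (L - x) / (2k), peaking at q L^2 / (8k) above the walls, which provides the kind of numerical-versus-analytical check the paper describes.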
Wang, Junwei; Wang, Zhiping; Lu, Yang; Cheng, Bo
2013-03-01
Casting defects are affected by the melting volume change rate of the material, and this rate also has an important effect on the operational safety of high-temperature thermal storage chambers. Existing measuring installations, however, suffer from complex structure, troublesome operation, and low precision. In order to measure the melting volume change rate of a material accurately and conveniently, a self-designed measuring instrument, the self-heating probe instrument, and its measuring method are described. The temperature in the heating cavity is controlled by a PID temperature controller; the melting volume change rate υ and the molten density are calculated from the melt volume measured by the instrument. Positive and negative υ represent expansion and shrinkage of the sample volume after melting, respectively. Taking the eutectic LiF+CaF2 as an example, its melting volume change rate and melting density at 1123 K are -20.6% and 2651 kg·m-3 as measured by this instrument, the latter only 0.71% smaller than the literature value. The density and melting volume change rate of industrially pure aluminum at 973 K and of analytically pure NaCl at 1123 K were also measured with the instrument, and the results agree with reported values. Measurement error sources are analyzed and several improvements are proposed. In theory, the measuring errors of the change rate and molten density obtained with the self-designed instrument are roughly 1/20 to 1/50 of those obtained with the refitted mandril thermal expansion instrument. The self-designed instrument and method have the advantages of simple structure, ease of operation, broad applicability to materials, relatively high accuracy and, most importantly, insensitivity of the measurement accuracy to temperature and sample vapor pressure. They thus solve the problems of complicated structure and procedures, and of large measuring errors for samples with high vapor pressure, that afflict existing installations.
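The two reported quantities follow directly from their definitions; a minimal sketch (the sample values in the usage note are chosen to reproduce the quoted -20.6% and 2651 kg·m-3 figures, not measured data):

```python
def melting_volume_change_rate(v_solid, v_melt):
    """υ = (V_melt - V_solid) / V_solid: positive means expansion on
    melting, negative means shrinkage."""
    return (v_melt - v_solid) / v_solid

def molten_density(mass, v_melt):
    """Density of the melt from the sample mass and measured melt volume."""
    return mass / v_melt
```

For example, the quoted υ = -20.6% corresponds to a melt occupying 0.794 of the original solid volume, and a 2651 kg sample in 1 m^3 of melt gives the quoted 2651 kg·m-3.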
A new interpolation method to model thickness, isopachs, extent, and volume of tephra fall deposits
Yang, Qingyuan; Bursik, Marcus
2016-10-01
Tephra thickness distribution is the primary piece of information used to reconstruct the histories of past explosive volcanic eruptions. We present a method for modeling tephra thickness with less subjectivity than hand-drawn isopachs, currently the most frequently used method. The algorithm separates the thickness of a tephra fall deposit into a trend and local variations and models them separately, using segmented linear regression for the trend and ordinary kriging for the local variations. The distance to the source vent and the downwind distance are used to characterize the trend model. The algorithm is applied to thickness datasets for the Fogo Member A and North Mono Bed 1 tephras. Simulations on subsets of the data and cross-validation are used to test the effectiveness of the algorithm in constructing the trend model and the model of local variations. The results indicate that the modeled isopach maps and volume estimates are consistent with previous studies and point to some inconsistencies in hand-drawn maps and their interpretation; the most striking feature of hand-drawn mapping is a lack of adherence to the data when drawing isopachs locally. Since the model assumes a stable wind field, divergences from the predicted decrease in thickness with distance are readily noticed; hence the wind, although weak in the case of Fogo A, was not unidirectional during deposition. A combination of the isopach algorithm with a new data transformation can be used to estimate the extent of fall deposits. A limitation of the algorithm is that the wind direction must be estimated "by hand" from the thickness data.
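The segmented-linear-regression trend step can be sketched as an exhaustive breakpoint search: fit one line on each side of every candidate split and keep the split with the smallest total squared residual. The data below are synthetic; the real model regresses (log-)thickness on distance to the vent and downwind distance.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def segmented_fit(xs, ys):
    """Two-segment linear trend: try each interior breakpoint index and keep
    the one minimising the total squared residual.
    Returns (sse, breakpoint, (a1, b1), (a2, b2))."""
    best = None
    for k in range(2, len(xs) - 2):  # need >= 2 points per segment
        a1, b1 = fit_line(xs[:k], ys[:k])
        a2, b2 = fit_line(xs[k:], ys[k:])
        sse = sum((y - (a1 + b1 * x)) ** 2 for x, y in zip(xs[:k], ys[:k])) \
            + sum((y - (a2 + b2 * x)) ** 2 for x, y in zip(xs[k:], ys[k:]))
        if best is None or sse < best[0]:
            best = (sse, k, (a1, b1), (a2, b2))
    return best
```

Tephra thinning data typically show exactly this kind of change in decay rate with distance, which is what motivates the segmented rather than single-line trend.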
Two-dimensional finite volume method for dam-break flow simulation
Institute of Scientific and Technical Information of China (English)
M.ALIPARAST
2009-01-01
A numerical model based upon a second-order upwind cell-centered finite volume method on unstructured triangular grids is developed for solving the shallow water equations. The assumption of a small depth downstream instead of a dry bed changes the wave structure and the propagation speed of the front, which can lead to incorrect results; the Harten-Lax-van Leer (HLL) approximate Riemann solver is therefore used for the computation of the inviscid flux functions, which makes it possible to handle both discontinuous solutions and the wet/dry treatment. A multidimensional slope-limiting technique is applied to achieve second-order spatial accuracy and to prevent spurious oscillations. To alleviate the problems associated with numerical instabilities due to small water depths near a wet/dry boundary, the friction source terms are treated in a fully implicit way. A third-order Runge-Kutta method is used for the time integration of the semi-discrete equations. The developed numerical model has been applied to several test cases as well as to real flows: an oblique hydraulic jump and an experimental dam break in a converging-diverging flume. The numerical tests proved the robustness and accuracy of the model. The model has also been applied to the dam-break analysis of the Torogh dam in Iran, and the results have been used in preparing an Emergency Action Plan (EAP).
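The core ingredient, the HLL interface flux for the one-dimensional shallow water equations, can be sketched as follows with the common simple wave-speed estimates. This is only the flux function, not the paper's full two-dimensional second-order scheme, and it assumes positive depth on both sides (the small-depth rather than dry-bed situation).

```python
import math

def hll_flux(h_l, hu_l, h_r, hu_r, g=9.81):
    """HLL approximate Riemann flux for the 1-D shallow water equations
    with conserved state (h, hu); returns the interface flux tuple."""
    u_l, u_r = hu_l / h_l, hu_r / h_r
    c_l, c_r = math.sqrt(g * h_l), math.sqrt(g * h_r)  # gravity wave speeds
    s_l = min(u_l - c_l, u_r - c_r)  # leftmost wave speed estimate
    s_r = max(u_l + c_l, u_r + c_r)  # rightmost wave speed estimate
    f_l = (hu_l, hu_l * u_l + 0.5 * g * h_l * h_l)
    f_r = (hu_r, hu_r * u_r + 0.5 * g * h_r * h_r)
    if s_l >= 0.0:   # all waves move right: upwind on the left state
        return f_l
    if s_r <= 0.0:   # all waves move left: upwind on the right state
        return f_r
    q_l, q_r = (h_l, hu_l), (h_r, hu_r)
    # Star-region flux of the two-wave HLL model.
    return tuple((s_r * fl - s_l * fr + s_l * s_r * (qr - ql)) / (s_r - s_l)
                 for fl, fr, ql, qr in zip(f_l, f_r, q_l, q_r))
```

For a dam-break-like jump (deep water on the left, shallow on the right) the mass flux is positive, i.e. water is transported toward the shallow side, as expected.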
Directory of Open Access Journals (Sweden)
Jungki Lee
2015-01-01
The parallel volume integral equation method (PVIEM) is applied for the analysis of elastic wave scattering problems in an unbounded isotropic solid containing multiple multilayered anisotropic elliptical inclusions. This recently developed numerical method does not require the Green's function for the multilayered anisotropic inclusions; only the Green's function for the unbounded isotropic matrix is needed. The method can also be applied to general two- and three-dimensional elastodynamic problems involving inhomogeneous and/or multilayered anisotropic inclusions of arbitrary shape and number. A detailed analysis of SH wave scattering is presented for multiple triple-layered orthotropic elliptical inclusions, and numerical results are given for the displacement fields at the interfaces for square and hexagonal packing arrays of triple-layered elliptical inclusions over a broad frequency range of practical interest. Standard parallel programming, such as MPI (Message Passing Interface), is needed to speed up the computation in the volume integral equation method (VIEM). The parallel volume integral equation method enables us to investigate the effects of single/multiple scattering, fiber packing type, fiber volume fraction, single/multiple layer(s), multilayer shape and geometry, isotropy/anisotropy, and softness/hardness of the multiple multilayered anisotropic elliptical inclusions on the displacements at the interfaces of the inclusions.
Systems and methods for the detection of low-level harmful substances in a large volume of fluid
Carpenter, Michael V.; Roybal, Lyle G.; Lindquist, Alan; Gallardo, Vincente
2016-03-15
A method and device for the detection of low-level harmful substances in a large volume of fluid, comprising using a concentrator system to produce a retentate and analyzing the retentate for the presence of at least one harmful substance. The concentrator system pumps at least 10 liters of fluid from a sample source through a filter. While pumping, it diverts retentate from the filter into a container and recirculates at least part of the retentate in the container back through the filter. A control system regulates the pump speed, thereby maintaining a fluid pressure of less than 25 psi during pumping, monitors the quantity of retentate within the container, and maintains the retentate at a reduced, target volume.
Institute of Scientific and Technical Information of China (English)
Chuang Nie; Mao-Nian Zhang; Hong-Wei Zhao; Thomas D Olsen; Kyle Jackman; Lian-Na Hu; Wen-Ping Ma
2015-01-01
Background: In vivo quantification of choroidal neovascularization (CNV) based on noninvasive optical coherence tomography (OCT) examination and in vitro choroidal flatmount immunohistochemistry staining of CNV are currently used to evaluate the process and severity of age-related macular degeneration (AMD) in both human and animal studies. This study aimed to investigate the correlation between these two methods in murine CNV models induced by subretinal injection. Methods: CNV was developed in 20 C57BL6/j mice by subretinal injection of adeno-associated viral delivery of a short hairpin RNA targeting sFLT-1 (AAV.shRNA.sFLT-1), as reported previously. After 4 weeks, CNV was imaged by OCT and fluorescence angiography. The scaling factors for each dimension, x, y, and z (μm/pixel), were recorded, and the corneal curvature standard was adjusted from human (7.7) to mouse (1.4). The volume of each OCT image stack was calculated and then normalized by multiplying the number of voxels by the scaling factors for each dimension in Seg3D software (University of Utah Scientific Computing and Imaging Institute, available at http://www.sci.utah.edu/cibc-software/seg3d.html). Eighteen mice were prepared for choroidal flatmounts and stained with CD31; the CNV volumes were calculated using scanning laser confocal microscopy after immunohistochemistry staining. Two mice were stained with Hematoxylin and Eosin for observing the CNV morphology. Results: The CNV volume calculated using OCT was, on average, 2.6 times larger than the volume calculated using laser confocal microscopy. Correlation analysis showed that OCT measurement of CNV correlated significantly with the in vitro method (R2 = 0.448, P = 0.001, n = 18); the correlation coefficient for CNV quantification using OCT and confocal microscopy was 0.693 (n = 18, P = 0.001). Conclusions: There is a fair linear correlation in CNV volumes between the in vivo and in vitro methods in CNV models induced by subretinal injection. The result might provide a useful
Directory of Open Access Journals (Sweden)
Handa H
1999-02-01
The aim of this study was to determine suitable image parameters and an analytical method for phase-contrast magnetic resonance imaging (PC-MRI) as a means of measuring cerebral blood flow volume. This was done by constructing an experimental model and applying the results to a clinical application. The experimental model was constructed from the aorta of a bull and circulating isotonic saline. The image parameters of PC-MRI (repetition time, flip angle, matrix, velocity rate encoding, and the use of square pixels) were evaluated against percent flow volume (the ratio of measured flow volume to actual flow volume). The most suitable image parameters for accurate blood flow measurement were as follows: repetition time, 50 msec; flip angle, 20 degrees; and a 512 x 256 matrix without square pixels. Furthermore, velocity rate encoding should be set between the maximum flow velocity in the vessel and five times this value. Blood flow measurements were corrected using the intensity of a region of interest placed in the background. With these parameters for PC-MRI, percent flow volume was greater than 90%. Using the image parameters and analytical method described above, we evaluated cerebral blood flow volume in 12 patients with occlusive disease of the major cervical arteries, and compared the results with conventional xenon computed tomography; the values found with both methods showed good correlation. Thus, we concluded that PC-MRI is a noninvasive method for evaluating cerebral blood flow in patients with occlusive disease of the major cervical arteries.
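The background correction and flow integration described above amount to a simple calculation: subtract the mean apparent velocity of a static background region from every vessel pixel, then integrate velocity over the vessel cross-section. The function name, units, and values below are illustrative assumptions, not the study's protocol.

```python
def flow_volume_ml_per_s(vessel_velocities_mm_s, pixel_area_mm2,
                         background_velocities_mm_s):
    """Flow rate through a vessel ROI from a PC-MRI velocity map.
    The mean apparent velocity of a static background ROI is subtracted
    as a phase-offset correction; mm/s * mm^2 = mm^3/s, and dividing by
    1000 converts to mL/s."""
    offset = sum(background_velocities_mm_s) / len(background_velocities_mm_s)
    return sum(v - offset for v in vessel_velocities_mm_s) * pixel_area_mm2 / 1000.0
```

For instance, ten vessel pixels of 1 mm^2 each at an apparent 101 mm/s over a 1 mm/s background offset integrate to 1 mL/s.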
Manning, Robert M.; Vyhnalek, Brian E.
2015-01-01
The values of the key atmospheric propagation parameters Ct2, Cq2, and Ctq are highly dependent upon vertical height within the atmosphere, making it necessary to specify profiles of these values along the atmospheric propagation path. The remote sensing method suggested and described in this work makes use of a rapidly integrating microwave profiling radiometer to capture profiles of temperature and humidity through the atmosphere. The integration times of currently available profiling radiometers are approaching the temporal intervals over which one can make meaningful assessments of these key atmospheric parameters. Since these parameters are fundamental to all propagation conditions, they can be used to obtain Cn2 profiles for any frequency, including those for an optical propagation path, in which case the important performance parameters, the prevailing isoplanatic angle and the Greenwood frequency, can be obtained. The integration times are such that Kolmogorov turbulence theory and the Taylor frozen-flow hypothesis must be transcended; appropriate modifications to these classical approaches are derived from first principles, and expressions for the structure functions are obtained. The theory is then applied to an experimental scenario and shows very good results.
Day, Ellen; Betler, James; Parda, David; Reitz, Bodo; Kirichenko, Alexander; Mohammadi, Seyed; Miften, Moyed
2009-10-01
The application of automated segmentation methods for tumor delineation on 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images presents an opportunity to reduce the interobserver variability in radiotherapy (RT) treatment planning. In this work, three segmentation methods were evaluated and compared for rectal and anal cancer patients: (i) percentage of the maximum standardized uptake value (SUV%max), (ii) fixed SUV cutoff of 2.5 (SUV2.5), and (iii) a mathematical technique based on a confidence connected region growing (CCRG) method. A phantom study was performed to determine the SUV%max threshold value, which was found to be 43% (SUV43%max). The CCRG method is an iterative scheme that relies on the statistics of a specified region in the tumor. The scheme is initialized with a subregion of pixels surrounding the maximum-intensity pixel. The mean and standard deviation of this region are measured, and the pixels connected to the region are included or excluded based on the criterion that they exceed a value derived from the mean and standard deviation. The mean and standard deviation of the new region are then measured and the process repeats. FDG-PET-CT imaging studies for 18 patients who received RT were used to evaluate the segmentation methods. A PET-avid (PETavid) region was manually segmented for each patient, and its volume was used to compare the calculated volumes along with the absolute mean difference and range for all methods. For the SUV43%max method, the volumes were always smaller than the PETavid volume, by a mean of 56% and a range of 21%-79%. The volumes from the SUV2.5 method were either smaller or larger than the PETavid volume, by a mean of 37% and a range of 2%-130%. The CCRG approach provided the best results, with a mean difference of 9% and a range of 1%-27%. Results show that the CCRG technique can be used in the segmentation of tumor volumes on FDG-PET images, thus providing treatment planners with a clinically useful delineation tool.
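The iterative statistics-then-grow loop the abstract describes can be sketched in a few lines. This is an illustrative NumPy version under assumed details (3x3 seed box, 4-connectivity, threshold mean - k*std, synthetic image), not the authors' implementation:

```python
import numpy as np
from collections import deque

def ccrg_segment(img, k=2.5, max_iter=20):
    """Confidence-connected region growing, as an illustrative sketch.

    Seed: 3x3 box around the maximum-intensity pixel.  Each pass computes
    the mean and standard deviation of the current region and flood-fills
    4-connected pixels whose intensity is >= mean - k*std; passes repeat
    with updated statistics until the region stops changing.
    """
    r0, c0 = np.unravel_index(np.argmax(img), img.shape)
    region = np.zeros(img.shape, dtype=bool)
    region[max(r0-1, 0):r0+2, max(c0-1, 0):c0+2] = True
    for _ in range(max_iter):
        vals = img[region]
        thresh = vals.mean() - k * vals.std()
        grown = region.copy()
        frontier = deque(zip(*np.nonzero(region)))
        while frontier:
            r, c = frontier.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and not grown[rr, cc] and img[rr, cc] >= thresh):
                    grown[rr, cc] = True
                    frontier.append((rr, cc))
        if (grown == region).all():
            break
        region = grown
    return region

# demo: hot disk ("tumor") on a dim noisy background, hottest voxel at center
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32)**2 + (xx - 32)**2 <= 100
img = 1.0 + rng.normal(0.0, 0.2, (64, 64))
img[disk] += 7.0
img[32, 32] = 10.0
mask = ccrg_segment(img)
```

Note how the threshold is re-derived from the region's own statistics on every pass, which is what distinguishes CCRG from a fixed cutoff such as SUV2.5.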
Energy-preserving finite volume element method for the improved Boussinesq equation
Wang, Quanxiang; Zhang, Zhiyue; Zhang, Xinhua; Zhu, Quanyong
2014-08-01
In this paper, we design an energy-preserving finite volume element scheme for solving the initial-boundary value problems of the improved Boussinesq equation. Theoretical analysis shows that the proposed numerical scheme conserves energy and mass. Numerical experiments confirm the efficiency of the scheme and the theoretical analysis, and demonstrate that it is second-order accurate in space and time.
Niyazi Acer; Ahmet Turan Ilıca; Ahmet Tuncay Turgut; Özlem Özçelik; Birdal Yıldırım; Mehmet Turgut
2012-01-01
The pineal gland is a very important neuroendocrine organ with many physiological functions, such as regulating the circadian rhythm. Radiologically, the pineal gland volume is clinically important because it is usually difficult to distinguish small pineal tumors via magnetic resonance imaging (MRI). Although many studies have estimated the pineal gland volume using different techniques, to the best of our knowledge, there has so far been no stereological work done on this subject. The objective of t...
Directory of Open Access Journals (Sweden)
Dachao Li
2014-04-01
It is difficult to accurately measure the volume of transdermally extracted interstitial fluid (ISF), which is important for improving blood glucose prediction accuracy. Skin resistance, which is a good indicator of skin permeability, can be used to determine the volume of extracted ISF. However, it is a challenge to realize in vivo longitudinal skin resistance measurements of microareas. In this study, a three-electrode sensor was presented for measuring single-point skin resistance in vivo, and a method for determining the volume of transdermally extracted ISF using this sensor was proposed. Skin resistance was measured under static and dynamic conditions. The correlation between the skin resistance and the permeation rate of transdermally extracted ISF was proven. The volume of transdermally extracted ISF was determined using skin resistance. Factors affecting the volume prediction accuracy of transdermally extracted ISF were discussed. This method is expected to improve the accuracy of blood glucose prediction, and is of great significance for the clinical application of minimally invasive blood glucose measurement.
Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.
2016-02-01
The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.
Energy Technology Data Exchange (ETDEWEB)
Barajas-Solano, David A.; Tartakovsky, A. M.
2016-10-13
We present a hybrid scheme for the coupling of macro and microscale continuum models for reactive contaminant transport in fractured and porous media. The transport model considered is the advection-dispersion equation, subject to linear heterogeneous reactive boundary conditions. The Multiscale Finite Volume method (MsFV) is employed to define an approximation to the microscale concentration field defined in terms of macroscopic or "global" degrees of freedom, together with local interpolator and corrector functions capturing microscopic spatial variability. The macroscopic mass balance relations for the MsFV global degrees of freedom are coupled with the macroscopic model, resulting in a global problem for the simultaneous time-stepping of all macroscopic degrees of freedom throughout the domain. In order to perform the hybrid coupling, the micro and macroscale models are applied over overlapping subdomains of the simulation domain, with the overlap denoted as the handshake subdomain Ω^hs, over which continuity of concentration and transport fluxes between models is enforced. Continuity of concentration is enforced by posing a restriction relation between models over Ω^hs. Continuity of fluxes is enforced by prolongating the macroscopic model fluxes across the boundary of Ω^hs to microscopic resolution. The microscopic interpolator and corrector functions are solutions to local microscopic advection-diffusion problems decoupled from the global degrees of freedom and from each other by virtue of the MsFV decoupling ansatz. The error introduced by the decoupling ansatz is reduced iteratively by the preconditioned GMRES algorithm, with the hybrid MsFV operator serving as the preconditioner.
Kashefiolasl, Sepide; Foerch, Christian; Pfeilschifter, Waltraud
2013-02-15
Intracerebral hemorrhage (ICH) accounts for 10% of all strokes and has a significantly higher mortality than cerebral ischemia. For decades, ICH has been neglected by experimental stroke researchers. Recently, however, clinical trials on acute blood pressure lowering or hyperacute supplementation of coagulation factors in ICH have spurred an interest to also design and improve translational animal models of spontaneous and anticoagulant-associated ICH. Hematoma volume is a substantial outcome parameter of most experimental ICH studies. We present graphite furnace atomic absorption spectrophotometric analysis (AAS) as a suitable method to precisely quantify hematoma volumes in rodent models of ICH. Copyright © 2012 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Carreira, M.
1965-07-01
As a working method for the determination of changes in molecular mass that may occur on irradiation (pyrolytic-radiolytic decomposition) of polyphenyl reactor coolants, a cryoscopic technique has been developed which associates the basic simplicity of Beckmann's method with some experimental refinements taken from the equilibrium methods. A total of 18 runs were made on samples of naphthalene, biphenyl, and the commercial mixtures OM-2 (Progil) and Santowax-R (Monsanto), with an average deviation from the theoretical molecular mass of 0.6%. (Author) 7 refs.
Barbu, Ioana; Herzet, Cédric
2016-10-01
We adapt and import into the TomoPIV setting a fast algorithm for solving the volume reconstruction problem. Our approach is based on the reformulation of the volume reconstruction task as a constrained optimization problem and on the "alternating direction method of multipliers" (ADMM). The inherent primal-dual algorithm is summarized in this article to solve the optimization problem related to TomoPIV. In particular, the general formulation of the volume reconstruction problem considered in this paper allows one to: (i) take explicitly into account the level of the noise affecting the data; and (ii) account for both the nonnegativity and the sparsity of the solution. Experiments on a numerical TomoPIV benchmark show that the proposed framework is a serious contender for the state of the art.
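The ADMM machinery is easiest to see on a small dense analogue of the reconstruction problem: minimize 0.5*||Ax - b||^2 + lam*||x||_1 subject to x >= 0, i.e. a noise-tolerant, nonnegative, sparse least-squares. The sketch below is generic ADMM under these assumptions, not the authors' exact primal-dual formulation; sizes and parameters are invented for illustration:

```python
import numpy as np

def admm_nnls_l1(A, b, lam=0.01, rho=1.0, n_iter=400):
    """ADMM for min 0.5*||A x - b||^2 + lam*||x||_1  subject to  x >= 0.

    Splitting x = z: the x-update is a ridge-type linear solve, the z-update
    is the prox of the l1 norm restricted to the nonnegative orthant (a
    one-sided soft threshold), and u is the scaled dual variable.
    """
    n = A.shape[1]
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    Atb = A.T @ b
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.maximum(0.0, x + u - lam / rho)   # nonnegativity + sparsity
        u = u + x - z
    return z

# small synthetic reconstruction: sparse nonnegative ground truth
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, 2.0, 0.5]
x_hat = admm_nnls_l1(A, A @ x_true)
```

The one-time Cholesky factorization is what makes per-iteration cost low; in the real TomoPIV setting the projection matrix is sparse and much larger, so the x-update would use a sparse or matrix-free solver instead.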
Institute of Scientific and Technical Information of China (English)
李日; 王健; 周黎明; 潘红
2014-01-01
Adopting an Eulerian, volume-averaging approach, a three-phase model is developed in which the parent melt is the primary phase and columnar dendrites and equiaxed grains are treated as two distinct secondary phases; the coupled conservation equations for mass, momentum, energy and solute during solidification are obtained together with grain transport equations. Taking an Al-4.7 wt.% Cu binary alloy ingot as an example, the two-dimensional flow field, temperature field, solute field, columnar-to-equiaxed transition and equiaxed grain sedimentation are simulated, and the predicted structure and macrosegregation are compared with experimental results. The simulated temperature field, flow field and structure are basically consistent with theory, but because the model neglects solidification shrinkage and the forced convection induced by pouring, the predicted segregation is lower than measured in the outer layer of the ingot and higher than measured in the interior. Shrinkage and inverse segregation therefore cannot be neglected, and incorporating them is the direction for improving the present model. Finally, on the basis of the simulation results, the advantages and limitations of the volume averaging method for computing ingot solidification are analyzed.
Sifounakis, Adamandios; Lee, Sangseung; You, Donghyun
2016-12-01
A second-order-accurate finite-volume method is developed for the solution of incompressible Navier-Stokes equations on locally refined nested Cartesian grids. Numerical accuracy and stability on locally refined nested Cartesian grids are achieved using a finite-volume discretization of the incompressible Navier-Stokes equations based on higher-order conservation principles - i.e., in addition to mass and momentum conservation, kinetic energy conservation in the inviscid limit is used to guide the selection of the discrete operators and solution algorithms. Hanging nodes at the interface are virtually slanted to improve the pressure-velocity projection, while the other parts of the grid maintain an orthogonal Cartesian grid topology. The present method is straightforward to implement and shows superior conservation of mass, momentum, and kinetic energy compared to the conventional methods employing interpolation at the interface between coarse and fine grids.
Actuator disk model of wind farms based on the rotor average wind speed
DEFF Research Database (Denmark)
Han, Xing Xing; Xu, Chang; Liu, De You;
2016-01-01
Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition...
Jaffrin, Michel Y; Morel, Hélène
2008-12-01
This paper reviews various bioimpedance methods for non-invasive measurement of extracellular, intracellular and total body water (TBW), and compares BIA methods, based on empirical equations of the wrist-ankle resistance or impedance at 50 kHz, height and weight, with BIS methods, which rely on an electrical model of tissues and resistances extrapolated to zero and infinite frequency. In order to compare these methods, impedance measurements were made with a multifrequency Xitron 4200 impedance meter on 57 healthy subjects who had simultaneously undergone a dual X-ray absorptiometry (DXA) examination, in order to estimate their TBW from their fat-free mass. Extracellular (ECW) and TBW volumes were calculated for these subjects using the original BIS method and the modifications of Matthie [Matthie JR. Second generation mixture theory equation for estimating intracellular water using bioimpedance spectroscopy. J Appl Physiol 2005;99:780-1], Jaffrin et al. [Jaffrin MY, Fenech M, Moreno MV, Kieffer R. Total body water measurement by a modification of the bioimpedance spectroscopy method. Med Bio Eng Comput 2006;44:873-82], and Moissl et al. [Moissl UM, Wabel P, Chamney PW, Bosaeus I, Levin NW, et al. Body fluid volume determination via body composition spectroscopy in health and disease. Physiol Meas 2006;27:921-33], and their TBW resistivities were compared and discussed. ECW volumes were calculated by the BIA methods of Sergi et al. [Sergi G, Bussolotto M, Perini P, Calliari I, et al. Accuracy of bioelectrical bioimpedance analysis for the assessment of extracellular space in healthy subjects and in fluid retention states. Ann Nutr Metab 1994;38(3):158-65] and Hannan et al. [Hannan WJ, Cowen SJ, Fearon KC, Plester CE, Falconer JS, Richardson RA. Evaluation of multi-frequency bio-impedance analysis for the assessment of extracellular and total body water in surgical patients. Clin Sci 1994;86:479-85], and TBW volumes by BIA methods of Kushner and Schoeller [Kushner RF
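The BIS idea the review contrasts with single-frequency BIA - model the tissue electrically and use the zero- and infinite-frequency resistances rather than the 50 kHz value alone - can be illustrated with the standard Cole tissue model. All parameter values below (R0, Rinf, fc, the frequency sweep) are invented for illustration and are not taken from the paper:

```python
import numpy as np

def cole_Z(f, R0, Rinf, fc, alpha=1.0):
    """Cole tissue model: Z(f) = Rinf + (R0 - Rinf) / (1 + (j*f/fc)**alpha)."""
    return Rinf + (R0 - Rinf) / (1.0 + (1j * f / fc) ** alpha)

# Synthetic multifrequency sweep: estimate R0 ~ Re (extracellular
# resistance) from the low-frequency end and Rinf from the high-frequency
# end, then recover the intracellular resistance Ri from the
# parallel-circuit relation Rinf = Re*Ri / (Re + Ri).
f = np.array([1e3, 5e3, 50e3, 200e3, 1e6])      # Hz
Z = cole_Z(f, R0=700.0, Rinf=450.0, fc=50e3)    # ohms; fc = characteristic freq
Re_est = Z.real[0]                               # ~R0 at the lowest frequency
Rinf_est = Z.real[-1]                            # ~Rinf at the highest frequency
Ri_est = Re_est * Rinf_est / (Re_est - Rinf_est)
```

In practice BIS devices fit the whole Cole locus (with alpha < 1 for real tissue) rather than reading off the asymptotes, and then map Re and Ri to ECW and ICW volumes through mixture-theory equations such as those cited above.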
How To Use Qualitative Methods in Evaluation. CSE Program Evaluation Kit, Volume 4. Second Edition.
Patton, Michael Quinn
The "CSE Program Evaluation Kit" is a series of nine books intended to assist people conducting program evaluations. This volume, the fourth in the kit, explains the basic assumptions underlying qualitative procedures, suggests evaluation situations where qualitative designs are useful, and provides guidelines for designing qualitative…
Laleg-Kirati, Taous-Meriem; Papelier, Yves; Cottin, François; Van De Louw, Andry
2009-01-01
This paper proposes a novel, simple and minimally invasive method for stroke volume variation assessment using arterial blood pressure measurements. The arterial blood pressure signal is reconstructed using a semi-classical signal analysis method allowing the computation of a parameter, called the first systolic invariant INVS1. We show that INVS1 is linearly related to stroke volume. To validate this approach, a statistical comparison between INVS1 and stroke volume measured with the PiCCO technique was performed during a 15-min recording in 21 mechanically ventilated patients in intensive care. In 94% of the recordings, a strong correlation was estimated by cross-correlation analysis (mean coefficient = 0.9) and linear regression (mean coefficient = 0.89). Once the linear relation had been verified, a Bland-Altman test showed very good agreement between the two approaches and their interchangeability. For the remaining 6%, INVS1 and the PiCCO stroke volume were not correlated at all, and this discrepa...
Directory of Open Access Journals (Sweden)
Nelson H. T. Lemes
2010-01-01
Analytical solutions of a cubic equation with real coefficients are established using the Cardano method. The method is first applied to a simple third-order equation. The calculation of volume from the van der Waals equation of state is then established. These results are used to calculate the volumes below and above the critical temperature. Analytical and numerical values for the compressibility factor are presented as a function of pressure. As a final example, coexistence volumes in the liquid-vapor equilibrium are calculated. The Cardano approach is very simple to apply, requiring only elementary operations, making it an attractive method for teaching elementary thermodynamics.
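A minimal sketch of the approach: Cardano's method for the real roots of a cubic, checked on a simple third-order equation and then applied to the van der Waals polynomial P*V^3 - (P*b + R*T)*V^2 + a*V - a*b = 0. The constants a and b below are textbook van der Waals values for CO2 and the state point is illustrative; neither is taken from the paper:

```python
import numpy as np

def solve_cubic(a, b, c, d):
    """Real roots of a*x^3 + b*x^2 + c*x + d = 0 by Cardano's method."""
    p = (3*a*c - b*b) / (3*a*a)                  # depressed cubic t^3 + p*t + q
    q = (2*b**3 - 9*a*b*c + 27*a*a*d) / (27*a**3)
    shift = -b / (3*a)                           # substitution x = t + shift
    disc = -4*p**3 - 27*q*q
    if disc > 0:                                 # three distinct real roots
        m = 2*np.sqrt(-p/3)                      # (trigonometric form)
        theta = np.arccos(3*q / (2*p) * np.sqrt(-3/p)) / 3
        return sorted(m*np.cos(theta - 2*np.pi*k/3) + shift for k in range(3))
    s = np.sqrt(q*q/4 + p**3/27)                 # one real root (Cardano formula;
    return [np.cbrt(-q/2 + s) + np.cbrt(-q/2 - s) + shift]  # disc == 0 folded in)

# van der Waals volumes: P V^3 - (P b + R T) V^2 + a V - a b = 0
a_vdw, b_vdw = 3.640, 0.04267    # CO2 constants, L^2 bar/mol^2 and L/mol (assumed)
R, T, P = 0.08314, 280.0, 50.0   # L bar/(mol K), K, bar -- illustrative state
vols = solve_cubic(P, -(P*b_vdw + R*T), a_vdw, -a_vdw*b_vdw)
```

Below the critical temperature the cubic can return three real volumes (liquid, unstable, vapor); every physical root must exceed the covolume b.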
Institute of Scientific and Technical Information of China (English)
LI XiangYang; WANG YueFa; YU GengZhi; YANG Chao; MAO ZaiSha
2008-01-01
A volume-amending method is developed both to keep the level set function as an algebraic distance function and to preserve the bubble mass in a level set approach for incompressible two-phase flows with a significantly deformed free interface. After the traditional reinitialization procedure, a volume-amending method is added for correcting the position of the interface according to the mass loss/gain error until the mass error falls within the allowable range designated in advance. The level set approach with this volume-amending method incorporated has been validated by three test cases: the motion of a single axisymmetric bubble or drop in liquid, the motion of a two-dimensional water drop falling through the air into a water pool, and the interactional motion of two buoyancy-driven three-dimensional deformable bubbles. The computational results with this volume-amending method incorporated are in good agreement with the reported experimental data and the mass is well preserved in all cases.
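The core of such a volume-amending step - correct the interface position until the mass error falls within tolerance - can be mimicked in its simplest global form: shift the level set by a constant, found by bisection, so the enclosed area matches a target. This is a schematic 2D sketch of the idea only, not the authors' scheme:

```python
import numpy as np

def amend_volume(phi, target_area, h, n_bisect=60):
    """Shift a level set phi (phi < 0 inside) by a constant so that the
    enclosed area matches target_area.  area(c) is nonincreasing in the
    shift c, so the shift is found by bisection."""
    def area(c):
        return h*h*np.count_nonzero(phi + c < 0.0)
    lo, hi = -phi.max(), -phi.min()    # area(lo) = whole domain, area(hi) = 0
    for _ in range(n_bisect):
        mid = 0.5*(lo + hi)
        if area(mid) > target_area:
            lo = mid
        else:
            hi = mid
    return phi + 0.5*(lo + hi)

# demo: signed distance to a circle of radius 0.5 on [-1,1]^2; "amend" its
# area down to that of a radius-0.4 circle, so the expected shift is +0.1
n = 200
h = 2.0 / n
x = np.linspace(-1 + h/2, 1 - h/2, n)
X, Y = np.meshgrid(x, x)
phi = np.hypot(X, Y) - 0.5
target = np.pi * 0.4**2
phi2 = amend_volume(phi, target, h)
```

A constant shift preserves the signed-distance property exactly, which is why it pairs naturally with reinitialization; production schemes amend the interface locally rather than globally.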
Institute of Scientific and Technical Information of China (English)
Wei Gao; Ru-Xun Liu; Hong Li
2012-01-01
This paper proposes a hybrid vertex-centered finite volume/finite element method for solution of the two-dimensional (2D) incompressible Navier-Stokes equations on unstructured grids. An incremental pressure fractional step method is adopted to handle the velocity-pressure coupling. The velocity and the pressure are collocated at the node of the vertex-centered control volume, which is formed by joining the centroids of cells sharing the common vertex. For the temporal integration of the momentum equations, an implicit second-order scheme is utilized to enhance the computational stability and eliminate the time step limit due to the diffusion term. The momentum equations are discretized by the vertex-centered finite volume method (FVM) and the pressure Poisson equation is solved by the Galerkin finite element method (FEM). Momentum interpolation is used to damp out spurious pressure wiggles. A test case with analytical solutions demonstrates second-order accuracy of the current hybrid scheme in time and space for both velocity and pressure. The classic test cases, the lid-driven cavity flow, the skew cavity flow and the backward-facing step flow, show that numerical results are in good agreement with the published benchmark solutions.
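The pressure-velocity coupling at the heart of any fractional step method is a projection: solve a pressure Poisson problem for the divergence of the predicted velocity, then subtract the pressure gradient so the corrected field is divergence-free. The sketch below illustrates only that projection step, on a periodic grid with FFTs for brevity - not the paper's vertex-centered FVM/Galerkin FEM discretization:

```python
import numpy as np

def project_div_free(u, v, L=2*np.pi):
    """Helmholtz projection of a periodic 2D field onto its divergence-free
    part: solve lap(phi) = div(u, v) in Fourier space, subtract grad(phi)."""
    n = u.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                            # zero mode carries no divergence
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = 1j*kx*uh + 1j*ky*vh                 # Fourier-space divergence
    phih = -div / k2                          # lap(phi) = div -> -k^2 phih = divh
    up = np.fft.ifft2(uh - 1j*kx*phih).real   # subtract grad(phi)
    vp = np.fft.ifft2(vh - 1j*ky*phih).real
    return up, vp

# demo: Taylor-Green vortex (divergence-free) polluted by a pure gradient
n = 64
x = np.arange(n) * 2*np.pi / n
X, Y = np.meshgrid(x, x, indexing='ij')
u = np.cos(X)*np.sin(Y) + np.sin(X)*np.cos(Y)    # TG + d/dx(-cos X cos Y)
v = -np.sin(X)*np.cos(Y) + np.cos(X)*np.sin(Y)   # TG + d/dy(-cos X cos Y)
up, vp = project_div_free(u, v)
```

The projection recovers the Taylor-Green part exactly; in the paper's setting the same role is played by the FEM pressure Poisson solve plus momentum interpolation on the unstructured grid.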
Kou, Jisheng
2017-06-09
In this paper, a new three-field weak formulation for Stokes problems is developed, and from this, a dual-mixed finite element method is proposed on a rectangular mesh. In the proposed mixed methods, the components of the stress tensor are approximated by piecewise constant functions or Q1 functions, while the velocity and pressure are discretized by the lowest-order Raviart-Thomas element and piecewise constant functions, respectively. Using quadrature rules, we demonstrate that this scheme can be reduced to a finite volume method on a staggered grid, which is extensively used in computational fluid mechanics and engineering.
Energy Technology Data Exchange (ETDEWEB)
Johnson, J.
1995-12-31
A preliminary study of a new method for determining respirable mass concentration is described. This method uses a high volume air sampler and subsequent fractionation of the collected mass using a particle sedimentation technique. Side-by-side comparisons of this method with cyclones were made in the field and in the laboratory. There was good agreement among the samplers in the laboratory, but poor agreement in the field. The effect of wind on the samplers' capture efficiencies is the primary hypothesized source of error among the field results. The field test took place at the construction site of a hazardous waste landfill located on the Hanford Reservation.
Valori, Gherardo; Pariat, Etienne; Anfinogentov, Sergey; Chen, Feng; Georgoulis, Manolis K.; Guo, Yang; Liu, Yang; Moraitis, Kostas; Thalmann, Julia K.; Yang, Shangbin
2016-11-01
Magnetic helicity is a conserved quantity of ideal magneto-hydrodynamics characterized by an inverse turbulent cascade. Accordingly, it is often invoked as one of the basic physical quantities driving the generation and structuring of magnetic fields in a variety of astrophysical and laboratory plasmas. We provide here the first systematic comparison of six existing methods for the estimation of the helicity of magnetic fields known in a finite volume. All such methods are reviewed, benchmarked, and compared with each other, and specifically tested for accuracy and sensitivity to errors. To that purpose, we consider four groups of numerical tests, ranging from solutions of the three-dimensional force-free equilibrium to magneto-hydrodynamical numerical simulations. Almost all methods are found to produce the same value of magnetic helicity to within a few percent in all tests. In the more solar-relevant and realistic of the tests employed here, the simulation of an eruptive flux rope, the spread in the computed values obtained by all but one method is only 3%, indicating the reliability and mutual consistency of such methods in appropriate parameter ranges. However, methods show differences in the sensitivity to numerical resolution and to errors in the solenoidal property of the input fields. In addition to finite volume methods, we also briefly discuss a method that estimates helicity from the field lines' twist, and one that exploits the field's value at one boundary and a coronal minimal connectivity instead of a pre-defined three-dimensional magnetic-field solution.
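The quantity all six finite-volume methods target is the volume integral H = ∫ A·B dV with B = ∇×A. For a periodic box it can be computed directly; the sketch below does so with spectral derivatives and checks it on an ABC field, which is its own curl and therefore (used as its own vector potential) has helicity equal to its magnetic energy. This is a toy illustration of the integral itself, not one of the gauge-careful methods benchmarked in the paper:

```python
import numpy as np

def spectral_curl(Fx, Fy, Fz, L=2*np.pi):
    """Curl of a periodic vector field via FFT derivatives."""
    n = Fx.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    fx, fy, fz = np.fft.fftn(Fx), np.fft.fftn(Fy), np.fft.fftn(Fz)
    cx = np.fft.ifftn(1j*ky*fz - 1j*kz*fy).real
    cy = np.fft.ifftn(1j*kz*fx - 1j*kx*fz).real
    cz = np.fft.ifftn(1j*kx*fy - 1j*ky*fx).real
    return cx, cy, cz

def helicity(Ax, Ay, Az, L=2*np.pi):
    """H = integral of A . B over a periodic cube, with B = curl A."""
    Bx, By, Bz = spectral_curl(Ax, Ay, Az, L)
    dV = (L / Ax.shape[0])**3
    return float(np.sum(Ax*Bx + Ay*By + Az*Bz) * dV)

# demo: ABC field with unit coefficients satisfies curl B = B, so taking it
# as its own vector potential gives H = integral |B|^2 = 3 * (2*pi)^3
n = 32
g = np.arange(n) * 2*np.pi / n
X, Y, Z = np.meshgrid(g, g, g, indexing='ij')
Ax = np.sin(Z) + np.cos(Y)
Ay = np.sin(X) + np.cos(Z)
Az = np.sin(Y) + np.cos(X)
H = helicity(Ax, Ay, Az)
```

For non-periodic solar volumes the integral is gauge-dependent unless the field is handled as in the relative-helicity formulations the paper compares, which is precisely where the six methods differ.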