Cosmological Measures without Volume Weighting
Page, Don N
2008-01-01
Many cosmologists (myself included) have advocated volume weighting for the cosmological measure problem, weighting spatial hypersurfaces by their volume. However, this often leads to the Boltzmann brain problem, that almost all observations would be by momentary Boltzmann brains that arise very briefly as quantum fluctuations in the late universe when it has expanded to a huge size, so that our observations (too ordered for Boltzmann brains) would be highly atypical and unlikely. Here it is suggested that volume weighting may be a mistake. Volume averaging is advocated as an alternative. One consequence would be a loss of the argument for eternal inflation.
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...
The average free volume model for liquids
Yu, Yang
2014-01-01
In this work, the molar volume thermal expansion coefficient of 59 room-temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which considers the particles as hard cores with attractive forces, is proposed to explain this correlation. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and the experimental data is obtained.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph, using a simple weighted harmonic average of connectivity that yields a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple, analytically tractable networks. We show how this might be used to examine asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach by devising a ratings scheme that we apply to the data from the Netflix Prize, finding a significant improvement using our method over a baseline.
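The core aggregation in the abstract above, a weighted harmonic average of a node's connection strengths, can be sketched as follows (a minimal illustration only; the paper's full recursive GEN construction is not reproduced, and the function name is ours):

```python
def harmonic_closeness(weights):
    """Weighted harmonic average of a node's connection strengths.

    Compared with an arithmetic mean, the harmonic mean is pulled toward
    the weakest links; because each node normalizes over its own set of
    links, the resulting closeness is naturally asymmetric between nodes.
    """
    return len(weights) / sum(1.0 / w for w in weights)
```

For example, connection strengths [1.0, 3.0] give a closeness of 1.5, below their arithmetic mean of 2.0.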
Average weighted receiving time in recursive weighted Koch networks
Indian Academy of Sciences (India)
DAI MEIFENG; YE DANDAN; LI XINGYI; HOU JIE
2016-06-01
Motivated by empirical observations in airport networks and metabolic networks, we introduce a model of recursive weighted Koch networks created by the recursive division method. As a fundamental dynamical process, random walks have received considerable interest in the scientific community. We therefore study random walks on the recursive weighted Koch networks, in which the walker, at each step, moves uniformly from its current node to any of its neighbours. To study the model more conveniently, we use the recursive division method again to calculate the sum of the mean weighted first-passage times for all nodes to absorption at a trap located at the merging node. It is shown that in a large network, the average weighted receiving time grows sublinearly with the network order.
Distributed Weighted Parameter Averaging for SVM Training on Big Data
Das, Ayan; Bhattacharya, Sourangshu
2015-01-01
Two popular approaches for distributed training of SVMs on big data are parameter averaging and ADMM. Parameter averaging is efficient but suffers from loss of accuracy as the number of partitions increases, while ADMM in the feature space is accurate but suffers from slow convergence. In this paper, we report a hybrid approach called weighted parameter averaging (WPA), which optimizes the regularized hinge loss with respect to weights on the parameters. The problem is shown to be the same as solving...
De Luca, G.; Magnus, J.R.
2011-01-01
This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squa
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
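The two classical estimators contrasted in this abstract, the weighted sum of sample stratum means under heterogeneity and precision weighting under homogeneity, can be written compactly (a sketch under the usual assumptions; names are illustrative, and the paper's adaptive estimators are not reproduced):

```python
def weighted_stratum_average(means, weights, variances=None):
    """Weighted average of stratum-specific sample means.

    With known population weights (heterogeneous strata), return the
    weighted sum of sample stratum means.  If sampling variances are
    supplied (homogeneous strata), ignore the known weights and
    precision-weight by 1/variance instead.
    """
    if variances is None:
        return sum(w * m for w, m in zip(weights, means)) / sum(weights)
    prec = [1.0 / v for v in variances]
    return sum(p * m for p, m in zip(prec, means)) / sum(prec)
```

The adaptive estimators in the paper interpolate between these two extremes via a tuning parameter chosen to minimize a design-based MSE estimate.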
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas for the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is assessed via the Average Run Length (ARL), which is compared for each copula. Copula functions are used to specify the dependence between random variables, with dependence measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
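The EWMA statistic underlying such charts follows the standard recursion z_t = λx_t + (1−λ)z_{t−1} with time-varying control limits; a minimal sketch (the paper's copula and ARL simulation machinery is not reproduced, and the λ and L defaults are illustrative):

```python
import math

def ewma_chart(xs, mu0, sigma, lam=0.2, L=3.0):
    """Return (z_t, LCL_t, UCL_t) for each observation.

    z_t = lam*x_t + (1 - lam)*z_{t-1}, with z_0 = mu0; the exact
    (time-varying) control limits are
    mu0 +/- L*sigma*sqrt(lam/(2-lam) * (1 - (1-lam)**(2*t))).
    """
    z, out = mu0, []
    for t, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        half = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        out.append((z, mu0 - half, mu0 + half))
    return out
```

A point signals when z_t falls outside (LCL_t, UCL_t); the ARL is the expected number of points until the first signal.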
Enhancing the performance of exponentially weighted moving average charts: discussion
Abbas, N.; Riaz, M.; Does, R.J.M.M.
2015-01-01
Abbas et al. (Abbas N, Riaz M, Does RJMM. Enhancing the performance of EWMA charts. Quality and Reliability Engineering International 2011; 27(6):821-833) proposed the use of signaling schemes with exponentially weighted moving average charts (named the 2/2 and modified 2/3 schemes) for their impro...
Small Bandwidth Asymptotics for Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...
ORDERED WEIGHTED AVERAGING AGGREGATION METHOD FOR PORTFOLIO SELECTION
Institute of Scientific and Technical Information of China (English)
LIU Shancun; QIU Wanhua
2004-01-01
Portfolio management is a typical decision making problem under incomplete, sometimes unknown, information. This paper considers the portfolio selection problem under a general setting of uncertain states without probability. The investor's preference is based on his optimum degree about the nature, and his attitude can be described by an Ordered Weighted Averaging Aggregation function. We construct the OWA portfolio selection model, which is a nonlinear programming problem. The problem can be equivalently transformed into a mixed integer linear programming. A numerical example is given, and the solutions imply that the investor's strategies depend not only on his optimum degree but also on his preference weight vector. The general game-theoretical portfolio selection method, the max-min method and the competitive ratio method are all special settings of this model.
Scaling of Average Weighted Receiving Time on Double-Weighted Koch Networks
Dai, Meifeng; Ye, Dandan; Hou, Jie; Li, Xingyi
2015-03-01
In this paper, we introduce a model of double-weighted Koch networks, based on actual road networks, depending on the two weight factors w, r ∈ (0, 1]. The double weights represent the capacity-flowing weight and the cost-traveling weight, respectively. Denote by wFij the capacity-flowing weight connecting the nodes i and j, and by wCij the cost-traveling weight connecting the nodes i and j. Let wFij be related to the weight factor w, and let wCij be related to the weight factor r. This paper assumes that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time between two adjacent nodes is the cost-traveling weight connecting the two nodes. We define the average weighted receiving time (AWRT) on the double-weighted Koch networks. The obtained result shows that in the large network, the AWRT grows as a power-law function of the network order with the exponent θ(w,r) = ½ log2(1 + 3wr). We show that the AWRT exhibits a sublinear or linear dependence on the network order. Thus, the double-weighted Koch networks are more efficient than classic Koch networks in receiving information.
Average weighted trapping time of the node- and edge- weighted fractal networks
Dai, Meifeng; Ye, Dandan; Hou, Jie; Xi, Lifeng; Su, Weiyi
2016-10-01
In this paper, we study the trapping problem in node- and edge-weighted fractal networks with underlying geometries, focusing on the particular case of a perfect trap located at the central node. We derive exact analytic formulas for the average weighted trapping time (AWTT), the average of the node-to-trap mean weighted first-passage time over the whole network, in terms of the network size Ng, the number of copies s, the node-weight factor w and the edge-weight factor r. The obtained result shows that in the large network, the AWTT grows as a power-law function of the network size Ng with the exponent θ(s,r,w) = log_s(srw²) when srw² ≠ 1. When srw² = 1, the AWTT grows with increasing order Ng as log Ng. This means that the efficiency of the trapping process depends on three main parameters: the number of copies s > 1, the node-weight factor 0 < w ≤ 1, and the edge-weight factor 0 < r ≤ 1. The smaller the value of srw², the more efficient the trapping process.
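The exponent formula above is easy to evaluate numerically; the following is a direct transcription of θ(s,r,w) = log_s(srw²), with the logarithmic case srw² = 1 flagged separately (function name is ours):

```python
import math

def awtt_exponent(s, r, w):
    """theta(s, r, w) = log_s(s*r*w**2), the AWTT power-law exponent.

    AWTT ~ Ng**theta when s*r*w**2 != 1; when s*r*w**2 == 1 the growth
    is logarithmic in Ng, signalled here by returning exponent 0.
    """
    x = s * r * w * w
    if math.isclose(x, 1.0):
        return 0.0  # AWTT ~ log(Ng)
    return math.log(x, s)
```

For instance, the unweighted case r = w = 1 recovers θ = 1 (linear growth), while s = 4, r = 0.5, w = 1 gives θ = 0.5 (sublinear).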
47 CFR 65.305 - Calculation of the weighted average cost of capital.
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Calculation of the weighted average cost of... Carriers § 65.305 Calculation of the weighted average cost of capital. (a) The composite weighted average... Commission determines to the contrary in a prescription proceeding, the composite weighted average cost...
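The composite weighted average cost of capital referenced in this regulation is, in general form, the capital-proportion-weighted mean of the component costs; a generic sketch (not the CFR's prescribed components, proportions, or rates):

```python
def wacc(components):
    """Composite weighted average cost of capital.

    components: iterable of (capital_amount, cost_rate) pairs, e.g.
    debt and equity; the result is the mean of the component cost
    rates weighted by each component's share of total capital.
    """
    total = sum(amount for amount, _ in components)
    return sum(amount * rate for amount, rate in components) / total
```

For example, 600 of debt at 10% and 400 of equity at 5% give a composite cost of 8%.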
Modified box dimension and average weighted receiving time on the weighted fractal networks
Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi
2015-12-01
In this paper a family of weighted fractal networks, in which the weights of edges have been assigned different values with a certain scale, are studied. For the weighted fractal networks, the definition of the modified box dimension is introduced, and a rigorous proof of its existence is given. Then, the modified box dimension, depending on the weight factor and the number of copies, is deduced. We assume that the walker, at each step, moves uniformly from its current node to any of its nearest neighbors. The weighted time between two adjacent nodes is the weight connecting the two nodes. The average weighted receiving time (AWRT) is then defined correspondingly. The obtained remarkable result shows that in the large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order with the exponent being the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the value of the modified box dimension, the more efficient the trapping process.
2011-02-01
... Weighted Average Dumping Margin and Assessment Rate in Certain Antidumping Duty Proceedings AGENCY: Import... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate in... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to a qualified source...
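The definition quoted above, a simple average of the daily exchange rates over the period, is a one-liner (the qualified-source lookup is outside this sketch):

```python
def weighted_average_exchange_rate(daily_rates):
    """'Weighted average exchange rate' per the regulation's definition:
    the simple (unweighted) average of the daily exchange rates."""
    return sum(daily_rates) / len(daily_rates)
```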
Replication Regulates Volume Weighting in Quantum Cosmology
Hartle, James
2009-01-01
Probabilities for observations in cosmology are conditioned both on the universe's quantum state and on local data specifying the observational situation. We show that the quantum state defines a measure for prediction through such conditional probabilities that is well behaved for spatially large or infinite universes when the probabilities that our data are replicated are taken into account. In histories where our data are rare, volume weighting connects the top-down probabilities conditioned on both the data and the quantum state to the bottom-up probabilities conditioned on the quantum state alone. We apply these principles to a calculation of the number of inflationary e-folds in a homogeneous, isotropic minisuperspace model with a single scalar field moving in a quadratic potential. We find that volume weighting is justified and that the top-down probabilities favor a large number of e-folds.
2010-12-28
... Weighted Average Dumping Margin and Assessment Rate in Certain Antidumping Duty Proceedings AGENCY: Import... comments regarding the calculation of the weighted average dumping margin and antidumping duty assessment...-specific export prices and average normal values and does not offset any dumping that is found with...
76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight
2011-03-14
... weight rating (GAWR). Instead, buses were loaded to the maximum weight rating and a notation was made in... scientific data. FTA's earlier selection of the 150 pound passenger weight assumption was based on the number... modern scientific data, and provides flexibility and freedom of choice for the affected entities. The bus...
DEFF Research Database (Denmark)
Marani, Debora; Sudireddy, Bhaskar Reddy; Kiebach, Ragnar
characterized regarding their viscosimetric properties in ethanol. Average molecular weights (Mw, Mn, and Mz) have been determined by gel permeation chromatography (GPC), and then used in a numerical method to evaluate the viscosity average molecular weight (Mv) via the Mark-Houwink-Sakurada (MHS) equation...
DEFF Research Database (Denmark)
Larsen, Henrik Legind
2009-01-01
Weighted averaging aggregation plays a key role in utilizations of electronic data and information resources for retrieving, fusing, and extracting information and knowledge, as needed for decision making. Of particular interest for such utilizations are the weighted averaging aggregation operators. Two central issues in the choice of such operators are the kind of importance weighting and the andness (or conjunction degree) of the operator. We present and discuss two main kinds of importance weighting, namely multiplicative and implicative, and propose schemes for their application with two classes of averaging operators, namely the Power Means and the Ordered Weighted Averaging operators, each in a De Morgan dual form for increased efficacy. For each class, a function is proposed for rather accurate direct control of the andness. Operators with the same kind of importance weighting appear...
Tjionas, George A; Epstein, Jonathan I; Williamson, Sean R; Diaz, Mireya; Menon, Mani; Peabody, James O; Gupta, Nilesh S; Parekh, Dipen J; Cote, Richard J; Jorda, Merce; Kryvenko, Oleksandr N
2015-12-01
The International Society of Urological Pathology in 2010 recommended weighing prostates without seminal vesicles (SV) so that only prostate weight enters the prostate-specific antigen (PSA) density (PSAD) calculation, because SV do not produce PSA. Large retrospective cohorts exist in which the combined weight was recorded and needs to be adjusted for retrospective analysis. The weights of prostates and SV were recorded separately in 172 consecutive prostatectomies. The average weight of SV and the proportion of prostate weight in the combined weight were calculated. The adjustment factors were then validated on databases of 2 other institutions. The average weight of bilateral SV was 6.4 g (range = 1-17.3 g). The prostate constituted on average 87% (range = 66% to 98%) of the total specimen weight. There was no correlation of patient age or prostate weight with SV weight. The best performing correction method was to subtract 6.4 g from the total radical prostatectomy weight and to use this weight for PSAD calculation. The average weights of retrospective specimens weighed with SV were not significantly different between the 3 institutions. Using our data allowed calibration of the weights and PSAD between the cohorts weighed with and without SV. Thus, prostate weight in specimens including SV weight can be adjusted by subtracting 6.4 g, resulting in a significant change of PSAD. Some institution-specific variations may exist, which could further increase the precision of retrospective analysis involving prostate weight and PSAD. However, unless institution-specific adjustment parameters are developed, we recommend that this correction factor be used for retrospective cohorts or in institutions where combined weight is still recorded.
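The correction recommended above, subtracting the 6.4 g average seminal-vesicle weight before computing PSA density, can be expressed as follows (units and function name are illustrative; the 6.4 g figure is from this study and, as the authors note, may vary by institution):

```python
SV_WEIGHT_G = 6.4  # average bilateral seminal-vesicle weight reported in the study

def adjusted_psad(psa_ng_per_ml, combined_weight_g):
    """PSA density using the estimated prostate-only weight, obtained by
    subtracting the average seminal-vesicle weight from the combined
    radical-prostatectomy specimen weight (in grams)."""
    return psa_ng_per_ml / (combined_weight_g - SV_WEIGHT_G)
```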
77 FR 74452 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight
2012-12-14
... passenger weight estimations then underway by the Federal Aviation Administration and the United States... Moving Ahead for Progress in the 21st Century Act (MAP-21) (Pub. L. 112-141). Section 20014 of...
Directory of Open Access Journals (Sweden)
R. PURUSHOTHAMAN NAIR
2011-02-01
In this paper a set of normalized weighted averages, which may be called bi-average, tri-average, quadric-average, or in general the kth poly-average, k = 2, 3, 4, …, is introduced. The weights can be easily assigned using the integer k. The linear combination of the weights with the samples is biased toward the latest samples of a given discrete data set when the samples are considered chronologically or sequentially. Hence these averages can generate moving, realistic trends of data without being a moving average. Computation of these averages does not explicitly depend on the size of the data set and can be done progressively. The advantage is that it is not necessary to store the data samples or their number to compute these averages. An inferring mechanism is derived by which one can easily decide, based on the computed average, whether the current sample is continuous with previous samples. Illustrative examples are presented to establish the effectiveness of this inferring mechanism in testing continuous trends and filtering discontinuous samples of flight telemetry data of a typical launch vehicle and of sample data sets of standard continuous signals. Mathematical properties of these averages are discussed.
Benkler, Erik; Sterr, Uwe
2015-01-01
The power spectral density in the Fourier frequency domain and the different variants of the Allan deviation (ADEV) as functions of the averaging time are well-established tools for analysing the fluctuation properties and frequency instability of an oscillatory signal. It is often supposed that the statistical uncertainty of a measured average frequency is given by the ADEV at a well-considered averaging time. However, this approach requires further mathematical justification and refinement, which has already been done regarding the original ADEV for certain noise types. Here we provide the necessary background for using the modified Allan deviation (modADEV) and other two-sample deviations to determine the uncertainty of weighted frequency averages. The type of two-sample deviation used to determine the uncertainty depends on the method used to determine the average. We find that the modADEV, which is connected with $\Lambda$-weighted averaging, and the two-sample deviation associated with a linear phase regr...
Volume calculation of the spur gear billet for cold precision forging with average circle method
Institute of Scientific and Technical Information of China (English)
Wangjun Cheng; Chengzhong Chi; Yongzhen Wang; Peng Lin; Wei Liang; Chen Li
2014-01-01
Forged spur gears are widely used in the driving systems of mining machinery and equipment due to their higher strength and dimensional accuracy. For the purpose of precisely calculating the volume of a cylindrical spur gear billet in cold precision forging, a new theoretical method named the average circle method was put forward. With this method, a series of gear billet volumes were calculated. Compared with the accurate three-dimensional modeling method, the maximum relative error of the average circle method was less than 1.5%, in good agreement with the experimental results. The relative errors between calculated and experimental gear billet volumes are larger with the reference circle method than with the average circle method. This shows that the average circle method possesses a higher calculation accuracy than the reference circle method (the traditional method) and is worth popularizing widely in the calculation of spur gear billet volumes.
Simulating thermal boundary conditions of spin-lattice models with weighted averages
Wang, Wenlong
2016-07-01
Thermal boundary conditions have played an increasingly important role in revealing the nature of short-range spin glasses and are likely to be relevant also for other disordered systems. A diffusion method that initializes each replica with a random boundary condition at infinite temperature, using population annealing, has been used in recent large-scale simulations. However, the efficiency of this method can be greatly suppressed by temperature chaos. For example, most samples have some boundary conditions that are completely eliminated from the population in the process of annealing at low temperatures. In this work, I study a weighted-average method that solves this problem by simulating each boundary condition separately and collecting data using weighted averages. The efficiency of the two methods is studied using both population annealing and parallel tempering, showing that the weighted-average method is more efficient and accurate.
Jafarizadeh, Saber
2010-01-01
Solving the fastest distributed consensus averaging problem over networks with different topologies has been an active area of research for a number of years. The main purpose of distributed consensus averaging is to compute the average of the initial values via a distributed algorithm in which the nodes communicate only with their neighbors. In previous works, full knowledge of the network's topology was required to find the optimal weights and the convergence rate of the network; here, for the first time, the optimal weights are determined analytically for the edges of certain types of branches, namely the path branch, lollipop branch, semi-complete branch and ladder branch, independently of the rest of the network. The solution procedure consists of stratification of the associated connectivity graph of the branch and semidefinite programming (SDP), in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by slackness c...
Children’s Attitudes and Stereotype Content Toward Thin, Average-Weight, and Overweight Peers
Directory of Open Access Journals (Sweden)
Federica Durante
2014-05-01
Six- to 11-year-old children’s attitudes toward thin, average-weight, and overweight targets were investigated, together with the associated warmth and competence stereotypes. The results showed positive attitudes toward average-weight targets and negative attitudes toward overweight peers; both attitudes decreased as a function of children’s age. Thin targets were perceived more positively than overweight ones but less positively than average-weight targets. Notably, social desirability concerns predicted the decline of anti-fat bias in older children. Finally, the results showed ambivalent stereotypes toward thin and overweight targets, particularly among older children, mirroring the stereotypes observed in adults. This result suggests that by the end of elementary school, children manage the two fundamental dimensions of social judgment similarly to adults.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
Analysis of litter size and average litter weight in pigs using a recursive model
DEFF Research Database (Denmark)
Varona, Luis; Sorensen, Daniel; Thompson, Robin
2007-01-01
An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size...
Effect of Temporal Residual Correlation on Estimation of Model Averaging Weights
Ye, M.; Lu, D.; Curtis, G. P.; Meyer, P. D.; Yabusaki, S.
2010-12-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are always calculated using model selection criteria such as AIC, AICc, BIC, and KIC. However, this method sometimes leads to an unrealistic situation in which one model receives overwhelmingly high averaging weight (even 100%), which cannot be justified by available data and knowledge. It is found in this study that the unrealistic situation is due partly, if not solely, to ignorance of residual correlation when estimating the negative log-likelihood function common to all the model selection criteria. In the context of maximum-likelihood or least-squares inverse modeling, the residual correlation is accounted for in the full covariance matrix; when the full covariance matrix is replaced by its diagonal counterpart, data independence is assumed and the correlation is ignored. As a result, treating the correlated residuals as independent distorts the distance between observations and simulations of alternative models, which may in turn lead to incorrect estimation of model selection criteria and model averaging weights. This is illustrated for a set of surface complexation models developed to simulate uranium transport based on a series of column experiments. The residuals are correlated in time, and the time correlation is addressed using a second-order autoregressive model. The modeling results reveal the importance of considering residual correlation in the estimation of model averaging weights.
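The weight calculation these two records describe can be sketched as follows. This is a hedged illustration, not the authors' code: it uses the standard conversion of information-criterion values into averaging weights, w_k ∝ exp(-ΔIC_k/2), and a first-order (rather than second-order) autoregressive covariance for the correlated residuals; all function names are my own.

```python
import numpy as np

def ar1_covariance(n, sigma2, rho):
    """Covariance matrix of an AR(1) error process: sigma2 * rho**|i - j|.
    Off-diagonal terms carry the temporal correlation of the total errors."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def neg_log_likelihood(residuals, cov):
    """Gaussian negative log-likelihood of calibration residuals under `cov`.
    Using the full covariance (not just its diagonal) accounts for correlation."""
    n = len(residuals)
    sign, logdet = np.linalg.slogdet(cov)
    quad = residuals @ np.linalg.solve(cov, residuals)
    return 0.5 * (n * np.log(2 * np.pi) + logdet + quad)

def averaging_weights(ic_values):
    """Model averaging weights from information-criterion values (AIC, BIC, ...):
    w_k proportional to exp(-0.5 * (IC_k - IC_min))."""
    d = np.asarray(ic_values, dtype=float) - np.min(ic_values)
    w = np.exp(-0.5 * d)
    return w / w.sum()
```

With correlated residuals, the off-diagonal terms of the covariance shrink the effective distance between models, which is how the "best model gets ~100% weight" pathology is softened.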
Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan
2015-10-01
Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.
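A minimal sketch of the interframe-averaging idea in this record (an illustration under my own assumptions about array shapes and naming, not the authors' implementation): per-frame channel estimates are combined with normalized weights so that estimation noise averages down across adjacent frames.

```python
import numpy as np

def wifa(channel_estimates, weights):
    """Weighted interframe averaging: combine per-frame channel estimates
    (one complex vector of subcarrier responses per OFDM frame) into a
    single, less noisy estimate. `weights` are normalized internally."""
    H = np.asarray(channel_estimates)        # shape: (frames, subcarriers)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, H, axes=1)        # weighted average over frames
```

Equal weights reduce the estimation-noise variance by roughly the number of frames averaged, at the cost of tracking slowly varying channels less quickly.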
Proton transport properties of poly(aspartic acid) with different average molecular weights
Energy Technology Data Exchange (ETDEWEB)
Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)
2011-04-15
Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference of the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities under a relative humidity of 70% and 298 K were 1.7 . 10{sup -3} S cm{sup -1} (P-Asp140) and 4.6 . 10{sup -4} S cm{sup -1} (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results were classified into two categories. One exhibited two endothermic peaks between t = (270 and 300) {sup o}C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.
12 CFR 702.105 - Weighted-average life of investments.
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Weighted-average life of investments. 702.105 Section 702.105 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS...) and 702.107(c) is defined pursuant to § 702.2(m): (a) Registered investment companies and collective...
Computational complexity of some maximum average weight problems with precedence constraints
Faigle, Ulrich; Kern, Walter
1994-01-01
Maximum average weight ideal problems in ordered sets arise from modeling variants of the investment problem and, in particular, learning problems in the context of concepts with tree-structured attributes in artificial intelligence. Similarly, trying to construct tests with high reliability leads to...
KRIJNEN, WP
1994-01-01
De Vries (1993) discusses Pearson's product-moment correlation, Spearman's rank correlation, and Kendall's rank-correlation coefficient for assessing the association between the rows of two proximity matrices. For each of these he introduces a weighted average variant and a rowwise variant. In this
Directory of Open Access Journals (Sweden)
Syromyatnikov D. A.
2013-12-01
The article analyzes the influence of the dynamics of the weighted average customs tariff on the competitiveness of economic subjects, and characterizes the current state of competitiveness of Russian companies after joining the World Trade Organization.
EURO-USD PREDICTION APPLICATION USING WEIGHTED MOVING AVERAGE ON MOBILE DEVICE
Directory of Open Access Journals (Sweden)
Afan Galih Salman
2014-01-01
Investments in foreign exchange (forex) promise lucrative profits, attracting many researchers and traders to create trading systems or indicators. Basically all such indicators are reliable and tested, and able to bring some profit to traders; ironically, many traders still fail to profit and go bankrupt, because they lack sound money management and a good trading mentality. This study therefore focuses on technical analysis using the weighted moving average, implemented on a mobile device so that it can give predictions of the price of the EURO-USD currency pair. The results show that the weighted moving average was not very accurate in determining the price of a currency during sideways movement, but quite accurate during strong or large-scale price trends. The weighted moving average becomes easy to apply when using two or more weighted moving averages, and the mobile medium facilitates analysis of the movement of the EURO-USD currency pair.
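The indicator this record is built on is the classic weighted moving average, which can be sketched in a few lines (a generic illustration, not the application's code; the weight ordering is my own convention):

```python
def weighted_moving_average(prices, weights):
    """Weighted moving average over the last len(weights) prices.
    `weights` are ordered oldest-to-newest, so the most recent price
    typically gets the largest weight."""
    window = prices[-len(weights):]
    return sum(p * w for p, w in zip(window, weights)) / sum(weights)
```

With linearly increasing weights (e.g. 1, 2, 3), recent prices dominate, which is why the indicator follows strong trends well but whipsaws in sideways markets.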
Robust Data-Driven Inference for Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
This paper presents a new data-driven bandwidth selector compatible with the small bandwidth asymptotics developed in Cattaneo, Crump, and Jansson (2009) for density- weighted average derivatives. The new bandwidth selector is of the plug-in variety, and is obtained based on a mean squared error...
[Weighted-averaging multi-planar reconstruction method for multi-detector row computed tomography].
Aizawa, Mitsuhiro; Nishikawa, Keiichi; Sasaki, Keita; Kobayashi, Norio; Yama, Mitsuru; Sano, Tsukasa; Murakami, Shin-ichi
2012-01-01
Development of multi-detector row computed tomography (MDCT) has enabled three-dimensional (3D) scanning with minute voxels. Minute voxels improve the spatial resolution of CT images; at the same time, however, they increase image noise. Multi-planar reconstruction (MPR) is one of the effective 3D image processing techniques. The conventional MPR technique can adjust the slice thickness of MPR images: a thick slice decreases the image noise but deteriorates spatial resolution. To deal with this trade-off, we have developed the weighted-averaging multi-planar reconstruction (W-MPR) technique to control the balance between spatial resolution and noise. The weighted average is determined by a Gaussian-type weighting function. In this study, we compared the performance of W-MPR with that of conventional simple-addition-averaging MPR. As a result, we confirmed that W-MPR can decrease image noise without significant deterioration of spatial resolution. W-MPR can freely adjust the weight for each slice by changing the shape of the weighting function, allowing a proper balance of spatial resolution and noise and producing MPR images suitable for observation of targeted anatomical structures.
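The Gaussian-weighted slice averaging described above can be sketched as follows (a simplified illustration of the idea, not the authors' W-MPR implementation; the function name and the centering convention are my own):

```python
import numpy as np

def weighted_mpr(slices, sigma):
    """Combine adjacent slices with Gaussian weights centered on the middle
    slice. Larger sigma averages more aggressively (less noise, more
    through-plane blur); small sigma approaches a single thin slice."""
    s = np.asarray(slices, dtype=float)       # shape: (n_slices, rows, cols)
    n = s.shape[0]
    z = np.arange(n) - (n - 1) / 2.0          # slice offsets from the center
    w = np.exp(-0.5 * (z / sigma) ** 2)
    w /= w.sum()
    return np.tensordot(w, s, axes=1)         # weighted average over slices
```

Simple-addition averaging is the special case of equal weights; the Gaussian profile keeps the central slice dominant, which is what preserves in-plane resolution.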
2010-04-01
... weighted-average dumping margins disregarded. 351.106 Section 351.106 Customs Duties INTERNATIONAL TRADE... minimis net countervailable subsidies and weighted-average dumping margins disregarded. (a) Introduction... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
A RED modified weighted moving average for soft real-time application
Directory of Open Access Journals (Sweden)
Domanśka Joanna
2014-09-01
The popularity of TCP/IP has resulted in increased use of best-effort networks for real-time communication. Much effort has been spent to ensure quality of service for soft real-time traffic over IP networks, and the Internet Engineering Task Force has proposed architecture components such as Active Queue Management (AQM). The paper investigates the influence of the weighted moving average on packet waiting time reduction for an AQM mechanism, the RED algorithm. The proposed method for computing the average queue length is based on a difference (recursive) equation. Depending on the particular optimality criterion, proper parameters of the modified weighted moving average function can be chosen. This change reduces the number of violations of timing constraints and allows better use of this mechanism for soft real-time transmissions. The optimization problem is solved through simulations performed in OMNeT++ and later verified experimentally on a Linux implementation.
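The recursive average that RED maintains over the instantaneous queue length is the standard exponentially weighted moving average, sketched below (a generic illustration of the baseline equation the paper modifies, not its modified variant):

```python
def red_avg_queue(samples, w):
    """RED's recursive weighted moving average of instantaneous queue length:
    avg <- (1 - w) * avg + w * q.
    Small w smooths out bursts; larger w tracks the queue faster, which is
    what matters for reducing waiting time of soft real-time packets."""
    avg = 0.0
    history = []
    for q in samples:
        avg = (1.0 - w) * avg + w * q
        history.append(avg)
    return history
```

Tuning w is exactly the optimality trade-off the abstract refers to: responsiveness to congestion versus stability of the drop-probability signal.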
A note on stereological estimation of the volume-weighted second moment of particle volume
DEFF Research Database (Denmark)
Jensen, E B; Sørensen, Flemming Brandt
1991-01-01
It is shown that for a variety of biological particle shapes, the volume-weighted second moment of particle volume can be estimated stereologically using only the areas of particle transects, which can be estimated manually by point-counting.
Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes
2014-08-01
The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing the volume drunk (100 ml of water) by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significantly moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.
A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport
Directory of Open Access Journals (Sweden)
Gilberto Espinosa-Paredes
2012-01-01
In this paper a detailed derivation of the general transport equations for two-phase systems using a method based on nonlocal volume averaging is presented. The local volume averaging equations are commonly applied to nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the volume averaging method, they fail at transitions between flow patterns and at boundaries between two-phase flow and solid, which produce rapid changes in the physical properties and void fraction. The nonlocal volume averaging equations derived in this work contain new terms related to nonlocal transport effects due to accumulation, convection, diffusion and transport properties for two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or at the boundary of the transition region of two-phase flows where the local volume averaging equations fail.
Focused information criterion and model averaging based on weighted composite quantile regression
Xu, Ganggang
2013-08-13
We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics..
Vauchel, Peggy; Arhaliass, Abdellah; Legrand, Jack; Kaas, Raymond; Baron, Regis
2008-01-01
Alginates are natural polysaccharides that are extracted from brown seaweeds and widely used for their rheological properties. The central step in the extraction protocol used in the alginate industry is the alkaline extraction, which requires several hours. In this study, a significant decrease in alginate dynamic viscosity was observed after 2 h of alkaline treatment. Intrinsic viscosity and average molecular weight of alginates from alkaline extractions 1-4 h in duration were determined, i...
Real diffusion-weighted MRI enabling true signal averaging and increased diffusion contrast.
Eichner, Cornelius; Cauley, Stephen F; Cohen-Adad, Julien; Möller, Harald E; Turner, Robert; Setsompop, Kawin; Wald, Lawrence L
2015-11-15
This project aims to characterize the impact of underlying noise distributions on diffusion-weighted imaging. The noise floor is a well-known problem for traditional magnitude-based diffusion-weighted MRI (dMRI) data, leading to biased diffusion model fits and inaccurate signal averaging. Here, we introduce a total-variation-based algorithm to eliminate shot-to-shot phase variations of complex-valued diffusion data with the intention to extract real-valued dMRI datasets. The obtained real-valued diffusion data are no longer superimposed by a noise floor but instead by a zero-mean Gaussian noise distribution, yielding dMRI data without signal bias. We acquired high-resolution dMRI data with strong diffusion weighting and, thus, low signal-to-noise ratio. Both the extracted real-valued and traditional magnitude data were compared regarding signal averaging, diffusion model fitting and accuracy in resolving crossing fibers. Our results clearly indicate that real-valued diffusion data enables idealized conditions for signal averaging. Furthermore, the proposed method enables unbiased use of widely employed linear least squares estimators for model fitting and demonstrates an increased sensitivity to detect secondary fiber directions with reduced angular error. The use of phase-corrected, real-valued data for dMRI will therefore help to clear the way for more detailed and accurate studies of white matter microstructure and structural connectivity on a fine scale. Copyright © 2015 Elsevier Inc. All rights reserved.
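The noise-floor effect that motivates this work can be illustrated numerically with simulated noise (a minimal sketch, not the authors' phase-correction algorithm): with zero true signal, averaging magnitude data converges to a nonzero Rayleigh floor of σ√(π/2), while averaging phase-corrected real-valued data converges to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure complex Gaussian noise (unit variance per component), zero true signal.
noise = rng.normal(size=100_000) + 1j * rng.normal(size=100_000)

# Magnitude data: signal averaging cannot remove the Rayleigh noise floor.
magnitude_floor = np.abs(noise).mean()   # converges to sqrt(pi/2), not 0

# Real-valued (phase-corrected) data: zero-mean Gaussian, so averaging is unbiased.
real_mean = noise.real.mean()            # converges to 0
```

This bias is what distorts diffusion model fits at the low signal-to-noise ratios reached with strong diffusion weighting.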
Modified Weighting for Calculating the Average Concentration of Non-Point Source Pollutant
Institute of Scientific and Technical Information of China (English)
牟瑞芳
2004-01-01
The concentration of pollutant in runoff depends on that of soil loss, and the latter is assumed to be linear in the value of EI, the product of total storm energy E and the maximum 30-min intensity I30 for a given rainstorm. Usually, the maximum accumulated rainfall of a rainstorm produces the maximum amount of runoff, but this does not equal the maximum erosion and does not always lead to the maximum concentration. Thus, the average concentration weighted by runoff volume is somewhat unreasonable. An improvement to the calculation method of non-point source pollution load put forward by professor Li Huaien is proposed: in place of the runoff weight, the EI value of a single rainstorm is introduced as the new weight. An example of the Fujing River watershed shows that its application is effective.
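The proposed weighting change amounts to a simple substitution in the weighted-average formula (a sketch of the arithmetic only; the function name is my own and the record does not give the exact load equation):

```python
def ei_weighted_concentration(concentrations, ei_values):
    """Average pollutant concentration across rainstorms, weighted by each
    storm's EI value (total storm energy E times maximum 30-min intensity
    I30) instead of by its runoff volume."""
    total_ei = sum(ei_values)
    return sum(c * ei for c, ei in zip(concentrations, ei_values)) / total_ei
```

A storm with a large EI (high erosivity) thus pulls the average toward its concentration even if its runoff volume was not the largest.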
Directory of Open Access Journals (Sweden)
Yunna Wu
2017-07-01
We propose a new class of aggregation operators based on utility functions and apply them to a group decision-making problem. First, based on an optimal deviation model, a new operator called the interval generalized ordered weighted utility multiple averaging (IGOWUMA) operator is proposed; it incorporates the risk attitude of decision-makers (DMs) in the aggregation process. Some desirable properties of the IGOWUMA operator are studied afterward. Subsequently, under the hyperbolic absolute risk aversion (HARA) utility function, another new operator, the interval generalized ordered weighted hyperbolic absolute risk aversion utility multiple averaging (IGOWUMA-HARA) operator, is also defined. We then discuss its families and find that it includes a wide range of aggregation operators. To determine the weights of the IGOWUMA-HARA operator, a preemptive nonlinear objective programming model is constructed, which can determine a uniform weighting vector that guarantees a uniform standard of comparison between the alternatives and measures their fair competition under the condition of valid comparison between the various alternatives. Moreover, a new approach for group decision-making is developed based on the IGOWUMA-HARA operator. Finally, a comparative analysis is carried out to illustrate the superiority of the proposed method; the result implies that our operator is superior to the existing operators.
Derivation of a volume-averaged neutron diffusion equation; Atomos para el desarrollo de Mexico
Energy Technology Data Exchange (ETDEWEB)
Vazquez R, R.; Espinosa P, G. [UAM-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Morales S, Jaime B. [UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, Jiutepec, Morelos 62550 (Mexico)]. e-mail: rvr@xanum.uam.mx
2008-07-01
This paper presents a general theoretical analysis of the problem of neutron motion in a nuclear reactor, where large variations in neutron cross sections normally preclude the use of the classical neutron diffusion equation. A volume-averaged neutron diffusion equation is derived which includes correction terms for diffusion and nuclear reaction effects. A method is presented to determine closure relationships for the volume-averaged neutron diffusion equation (e.g., effective neutron diffusivity). In order to describe the distribution of neutrons in a highly heterogeneous configuration, it was necessary to extend the classical neutron diffusion equation. Thus, the volume-averaged diffusion equation includes two correction factors: the first is related to the absorption process of the neutron and the second is a contribution to the neutron diffusion; both parameters are related to neutron effects at the interface of a heterogeneous configuration. (Author)
Lattice Boltzmann Model for The Volume-Averaged Navier-Stokes Equations
Zhang, Jingfeng; Ouyang, Jie
2014-01-01
A numerical method, based on discrete lattice Boltzmann equation, is presented for solving the volume-averaged Navier-Stokes equations. With a modified equilibrium distribution and an additional forcing term, the volume-averaged Navier-Stokes equations can be recovered from the lattice Boltzmann equation in the limit of small Mach number by the Chapman-Enskog analysis and Taylor expansion. Due to its advantages such as explicit solver and inherent parallelism, the method appears to be more competitive with traditional numerical techniques. Numerical simulations show that the proposed model can accurately reproduce both the linear and nonlinear drag effects of porosity in the fluid flow through porous media.
Scaling of the Average Receiving Time on a Family of Weighted Hierarchical Networks
Sun, Yu; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang
2016-08-01
In this paper, based on un-weighted hierarchical networks, a family of weighted hierarchical networks is introduced; the weight factor is denoted by r. The weighted hierarchical networks depend on the numbers of nodes in the complete bipartite graph, denoted by n1 and n2, with n = n1 + n2. Assume that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the weight of the edge linking them. We deduce the analytical expression of the average receiving time (ART). The obtained results display two regimes. In the large network, when nr > n1n2, the ART grows as a power-law function of the network size |V(Gk)| with exponent θ = log_n(nr/(n1n2)), 0 < θ < 1. This means that the smaller the value of θ, the more efficient the process of receiving information. When nr ≤ n1n2, the ART grows with increasing network size |V(Gk)| as log_n|V(Gk)| or (log_n|V(Gk)|)².
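The scaling exponent in the power-law regime is a one-line computation (a sketch of the stated formula only; the function name is my own):

```python
import math

def art_exponent(n1, n2, r):
    """Exponent theta = log_n(n*r / (n1*n2)), with n = n1 + n2, governing
    how the average receiving time scales with network size |V(Gk)| in the
    regime n*r > n1*n2."""
    n = n1 + n2
    return math.log(n * r / (n1 * n2), n)
```

For example, n1 = n2 = 2 and r = 2 gives n = 4, nr = 8 > 4, and θ = log_4(2) = 0.5, squarely inside the stated range 0 < θ < 1.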
The average free volume model for the ionic and simple liquids
Yu, Yang
2014-01-01
In this work, the molar volume thermal expansion coefficient of 60 room temperature ionic liquids is compared with their van der Waals volume Vw. Regular correlation can be discerned between the two quantities. An average free volume model, that considers the particles as hard core with attractive force, is proposed to explain the correlation in this study. Some typical one atom liquids (molten metals and liquid noble gases) are introduced to verify this hypothesis. Good agreement between the theory prediction and experimental data can be obtained.
Rong, Y; Sillick, M; Gregson, C M
2009-01-01
Dextrose equivalent (DE) value is the most common parameter used to characterize the molecular weight of maltodextrins. Its theoretical value is inversely proportional to number average molecular weight (M(n)), providing a theoretical basis for correlations with physical properties important to food manufacturing, such as: hygroscopicity, the glass transition temperature, and colligative properties. The use of freezing point osmometry to measure DE and M(n) was assessed. Measurements were made on a homologous series of malto-oligomers as well as a variety of commercially available maltodextrin products with DE values ranging from 5 to 18. Results on malto-oligomer samples confirmed that freezing point osmometry provided a linear response with number average molecular weight. However, noncarbohydrate species in some commercial maltodextrin products were found to be in high enough concentration to interfere appreciably with DE measurement. Energy dispersive spectroscopy showed that sodium and chloride were the major ions present in most commercial samples. Osmolality was successfully corrected using conductivity measurements to estimate ion concentrations. The conductivity correction factor appeared to be dependent on the concentration of maltodextrin. Equations were developed to calculate corrected values of DE and M(n) based on measurements of osmolality, conductivity, and maltodextrin concentration. This study builds upon previously reported results through the identification of the major interfering ions and provides an osmolality correction factor that successfully accounts for the influence of maltodextrin concentration on the conductivity measurement. The resulting technique was found to be rapid, robust, and required no reagents.
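The inverse proportionality between DE and M(n) mentioned above is commonly written as DE = 100 × (180.16 / Mn), where 180.16 g/mol is the molar mass of glucose (dextrose); this is a textbook relation offered as context, not a formula taken from this record:

```python
def dextrose_equivalent(mn):
    """Theoretical dextrose equivalent from number-average molecular weight,
    using the common relation DE = 100 * (180.16 / Mn). Pure glucose
    (Mn = 180.16 g/mol) therefore has DE = 100."""
    return 100.0 * 180.16 / mn
```

Under this relation, a maltodextrin with DE 10 corresponds to Mn ≈ 1800 g/mol, which is why freezing point osmometry (a colligative, number-average measurement) can recover DE once ionic interferences are corrected for.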
Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo
2017-03-01
Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNN) have become one of the most common and reliable techniques in computer aided detection (CADe) and diagnosis (CADx), achieving state-of-the-art performances for various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules in a 2D image patch, we propose weighted average image patch (WAIP) generation by averaging multiple slice images of lung nodule candidates. Moreover, to emphasize the central slices of lung nodules, slice images are locally weighted according to a Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, a 2D CNN is trained to classify WAIPs of lung nodule candidates into positive and negative labels. We used the LUNA 2016 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show our approach improves the classification accuracy of lung nodules compared to the baseline 2D CNN with patches from a single slice image.
Weighted Average Finite Difference Methods for Fractional Reaction-Subdiffusion Equation
Directory of Open Access Journals (Sweden)
Nasser Hassen SWEILAM
2014-04-01
In this article, a numerical study of fractional reaction-subdiffusion equations is introduced using a class of finite difference methods. These methods are extensions of the weighted average methods for ordinary (non-fractional) reaction-subdiffusion equations. A stability analysis of the proposed methods is given by a recently proposed procedure similar to the standard von Neumann stability analysis. A simple and accurate stability criterion, valid for different discretization schemes of the fractional derivative, arbitrary weight factor, and arbitrary order of the fractional derivative, is given and checked numerically. Numerical test examples, figures, and comparisons are presented for clarity. doi:10.14456/WJST.2014.50
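For the ordinary (non-fractional) diffusion case, the weighted average (theta) scheme that these methods extend can be sketched as follows. This is a simplified illustration under zero Dirichlet boundaries for u_t = u_xx, not the paper's fractional scheme: theta = 0 is explicit, theta = 1/2 is Crank-Nicolson, theta = 1 is fully implicit.

```python
import numpy as np

def theta_step(u, dt, dx, theta):
    """One step of the weighted average (theta) finite difference scheme for
    u_t = u_xx with zero Dirichlet ends:
    (I - theta*L) u_new = (I + (1-theta)*L) u,  L = discrete Laplacian * dt/dx^2."""
    n = len(u)
    r = dt / dx**2
    L = np.zeros((n, n))                       # scaled Laplacian, interior rows only
    for i in range(1, n - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = r, -2 * r, r
    A = np.eye(n) - theta * L
    b = (np.eye(n) + (1 - theta) * L) @ u
    return np.linalg.solve(A, b)
```

The weight factor controls stability: the explicit end is conditionally stable (r ≤ 1/2 here), while theta ≥ 1/2 is unconditionally stable, which is the trade-off the fractional stability criterion generalizes.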
Average projection type weighted Cramér-von Mises statistics for testing some distributions
Institute of Scientific and Technical Information of China (English)
CUI; Hengjian(崔恒建)
2002-01-01
This paper addresses the problem of testing goodness-of-fit for several important multivariate distributions: (Ⅰ) the uniform distribution on the p-dimensional unit sphere; (Ⅱ) the multivariate standard normal distribution; and (Ⅲ) the multivariate normal distribution with unknown mean vector and covariance matrix. The average projection type weighted Cramér-von Mises test statistic as well as estimated and weighted Cramér-von Mises statistics for testing distributions (Ⅰ), (Ⅱ) and (Ⅲ) are constructed via integrating the projection direction over the unit sphere, and the asymptotic distributions and expansions of those test statistics under the null hypothesis are also obtained. Furthermore, the approach of this paper can be applied to testing goodness-of-fit for elliptically contoured distributions.
Influence of Solvent Conditions on Average Relative Molecular Weight of Polyoctadecyl Acrylate
Institute of Scientific and Technical Information of China (English)
Jiang Qingzhe; Song Zhaozheng; Ke Ming; Zhao Mifu
2005-01-01
Polymerization of octadecyl acrylate is studied in four solvents: carbon tetrachloride, chloroform, methylbenzene and tetrachloroethane. Experimental results indicate that the order of chain transfer constants of the solvents in the polymerization of octadecyl acrylate is carbon tetrachloride > chloroform > methylbenzene > tetrachloroethane. The influences of the four solvents on the solubility of polyoctadecyl acrylate are not the same: in chloroform, polyoctadecyl acrylate shows the highest relative viscosity and the lowest chain termination rate constant. At higher conversion, the average relative molecular weight of polyoctadecyl acrylate depends mainly on the chain transfer constant of the solvent. When monomer conversion is higher than 30%, the viscosity effect induced by the polymeric molecular shape in the solvents has a strong influence on the relative molecular weight of the polymer obtained.
Directory of Open Access Journals (Sweden)
Bima Anjasmoro
2016-06-01
A feasibility study of potential small dams in Semarang District has identified 8 (eight) urgent potential small dams. These dams are to be constructed within 5 (five) years in order to overcome the problem of water shortage in the district. However, the government has a limited funding source, so it is necessary to select the most urgent small dams to be constructed within the limited budget. The purpose of this research is to determine the priority of small dam construction in Semarang District. The methods used to determine the priority in this study are cluster analysis, AHP and the weighted average method. The criteria used to determine the priority consist of: vegetation in the inundated area, volume of embankment, land acquisition area, useful storage, reservoir lifetime, water cost per m³, access road to the dam site, land status at the abutment and inundated area, construction cost, operation and maintenance cost, irrigation service area and raw water benefit. Based on the results of the cluster analysis, AHP and weighted average method, the priority order of small dam construction is: (1) Mluweh Small Dam (0.165), (2) Pakis Small Dam (0.142), (3) Lebak Small Dam (0.134), (4) Dadapayam Small Dam (0.128), (5) Gogodalem Small Dam (0.119), (6) Kandangan Small Dam (0.114), (7) Ngrawan Small Dam (0.102) and (8) Jatikurung Small Dam (0.096). Comparing the priority orders of the three methods shows that the AHP method is more detailed than the cluster analysis and weighted average methods, because its result is closer to the field conditions of each dam.
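The weighted average scoring used in the final ranking above can be sketched as follows; the criteria names, weights and per-dam scores in this example are invented for illustration and are not the study's actual values:

```python
# Hedged sketch of the weighted average method for ranking alternatives.
# All weights and scores below are hypothetical, not the study's data.

def weighted_average_scores(weights, scores):
    """Return {alternative: weighted average score}; weights need not sum to 1."""
    total_w = sum(weights.values())
    return {
        alt: sum(weights[c] * s[c] for c in weights) / total_w
        for alt, s in scores.items()
    }

# Three of the study's twelve criteria, with made-up weights
weights = {"construction_cost": 0.40, "useful_storage": 0.35, "access_road": 0.25}
# Normalized scores for two hypothetical dams
scores = {
    "Dam A": {"construction_cost": 0.8, "useful_storage": 0.6, "access_road": 0.9},
    "Dam B": {"construction_cost": 0.5, "useful_storage": 0.9, "access_road": 0.4},
}
ranking = sorted(weighted_average_scores(weights, scores).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```

In the study this scoring is only one of three compared methods; AHP additionally derives the criteria weights from pairwise comparison matrices rather than assigning them directly.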
Institute of Scientific and Technical Information of China (English)
Igor Boglaev; Matthew Hardy
2008-01-01
This paper presents and analyzes a monotone domain decomposition algorithm for solving nonlinear singularly perturbed reaction-diffusion problems of parabolic type. To solve the nonlinear weighted average finite difference scheme for the partial differential equation, we construct a monotone domain decomposition algorithm based on a Schwarz alternating method and a box-domain decomposition. This algorithm needs only to solve linear discrete systems at each iterative step and converges monotonically to the exact solution of the nonlinear discrete problem. The rate of convergence of the monotone domain decomposition algorithm is estimated. Numerical experiments are presented.
Visual Tracking Using Max-Average Pooling and Weight-Selection Strategy
Directory of Open Access Journals (Sweden)
Suguo Zhu
2014-01-01
Many modern visual tracking algorithms incorporate spatial pooling, max pooling, or average pooling to achieve invariance to feature transformations and better robustness to occlusion, illumination change, and position variation. In this paper, a max-average pooling method and a weight-selection strategy are proposed within a hybrid framework that combines sparse representation and a particle filter, exploiting the spatial information of an object and making good compromises to ensure the correctness of the results. These challenges are handled well by the proposed algorithm. Experimental results demonstrate the effectiveness and robustness of the proposed algorithm compared with state-of-the-art methods on challenging sequences.
Institute of Scientific and Technical Information of China (English)
XIA Rui; ZHANG Yuan; ZHANG Meng-heng; LIU Ke-xin; WU Jie-yun; ZHENG Zhi-rong; GONG Yao
2015-01-01
Increasing numbers of indoor air quality (IAQ) related complaints point to the fact that IAQ has become a significant occupational health and environmental issue. However, how to effectively evaluate IAQ across multiple indicators of different scales is still a challenge. The traditional single-indicator method is subject to uncertainties in assessing IAQ due to differing subjective judgments of good or bad quality and scalar differences among data sets. In this study, a multilevel integrated weighted average IAQ method, comprising an initial walk-through assessment (IWA) and a two-layer weighted average method, is developed and applied to evaluate the IAQ of the laboratory building at the University of Regina in Canada. Important chemical parameters related to IAQ, namely volatile organic compounds (VOCs), formaldehyde (HCHO), carbon dioxide (CO2), and carbon monoxide (CO), are evaluated based on 5 months of continuous monitoring data. The new integrated assessment result not only indicates the risk of an individual parameter, but is also able to quantify the overall IAQ risk at the sampling site. Finally, some recommendations based on the result are proposed to promote sustainable IAQ practices in the sampling area.
THE ASSESSMENT OF CORPORATE BONDS ON THE BASIS OF THE WEIGHTED AVERAGE
Directory of Open Access Journals (Sweden)
Victor V. Prokhorov
2014-01-01
The article considers the problem of assessing the interest rate of a public corporate bond issue. The subject of the research is techniques for evaluating the interest rates of corporate bonds. The article discusses the task of developing a methodology for assessing the market interest rate of a corporate bond loan that takes into account systematic and specific risks. A technique for evaluating the market interest rate of corporate bonds on the basis of weighted averages is proposed. This procedure uses in its calculation a cumulative barrier interest rate, a sectoral weighted average interest rate, and an interest rate determined on the basis of the CAPM (Capital Asset Pricing Model). The results make it possible to speak about the applicability of the proposed methodology for assessing the market interest rate of a public corporate bond issue under Russian conditions. The results may be applicable to Russian industrial enterprises organizing public bond issues, as well as to investment companies acting as organizers of corporate securities loans and other organizations specializing in investments in Russian public corporate bond loans.
Fuzzy weighted average based on left and right scores in Malaysia tourism industry
Kamis, Nor Hanimah; Abdullah, Kamilah; Zulkifli, Muhammad Hazim; Sahlan, Shahrazali; Mohd Yunus, Syaizzal
2013-04-01
Tourism is known as an important sector of the Malaysian economy, acting as an economic generator and creating businesses and job offers. It is reported to bring in almost RM30 billion of national income, thanks to intense worldwide promotion by Tourism Malaysia. One of the well-known attractions of Malaysia is its beautiful islands. The islands continue to be developed into tourist spots, attracting a steady stream of tourists. Chalets, luxury bungalows and resorts quickly develop along the coastlines of popular islands like Tioman, Redang, Pangkor, Perhentian, Sibu and many others. In this study, we applied the Fuzzy Weighted Average (FWA) method based on left and right scores in order to determine the criteria weights and to select the best island in Malaysia. Cost, safety, attractive activities, accommodation and scenery are the five main criteria considered, and five selected islands in Malaysia are taken into account as alternatives. The criteria most important to tourists are identified from the ranking of the criteria weights, and the best island in Malaysia is then determined in terms of FWA values. This pilot study can be used as a reference for evaluating performance or solving other selection problems, where more criteria, alternatives and decision makers can be considered in the future.
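As a rough illustration of a fuzzy weighted average, the sketch below uses triangular fuzzy numbers and replaces the paper's left-and-right-score defuzzification with a simple centroid; all criteria weights and ratings are invented:

```python
# Hedged sketch of a fuzzy weighted average (FWA) with triangular fuzzy
# numbers (a, b, c). The paper's left/right-score defuzzification is
# simplified here to the centroid; weights and ratings are hypothetical.

def centroid(tfn):
    """Crisp (defuzzified) value of a triangular fuzzy number."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def fuzzy_weighted_average(weights, ratings):
    """Crisp FWA value from triangular-fuzzy weights and ratings."""
    w = [centroid(t) for t in weights]
    r = [centroid(t) for t in ratings]
    return sum(wi * ri for wi, ri in zip(w, r)) / sum(w)

# Three criteria, e.g. cost, safety, scenery, as fuzzy weights on [0, 1]
weights = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5)]
# One island's fuzzy ratings per criterion
ratings = [(0.6, 0.8, 1.0), (0.4, 0.6, 0.8), (0.2, 0.4, 0.6)]
print(round(fuzzy_weighted_average(weights, ratings), 3))
```

Ranking the alternatives then amounts to computing this crisp FWA value for each island and sorting.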
Volume Averaging Theory (VAT) based modeling and closure evaluation for fin-and-tube heat exchangers
Zhou, Feng; Catton, Ivan
2012-10-01
A fin-and-tube heat exchanger was modeled based on Volume Averaging Theory (VAT) in such a way that the details of the original structure were replaced by their averaged counterparts, so that the VAT-based governing equations can be efficiently solved for a wide range of parameters. To complete the VAT-based model, proper closure is needed, which is related to a local friction factor and a heat transfer coefficient of a Representative Elementary Volume (REV). The terms in the closure expressions are complex, and relating experimental data to the closure terms is sometimes difficult. In this work we use CFD to evaluate the rigorously derived closure terms over one of the selected REVs. The objective is to show how heat exchangers can be modeled as a porous medium and how CFD can be used in place of a detailed, often formidable, experimental effort to obtain closure for the model.
Directory of Open Access Journals (Sweden)
Lucas Marin
2014-01-01
Linguistic variables are very useful for evaluating alternatives in decision making problems because they provide a vocabulary in natural language rather than numbers. Some aggregation operators for linguistic variables force the use of a symmetric and uniformly distributed set of terms. The need to relax these conditions has recently been posited. This paper presents the induced unbalanced linguistic ordered weighted average (IULOWA) operator. This operator can deal with a set of unbalanced linguistic terms that are represented using fuzzy sets. We propose a new order-inducing criterion based on the specificity and fuzziness of the linguistic terms. Different relevancies are given to the fuzzy values according to their uncertainty degree. To illustrate the behaviour of the precision-based IULOWA operator, we present an environmental assessment case study in which a multiperson multicriteria decision making model is applied.
Adaptive polarization image fusion based on regional energy dynamic weighted average
Institute of Scientific and Technical Information of China (English)
ZHAO Yong-qiang; PAN Quan; ZHANG Hong-cai
2005-01-01
According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, there is much redundant and complementary information in polarized images. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, clutter in the images can be removed efficiently while the detailed information is maintained by combining these images. An adaptive polarization image fusion algorithm based on regional energy dynamic weighted averaging is proposed in this paper to combine these images. In an experiment and simulations, most clutter is removed by this algorithm. The fusion method is applied under different lighting conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
Detecting the start of an influenza outbreak using exponentially weighted moving average charts
Directory of Open Access Journals (Sweden)
Coory Michael
2010-06-01
Background: Influenza viruses cause seasonal outbreaks in temperate climates, usually during winter and early spring, and are endemic in tropical climates. The severity and length of influenza outbreaks vary from year to year. Quick and reliable detection of the start of an outbreak is needed to promote public health measures. Methods: We propose the use of an exponentially weighted moving average (EWMA) control chart of laboratory-confirmed influenza counts to detect the start and end of influenza outbreaks. Results: The chart is shown to provide timely signals in an example application with seven years of data from Victoria, Australia. Conclusions: The EWMA control chart could be applied in other settings to quickly detect influenza outbreaks.
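A minimal sketch of such an EWMA chart applied to weekly case counts follows; the smoothing constant, limit width, and baseline parameters are generic textbook choices, not those tuned in the paper:

```python
# Hedged sketch of an EWMA control chart on count data. The parameters
# lam = 0.2 and L = 3 are conventional defaults; the in-control mean and
# standard deviation (mu0, sigma0) and the counts are invented.

def ewma_signals(counts, mu0, sigma0, lam=0.2, L=3.0):
    """Return indices where the EWMA statistic exceeds the upper control limit."""
    z = mu0  # EWMA statistic starts at the in-control mean
    # Steady-state upper control limit from the asymptotic EWMA variance
    ucl = mu0 + L * sigma0 * (lam / (2 - lam)) ** 0.5
    signals = []
    for i, x in enumerate(counts):
        z = lam * x + (1 - lam) * z
        if z > ucl:
            signals.append(i)
    return signals

# Baseline of ~5 confirmed cases/week, then an outbreak begins
counts = [4, 5, 6, 5, 4, 5, 9, 14, 22, 30]
print(ewma_signals(counts, mu0=5.0, sigma0=2.0))
```

Because the EWMA accumulates weighted history, it signals on the sustained rise rather than on a single noisy week.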
Directory of Open Access Journals (Sweden)
Björn Nitzsche
2015-06-01
Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs and rodents. However, they are lacking for other species relevant in experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries; 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis including examinations with respect to body weight, age and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and body weight explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; however, ewes showed significantly more GM per body weight as compared to neutered rams. The created framework including spatial brain template and TPM represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.
PREPARATION OF ULTRA-LOW VOLUME WEIGHT AUTOCLAVED AERATED CONCRETE
Directory of Open Access Journals (Sweden)
Ondrej Koutny
2016-12-01
Autoclaved aerated concrete is a modern construction material that has gained popularity especially due to its thermal insulation performance, which results from low volume weight and a porous structure combined with sufficient mechanical strength. Nowadays, there are attempts to use this material for thermal insulation purposes and to replace current systems, which have many disadvantages, mainly concerning durability. The key to improving thermal insulation properties is therefore obtaining a material based on autoclaved aerated concrete with extremely low volume weight (below 200 kg/m³), ensuring good thermal insulation properties but with sufficient mechanical properties to allow easy handling. This material can be prepared by foaming very fine powder materials such as silica fume or very finely ground sand. This paper deals with the possibilities of preparation and summarizes the basic requirements for successful preparation of such a material.
U.S. Geological Survey, Department of the Interior — This digital data release consists of seven national data files of area- and depth-weighted averages of select soil attributes for every available county in the...
Directory of Open Access Journals (Sweden)
Qiutong Jin
2016-06-01
Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data for the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps, interpolated at 500 m spatial resolution by MLRK and GWRK, had high accuracy and captured detailed spatial distributions; however, MLRK produced a lower prediction error and a higher explained variance than GWRK, although the differences were small, in contrast to conclusions from similar studies.
Directory of Open Access Journals (Sweden)
Michele Scagliarini
2016-06-01
Exponentially weighted moving average (EWMA) control charts have been used successfully in recent years in several areas of healthcare. Most of these applications have concentrated on the problem of detecting shifts in the mean level of a process. The EWMA chart for monitoring variability has in general received less attention than its counterpart for the mean, although it is equally important and, to the best of our knowledge, has never been used in the healthcare framework. In this work, EWMA control charts were applied retrospectively to monitor the mean and variability of a hospital organizational performance indicator. The aim was to determine whether EWMA control charts can be used as a comprehensive approach for assessing the steady-state behaviour of the process and for early detection of changes indicating either improvement or deterioration in the performance of healthcare organizations. The results showed that the EWMA control schemes generate easy-to-read data displays that reflect process performance, allowing continuous monitoring and prompt detection of changes in process performance. Currently, hospital managers are designing an operating room dashboard which also includes the EWMA control charts.
Nitzsche, Björn; Frey, Stephen; Collins, Louis D; Seeger, Johannes; Lobsien, Donald; Dreyer, Antje; Kirsten, Holger; Stoffel, Michael H; Fonov, Vladimir S; Boltze, Johannes
2015-01-01
Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species being relevant in experimental neuroscience including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and BW explained about 15% of the variance of GM while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; however, ewes showed significantly more GM per bodyweight as compared to neutered rams. The created framework including spatial brain template and TPM represent a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.
Time weighted average concentration monitoring based on thin film solid phase microextraction.
Ahmadi, Fardin; Sparham, Chris; Boyaci, Ezel; Pawliszyn, Janusz
2017-03-02
Time weighted average (TWA) passive sampling with thin film solid phase microextraction (TF-SPME) and liquid chromatography tandem mass spectrometry (LC-MS/MS) was used for collection, identification, and quantification of benzophenone-3, benzophenone-4, 2-phenylbenzimidazole-5-sulphonic acid, octocrylene, and triclosan in the aquatic environment. Two types of TF-SPME passive samplers, a retracted thin film device using a hydrophilic lipophilic balance (HLB) coating, and an open bed configuration with an octadecyl silica-based (C18) coating, were evaluated in an aqueous standard generation (ASG) system. Laboratory calibration results indicated that the retracted thin film device using the HLB coating is suitable for determining TWA concentrations of polar analytes in water, with an uptake that was linear for up to 70 days. In the open bed form, a one-calibrant kinetic calibration technique was accomplished by loading benzophenone-3-d5 as calibrant on the C18 coating to quantify all non-polar compounds. The experimental results showed that the one-calibrant kinetic calibration technique can be used for determination of classes of compounds in cases where deuterated counterparts are either unavailable or expensive. The developed passive samplers were deployed in wastewater-dominated reaches of the Grand River (Kitchener, ON) to verify their feasibility for determining TWA concentrations in on-site applications. Field trial results indicated that these devices are suitable for long-term and short-term monitoring of compounds varying in polarity, such as UV blockers and biocide compounds in water, and the data were in good agreement with literature data.
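In the linear-uptake regime of such a passive sampler, the TWA concentration follows from the accumulated mass and a calibrated sampling rate via the standard relation C_TWA = n/(Rs·t); the sketch below uses invented numbers, and in practice Rs would come from laboratory calibration such as the ASG system described above:

```python
# Hedged sketch of the TWA concentration calculation for a passive sampler
# in its linear-uptake regime: C_TWA = n / (Rs * t). All values are
# hypothetical; Rs must be determined by calibration for each analyte.

def twa_concentration(amount_ng, sampling_rate_ml_per_day, days):
    """TWA concentration in ng/mL from the mass accumulated on the coating."""
    return amount_ng / (sampling_rate_ml_per_day * days)

# 140 ng accumulated over a 70-day deployment at Rs = 2.0 mL/day
print(twa_concentration(140.0, 2.0, 70))
```

The 70-day linear-uptake window reported for the retracted HLB device is what makes this simple relation applicable to long deployments.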
Davit, Yohan
2013-12-01
A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.
Measurement of average density and relative volumes in a dispersed two-phase fluid
Sreepada, Sastry R.; Rippel, Robert R.
1992-01-01
An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density, and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.
Berezhkovskii, Alexander M.; Weiss, George H.
1996-07-01
In order to extend the greatly simplified Smoluchowski model for chemical reaction rates it is necessary to incorporate many-body effects. A generalization with this feature is the so-called trapping model in which random walkers move among a uniformly distributed set of traps. The solution of this model requires consideration of the distinct number of sites visited by a single n-step random walk. A recent analysis [H. Larralde et al., Phys. Rev. A 45, 1728 (1992)] has considered a generalized version of this problem by calculating the average number of distinct sites visited by N n-step random walks. A related continuum analysis is given in [A. M. Berezhkovskii, J. Stat. Phys. 76, 1089 (1994)]. We consider a slightly different version of the general problem by calculating the average volume of the Wiener sausage generated by Brownian particles generated randomly in time. The analysis shows that two types of behavior are possible: one in which there is strong overlap between the Wiener sausages of the particles, and the second in which the particles are mainly independent of one another. Either one or both of these regimes occur, depending on the dimension.
Directory of Open Access Journals (Sweden)
H. Matsueda
2010-02-01
Column-averaged volume mixing ratios of carbon dioxide (XCO2) during the period from January 2007 to May 2008 over Tsukuba, Japan, were derived using CO2 concentration data observed by Japan Airlines Corporation (JAL) commercial airliners, based on the assumption that CO2 profiles over Tsukuba and Narita were the same. CO2 profile data for 493 flights on clear-sky days were analysed in order to calculate XCO2 with an ancillary dataset: Tsukuba observational data (by rawinsonde and a meteorological tower) or global meteorological data (NCEP and CIRA-86). The amplitude of the seasonal variation of XCO2 (Tsukuba observational) from the Tsukuba observational data was determined by a least-squares fit using a harmonic function to roughly evaluate the seasonal variation over Tsukuba. The highest and lowest values of the obtained fitted curve in 2007 for XCO2 (Tsukuba observational) were 386.4 and 381.7 ppm in May and September, respectively. The dependence of XCO2 on the type of ancillary dataset was evaluated. The average difference between XCO2 (global) from global climatological data and XCO2 (Tsukuba observational), i.e., the bias of XCO2 (global) with respect to XCO2 (Tsukuba observational), was found to be -0.621 ppm with a standard deviation of 0.682 ppm. The uncertainty of XCO2 (global) with respect to XCO2 (Tsukuba observational) was estimated to be 0.922 ppm. This small uncertainty suggests that the present method of XCO2 calculation using data from airliners and global climatological data can be applied to the validation of GOSAT products for XCO2 over airports worldwide.
Zaman, B.; Riaz, M.; Abbas, N.; Does, R.J.M.M.
2015-01-01
Shewhart, exponentially weighted moving average (EWMA), and cumulative sum (CUSUM) charts are well-known statistical tools for handling special causes and bringing a process back into statistical control. Shewhart charts are useful for detecting large shifts, whereas EWMA and CUSUM charts are more sensitive to small shifts.
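To contrast with the Shewhart and EWMA charts, a one-sided upper CUSUM can be sketched as below; the reference value k = 0.5σ and decision interval h = 5σ are conventional defaults rather than values from the article, and the data are simulated:

```python
# Hedged sketch of a one-sided upper CUSUM chart. The defaults k = 0.5 and
# h = 5 (in sigma units) are standard textbook choices; the data are
# simulated, with a small sustained 1-sigma upward shift starting at index 5.

def cusum_upper(data, mu0, sigma, k=0.5, h=5.0):
    """Return the first index at which the upper CUSUM exceeds h*sigma, or None."""
    s = 0.0
    for i, x in enumerate(data):
        # Accumulate deviations above mu0, discounted by the allowance k*sigma
        s = max(0.0, s + (x - mu0) - k * sigma)
        if s > h * sigma:
            return i
    return None

# In-control mean 10, sigma 1; the mean shifts to ~11 from index 5 onward
data = [10.1, 9.8, 10.2, 9.9, 10.0, 11.1, 11.0, 11.2, 10.9, 11.1,
        11.0, 11.2, 11.1, 10.9, 11.0, 11.2, 11.1]
print(cusum_upper(data, mu0=10.0, sigma=1.0))
```

A Shewhart chart with 3-sigma limits would never signal on this series (no single point exceeds 13), while the CUSUM accumulates the small shift until it crosses the decision interval.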
Energy Technology Data Exchange (ETDEWEB)
Espinosa-Paredes, Gilberto, E-mail: gepe@xanum.uam.m [Area de Ingenieria en Recursos Energeticos, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Apartado Postal 55-535, Mexico D.F. 09340 (Mexico)
2010-05-15
The aim of this paper is to propose a framework to obtain a new formulation for multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the averaged volume conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique, and therefore the macroscopic theories of multiphase flow, should be modified in order to include appropriate considerations and terms in the corresponding equations. In these cases the local form of the averaged volume conservation equations is not appropriate to describe the multiphase system. As an example of conservation equations without length-scale restrictions, a natural circulation boiling water reactor was considered in order to study non-local effects on the thermal-hydraulic core performance during steady-state and transient behavior, and the results were compared with those of the classic local averaging volume conservation equations.
Grunau, Ruth Eckstein; Whitfield, Michael F; Davis, Cynthia
2002-06-01
To examine the prevalence and pattern of specific areas of learning disability (LD) in neurologically normal children with extremely low birth weight (ELBW) and intelligence in the normal range, compared with full-term children of normal birth weight and comparable sociodemographic background, and to explore concurrent cognitive correlates of the specific LDs. Longitudinal follow-up; geographically defined region. Regional follow-up program. Wechsler Intelligence Scale for Children-Revised, Gray Oral Reading Test-Revised, Test of Written Language-Revised, Wide Range Achievement Test-Revised, Developmental Test of Visual-Motor Integration. One hundred fourteen (87%) of 131 children with ELBW born between 1982 and 1987 were seen at ages 8 to 9 years. Of the 114 children, the 74 who were neurologically normal, with a Verbal or Performance IQ of 85 or greater, formed the study group. A group of 30 full-term children with normal birth weight and similar sociodemographic status formed a comparison group. The children were predominantly white and middle class. Significantly more children with ELBW (65%) met criteria for LD in one or more areas, compared with 13% of the comparison children. In the ELBW group, the most frequently affected area was written output, then arithmetic, then reading. Visuospatial and visual-motor abilities, in combination with verbal functioning, primarily explained performance in arithmetic and reading among children with ELBW, unlike the control children, whose scores were associated only with verbal functioning. Complex LDs in multiple academic domains are common sequelae among broadly middle-class, predominantly white, neurologically normal children with ELBW compared with control peers. The developmental etiology of LDs differs between children with ELBW and control peers.
Using exponentially weighted moving average algorithm to defend against DDoS attacks
CSIR Research Space (South Africa)
Machaka, P
2016-11-01
(1) the effect on detection rate of the alarm threshold (α) tuning parameter; (2) the effect on detection rate of the EWMA weighting factor (β) tuning parameter; (3) the trade-off between detection rate and the false positive rate; (4) the trade-off between ... improves. It can be seen that there is a trade-off between detection rate and false positive rate. The effect of the EWMA factor (β): in this section we seek to investigate the effect of the value of the EWMA factor (β) on the detection rate...
Energy Technology Data Exchange (ETDEWEB)
Cosemans, G.; Kretzschmar, J. [Flemish Inst. for Technological Research (Vito), Mol (Belgium)
2004-07-01
Pollutant roses are polar diagrams that show how air pollution depends on wind direction. If an ambient air quality monitoring station is markedly influenced by a source of the pollutant measured, the pollutant rose shows a peak towards the local source. When both wind direction data and pollutant concentration are measured as (1/2)-hourly averages, the pollutant rose is mathematically well defined and the computation is simple. When the pollutant data are averages over 24 h, as is the case for heavy metals or dioxin levels or in many cases PM10-levels in ambient air, the pollutant rose is mathematically well defined, but the computational scheme is not obvious. In this paper, two practical methods to maximize the information content of pollutant roses based on 24 h pollutant concentrations are presented. These methods are applied to time series of 24 h SO{sub 2} concentrations, derived from the 1/2-hourly SO{sub 2} concentrations measured in the Antwerp harbour, industrial, urban and rural regions by the Telemetric Air Quality Monitoring Network of the Flemish Environmental Agency (VMM). The pollutant roses computed from the 1/2-hourly SO{sub 2} concentrations constitute reference or control-roses to evaluate the representativeness or truthfulness of pollutant roses obtained by the presented methods. The presented methodology is very useful in model validations that have to be based on measured daily averaged concentrations as only available real ambient levels. While the methods give good pollutant roses in general, this paper especially deals with the case of pollutant roses with 'false' peaks. (orig.)
Prediction of oil palm production using the weighted average of fuzzy sets concept approach
Nugraha, R. F.; Setiyowati, Susi; Mukhaiyar, Utriweni; Yuliawati, Apriliani
2015-12-01
Proper planning is crucial for decision making in a company. For oil palm producer companies, the prediction of future production realizations is useful and is considered in making the company's strategies, so predicting as accurately as possible is essential. Until now, to predict the next monthly oil palm productions, the company has used the simple mean of the latest five years of observations. Lately, imprecision in the estimates of oil palm production (overestimation) has become a problem and the focus of attention in the company. Here we propose a weighted mean approach using the fuzzy sets concept for estimation and prediction. We find that, in contrast to the simple mean, the prediction using the fuzzy concept almost always underestimates the realizations.
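A minimal sketch of the weighted-mean idea, with hypothetical weights standing in for the paper's fuzzy membership grades (the actual fuzzy-set construction is not specified in the abstract):

```python
def fuzzy_weighted_mean(history, weights):
    """Weighted mean of past productions; weights act like membership grades.
    Hypothetical illustration -- the paper's fuzzy-set weighting is richer."""
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, history)) / total

# Five yearly observations for the same month; more recent years weigh more.
history = [100, 104, 98, 110, 108]
pred_simple = sum(history) / 5  # the company's current simple mean
pred_fuzzy = fuzzy_weighted_mean(history, weights=[0.2, 0.4, 0.6, 0.8, 1.0])
```

The weighted version shifts the estimate toward recent realizations instead of treating all five years equally.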
Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.
2017-09-01
Quality control is crucial in a wide variety of fields, as it can help to satisfy customers’ needs and requirements by enhancing and improving products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA \bar{X} chart because the median-type chart is more robust against contamination, outliers or small deviations from the normality assumption than the traditional \bar{X}-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than depending solely on the average run length (ARL) performance measure. Interpretation depending on the ARL alone can be misleading, because the skewness and shape of the run-length distribution change with the process mean shift, varying from almost symmetric when the magnitude of the mean shift is large to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, the optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL, while retaining the IC ARL at a desired value.
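The run-length percentiles discussed above can be estimated by Monte Carlo simulation. The sketch below assumes a simple two-sided EWMA chart on subgroup medians of N(0,1) data with illustrative design constants (λ, L); it is not the paper's exact chart design or optimization.

```python
import random
import statistics

def ewma_median_run_length(lam, L, n=5, shift=0.0, rng=None, max_len=10_000):
    """One simulated run length of an EWMA chart on subgroup medians.

    lam: smoothing constant; L: control-limit width in units of the
    asymptotic EWMA standard deviation (illustrative chart design).
    """
    rng = rng or random.Random()
    # Large-sample std. dev. of the median of n N(0,1) observations:
    # sd ~ sqrt(pi/2)/sqrt(n) ~ 1.2533/sqrt(n).
    sigma_med = 1.2533 / n ** 0.5
    limit = L * sigma_med * (lam / (2 - lam)) ** 0.5
    z = 0.0
    for t in range(1, max_len + 1):
        med = statistics.median(rng.gauss(shift, 1) for _ in range(n))
        z = lam * med + (1 - lam) * z
        if abs(z) > limit:
            return t  # first out-of-control signal
    return max_len

rng = random.Random(1)
runs = sorted(ewma_median_run_length(0.1, 2.7, rng=rng) for _ in range(200))
arl = sum(runs) / len(runs)
# Rough 10th/50th/90th percentiles of the run-length distribution.
p10, p50, p90 = runs[19], runs[99], runs[179]
```

Comparing p10, p50 and p90 with the ARL makes the skewness of the in-control run-length distribution visible, which is exactly the point the abstract makes about relying on the ARL alone.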
Kabala, Z. J.
1997-08-01
Under the assumption that local solute dispersion is negligible, a new general formula (in the form of a convolution integral) is found for the arbitrary k-point ensemble moment of the local concentration of a solute convected in arbitrary m spatial dimensions with general sure initial conditions. From this general formula new closed-form solutions in m=2 spatial dimensions are derived for 2-point ensemble moments of the local solute concentration for the impulse (Dirac delta) and Gaussian initial conditions. When integrated over an averaging window, these solutions lead to new closed-form expressions for the first two ensemble moments of the volume-averaged solute concentration and to the corresponding concentration coefficients of variation (CV). Also, for the impulse (Dirac delta) solute concentration initial condition, the second ensemble moment of the solute point concentration in two spatial dimensions and the corresponding CV are demonstrated to be unbounded. For impulse initial conditions the CVs for volume-averaged concentrations are compared with each other for a tracer from the Borden aquifer experiment. The point-concentration CV is unacceptably large in the whole domain, implying that the ensemble mean concentration is inappropriate for predicting the actual concentration values. The volume-averaged concentration CV decreases significantly with an increasing averaging volume. Since local dispersion is neglected, the new solutions should be interpreted as upper limits for the yet to be derived solutions that account for local dispersion; and so should the presented CVs for Borden tracers. The new analytical solutions may be used to test the accuracy of Monte Carlo simulations or other numerical algorithms that deal with stochastic solute transport. They may also be used to determine the size of the averaging volume needed to make a quasi-sure statement about the solute mass contained in it.
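The qualitative point that the CV of a volume-averaged concentration drops as the averaging window grows can be illustrated numerically. The sketch below is an assumption-laden stand-in: a 2D Gaussian plume with a randomly displaced centre replaces the paper's stochastic convection, and local dispersion is neglected as in the paper.

```python
import math
import random

def plume(x, y, cx, cy, s=1.0):
    """2D Gaussian concentration field centred at (cx, cy) (illustrative)."""
    return math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s * s)) / (2 * math.pi * s * s)

def cv_of_average(half_width, n_grid=5, n_mc=2000, seed=0):
    """CV of concentration averaged over a square window at the origin,
    with the plume centre randomly displaced (stand-in for random convection)."""
    rng = random.Random(seed)
    # Midpoint grid covering the averaging window.
    pts = [(-half_width + (i + 0.5) * 2 * half_width / n_grid,
            -half_width + (j + 0.5) * 2 * half_width / n_grid)
           for i in range(n_grid) for j in range(n_grid)]
    samples = []
    for _ in range(n_mc):
        cx, cy = rng.gauss(0, 2), rng.gauss(0, 2)
        samples.append(sum(plume(x, y, cx, cy) for x, y in pts) / len(pts))
    mean = sum(samples) / n_mc
    var = sum((s - mean) ** 2 for s in samples) / n_mc
    return var ** 0.5 / mean

# CV shrinks as the averaging window grows, mirroring the paper's conclusion.
cv_small, cv_large = cv_of_average(0.5), cv_of_average(4.0)
```

A near-point window gives a highly variable (large-CV) concentration, while a window large enough to capture most of the plume mass yields a nearly deterministic average.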
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
Kováčik, L; Kereïche, S; Matula, P; Raška, I
2014-01-01
Electron tomographic reconstructions suffer from a number of artefacts arising from effects accompanying the processes of acquisition of a set of tilted projections of the specimen in a transmission electron microscope and from its subsequent computational handling. The most pronounced artefacts usually come from imprecise projection alignment, distortion of specimens during tomogram acquisition and from the presence of a region of missing data in the Fourier space, the "missing wedge". The ray artefacts caused by the presence of the missing wedge can be attenuated by the angular image filter, which smooths the transition between the data and the missing-wedge regions. In this work, we present an analysis of the influence of angular filtering on the resolution of averaged repetitive structural motifs extracted from three-dimensional reconstructions of tomograms acquired in the single-axis tilting geometry.
Boroushaki, Soheil; Malczewski, Jacek
2008-04-01
This paper focuses on the integration of GIS and an extension of the analytical hierarchy process (AHP) using quantifier-guided ordered weighted averaging (OWA) procedure. AHP_OWA is a multicriteria combination operator. The nature of the AHP_OWA depends on some parameters, which are expressed by means of fuzzy linguistic quantifiers. By changing the linguistic terms, AHP_OWA can generate a wide range of decision strategies. We propose a GIS-multicriteria evaluation (MCE) system through implementation of AHP_OWA within ArcGIS, capable of integrating linguistic labels within conventional AHP for spatial decision making. We suggest that the proposed GIS-MCE would simplify the definition of decision strategies and facilitate an exploratory analysis of multiple criteria by incorporating qualitative information within the analysis.
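A quantifier-guided OWA combination can be sketched as follows, using Yager's RIM quantifier Q(r) = r^α to generate the order weights. The α values and criterion scores below are illustrative, not taken from the paper.

```python
def quantifier_weights(n, alpha):
    """Order weights from a regular monotone quantifier Q(r) = r**alpha
    (alpha < 1 ~ OR-like 'at least a few', alpha > 1 ~ AND-like 'most')."""
    Q = lambda r: r ** alpha
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(values, weights):
    """Ordered weighted average: weights apply to values sorted descending."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

scores = [0.9, 0.5, 0.2]             # criterion scores for one map location
w_or = quantifier_weights(3, 0.5)    # optimistic, OR-like decision strategy
w_and = quantifier_weights(3, 2.0)   # pessimistic, AND-like decision strategy
```

Varying α sweeps the operator between the extremes of max- and min-like behaviour, which is how changing the linguistic quantifier generates the range of decision strategies described above.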
Volume-Averaged Model of Inductively-Driven Multicusp Ion Source
Patel, Kedar K.; Lieberman, M. A.; Graf, M. A.
1998-10-01
A self-consistent spatially averaged model of high-density oxygen and boron trifluoride discharges has been developed for a 13.56 MHz, inductively coupled multicusp ion source. We determine positive ion, negative ion, and electron densities, the ground state and metastable densities, and the electron temperature as functions of the control parameters: gas pressure, gas flow rate, input power and reactor geometry. Neutralization and fragmentation into atomic species are assumed for all ions hitting the wall. For neutrals, a wall recombination coefficient for oxygen atoms and a wall sticking coefficient for boron trifluoride (BF3) and its dissociation products are the only adjustable parameters used to model the surface chemistry. For the aluminum walls of the ion source used in the Eaton ULE2 ion implanter, complete wall recombination of O atoms is found to give the best match to the experimental data for oxygen, whereas a sticking coefficient of 0.62 for all neutral species in a BF3 discharge was found to best match experimental data.
GASP- General Aviation Synthesis Program. Volume 5: Weight
Hague, D.
1978-01-01
Subroutines for determining the weights of propulsion system related components and the airframe components of an aircraft configuration are presented. Subroutines that deal with design load conditions, aircraft balance, and tail sizing are included. Options for turbine and internal combustion engines are provided.
Directory of Open Access Journals (Sweden)
Karin KANDANANOND
2010-12-01
Full Text Available The objective of this research is to select the appropriate control charts for detecting a shift in autocorrelated observations. The autocorrelated processes were characterized using AR(1) and IMA(1,1) models for stationary and non-stationary processes, respectively. A process model was simulated to obtain the response, the average run length (ARL). An empirical analysis was conducted to quantify the impacts of critical factors, e.g., the AR coefficient (φ), the MA coefficient (θ), the type of chart and the shift size, on the ARL. The results showed that the exponentially weighted moving average (EWMA) chart was the most appropriate control chart to monitor AR(1) and IMA(1,1) processes because of its sensitivity. For the non-stationary case, the ARL at positive θ was significantly higher than the one at negative θ when the shift size was small. If the performance of statistical process control under stationary and non-stationary disturbances is correctly characterized, practitioners will have guidelines for achieving the highest possible performance potential when deploying SPC.
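The ARL response used in the study can be approximated by simulation. The sketch below applies a two-sided EWMA chart to AR(1) data with a step mean shift; the chart constants and the i.i.d.-based control limit are illustrative assumptions, not the paper's design.

```python
import random

def simulate_arl(phi, shift, lam=0.2, L=3.0, n_runs=300, max_len=5000, seed=42):
    """Average run length of an EWMA chart applied to AR(1) data
    x_t = phi * x_{t-1} + e_t, with a step shift in the mean at t = 0.
    Illustrative design: the limit uses the i.i.d. EWMA variance."""
    rng = random.Random(seed)
    limit = L * (lam / (2 - lam)) ** 0.5
    total = 0
    for _ in range(n_runs):
        x, z = 0.0, 0.0
        for t in range(1, max_len + 1):
            x = phi * x + rng.gauss(0, 1)       # AR(1) disturbance
            z = lam * (x + shift) + (1 - lam) * z  # EWMA of shifted data
            if abs(z) > limit:
                total += t                       # signal: record run length
                break
        else:
            total += max_len                     # censored run
    return total / n_runs

# A larger shift is detected faster (smaller out-of-control ARL).
arl_small = simulate_arl(phi=0.5, shift=0.5)
arl_large = simulate_arl(phi=0.5, shift=2.0)
```

Sweeping φ, θ and the shift size over such simulations is the kind of empirical ARL analysis the abstract describes.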
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
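The core convolution step can be sketched as follows: a calculated profile is convolved with a normalised detector response kernel so that it becomes directly comparable with the chamber measurement. The 3-point kernel here is a toy stand-in for a real chamber response function.

```python
def convolve_profile(profile, kernel):
    """Convolve a sampled beam profile with a detector response kernel
    (odd length, normalised), mimicking chamber volume averaging."""
    k = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - k, 0), len(profile) - 1)  # clamp at the edges
            acc += w * profile[idx]
        out.append(acc)
    return out

# A step-like penumbra is blurred by a 3-point response function.
step = [1.0] * 5 + [0.0] * 5
blurred = convolve_profile(step, [0.25, 0.5, 0.25])
```

In the reoptimization loop described above, it is the calculated profile that gets blurred this way before being compared with the measurement, so both sides carry the same volume averaging effect.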
Energy Technology Data Exchange (ETDEWEB)
Chan Yuleung; Law Manyee; Howard, Robert [Chinese University of Hong Kong, Department of Diagnostic Radiology and Organ Imaging, Prince of Wales Hospital, Hong Kong (China); Li Chikong; Chik Kiwai [Chinese University of Hong Kong, Department of Paediatrics, Prince of Wales Hospital, Hong Kong (China)
2005-02-01
It is not known whether body weight alone can adjust for the volume of liver in the calculation of the chelating dose in {beta}-thalassaemia major patients, who frequently have iron overload and hepatitis. The hypothesis is that liver volume in children and adolescents suffering from {beta}-thalassaemia major is affected by ferritin level and liver function. Thirty-five {beta}-thalassaemia major patients aged 7-18 years and 35 age- and sex-matched controls had liver volume measured by MRI. Serum alanine aminotransferase (ALT) and ferritin levels were obtained in the thalassaemia major patients. Body weight explained 65 and 86% of the change in liver volume in {beta}-thalassaemia major patients and age-matched control subjects, respectively. Liver volume/kilogram body weight was significantly higher (P<0.001) in thalassaemia major patients than in control subjects. There was a significant correlation between ALT level and liver volume/kilogram body weight (r=0.55, P=0.001). Patients with elevated ALT had significantly higher liver volume/kilogram body weight (mean 42.9{+-}12 cm{sup 3}/kg) than control subjects (mean 23.4{+-}3.6 cm{sup 3}/kg) and patients with normal ALT levels (mean 27.4{+-}3.6 cm{sup 3}/kg). Body weight is the most important single factor for liver-volume changes in thalassaemia major patients, but elevated ALT also has a significant role. Direct liver volume measurement for chelation dose adjustment may be advantageous in patients with elevated ALT. (orig.)
LENUS (Irish Health Repository)
Dowling, Adam H
2011-06-01
The aim was to investigate the influence of the number average molecular weight and concentration of the poly(acrylic) acid (PAA) liquid constituent of a glass ionomer (GI) restorative on the compressive fracture strength (σ) and modulus (E).
Full-custom design of split-set data weighted averaging with output register for jitter suppression
Jubay, M. C.; Gerasta, O. J.
2015-06-01
A full-custom design of an element selection algorithm, named Split-set Data Weighted Averaging (SDWA), is implemented in a 90 nm CMOS Technology Synopsys library. The SDWA is applied to seven unit elements (3-bit) using a thermometer-coded input. Split-set DWA is an improved DWA algorithm that caters to the requirement for randomization along with long-term equal element usage. Randomization and equal element usage improve the spectral response of the unit elements through a higher spurious-free dynamic range (SFDR), without significantly degrading the signal-to-noise ratio (SNR). Being a full-custom design, it is brought down to transistor level and the custom chip layout is also provided, with a total area of 0.3 mm2, a power consumption of 0.566 mW, simulated at a 50 MHz clock frequency. In this implementation, the SDWA is further improved by introducing a register at the output that suppresses the jitter introduced at the final stage by switching loops and successive delays.
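For contrast with the split-set variant, the classic DWA element rotation (which the SDWA extends with randomisation) can be sketched as below; the pointer advance is what spreads usage evenly across the unit elements.

```python
def dwa_select(code, n_elements, pointer):
    """Classic data weighted averaging: pick `code` consecutive unit elements
    starting at `pointer`, then advance the pointer. The paper's split-set
    variant additionally randomises the selection."""
    selected = [(pointer + i) % n_elements for i in range(code)]
    return selected, (pointer + code) % n_elements

# 3-bit DAC with 7 unit elements; successive codes rotate through all elements.
ptr, usage = 0, [0] * 7
for code in [3, 5, 2, 4]:
    sel, ptr = dwa_select(code, 7, ptr)
    for e in sel:
        usage[e] += 1
```

After the four codes (summing to 14, i.e. two full rotations), every element has been used exactly twice, which is the long-term equal-usage property that shapes the mismatch noise.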
Directory of Open Access Journals (Sweden)
Chang-Feng Chi
2014-07-01
Full Text Available In the current study, the relationships between functional properties and average molecular weight (AMW) of collagen hydrolysates from Spanish mackerel (Scomberomorous niphonius) skin were researched. Seven hydrolysate fractions (5.04 ≤ AMW ≤ 47.82 kDa) from collagen of Spanish mackerel skin were obtained through the processes of acid extraction, proteolysis, and fractionation using gel filtration chromatography. The physicochemical properties of the collagen hydrolysate fractions were studied by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), gel filtration chromatography, scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR). The results indicated that there was an inverse relationship between the antioxidant activities and the logarithm of the AMW of the hydrolysate fractions in the tested AMW range. However, the reduction of AMW significantly enhanced the solubility of the hydrolysate fractions, while a similar AMW decrease negatively affected the emulsifying and foaming capacities. This appeared as a positive correlation between the logarithm of AMW and the emulsion stability index, emulsifying activity index, foam stability, and foam capacity. Therefore, collagen hydrolysates with excellent antioxidant activities or good functionality as emulsifiers could be obtained by controlling the effect of the digestion process on the AMW of the resultant hydrolysates.
Directory of Open Access Journals (Sweden)
Ignacio Vélez-Pareja
2009-12-01
Full Text Available Most finance textbooks present the Weighted Average Cost of Capital (WACC) calculation as WACC = Kd×(1-T)×D% + Ke×E%, where Kd is the cost of debt before taxes, T is the tax rate, D% is the percentage of debt on total value, Ke is the cost of equity and E% is the percentage of equity on total value. All of them state precisely (but not with enough emphasis) that the values used to calculate D% and E% are market values. Although they devote special space and thought to calculating Kd and Ke, little effort is devoted to the correct calculation of market values. This means that several points are not sufficiently dealt with: market values, location in time, occurrence of tax payments, WACC changes in time, and the circularity in calculating the WACC. The purpose of this note is to clear up these ideas, solve the circularity problem and emphasize some ideas that are usually overlooked. Some suggestions are also presented on how to calculate, or estimate, the equity cost of capital.
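The circularity the note refers to (market value depends on the WACC, whose weights D% and E% depend on market value) can be resolved by fixed-point iteration. Below is a minimal sketch for a no-growth perpetuity with hypothetical inputs, holding Ke fixed rather than re-levering it as a fuller treatment would.

```python
def wacc(kd, ke, debt, equity_value, tax):
    """Textbook formula WACC = Kd*(1-T)*D% + Ke*E% with market-value weights."""
    v = debt + equity_value
    return kd * (1 - tax) * debt / v + ke * equity_value / v

def firm_value_perpetuity(fcf, kd, ke, debt, tax, tol=1e-12, max_iter=200):
    """Solve the WACC circularity for a no-growth perpetuity:
    V = FCF/WACC while WACC's weights D/V and E/V depend on V.
    Hypothetical numbers; Ke is taken as given rather than re-levered."""
    v = fcf / ke  # start from an all-equity guess
    for _ in range(max_iter):
        w = wacc(kd, ke, debt, v - debt, tax)
        v_new = fcf / w
        if abs(v_new - v) < tol:
            return v_new, w
        v = v_new
    return v, w

v, w = firm_value_perpetuity(fcf=100.0, kd=0.08, ke=0.15, debt=300.0, tax=0.3)
```

At convergence the two definitions agree: the value discounted at the WACC reproduces the value used to compute the WACC weights, which is exactly the circular reference a spreadsheet resolves by iteration.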
Brain volume reduction predicts weight development in adolescent patients with anorexia nervosa.
Seitz, Jochen; Walter, Martin; Mainz, Verena; Herpertz-Dahlmann, Beate; Konrad, Kerstin; von Polier, Georg
2015-09-01
Acute anorexia nervosa (AN) is associated with marked brain volume loss potentially leading to neuropsychological deficits. However, the mechanisms leading to this brain volume loss and its influencing factors are poorly understood and the clinical relevance of these brain alterations for the outcome of these AN-patients is yet unknown. Brain volumes of 56 female adolescent AN inpatients and 50 healthy controls (HCs) were measured using MRI scans. Multiple linear regression analyses were used to determine the impact of body weight at admission, prior weight loss, age of onset and illness duration on volume loss at admission and to analyse the association of brain volume reduction with body weight at a 1-year follow-up (N = 25). Cortical and subcortical grey matter (GM) and cortical white matter (WM) but not cerebellar GM or WM were associated with low weight at admission. Amount of weight loss, age of onset and illness duration did not independently correlate with any volume changes. Prediction of age-adjusted standardized body mass index (BMI-SDS) at 1-year follow-up could be significantly improved from 34% of variance explained by age and BMI-SDS at admission to 47.5-53% after adding cortical WM, cerebellar GM or WM at time of admission. Whereas cortical GM changes appear to be an unspecific reflection of current body weight ("state marker"), cortical WM and cerebellar volume losses seem to indicate a longer-term risk (trait or "scar" of the illness), which appear to be important for the prediction of weight rehabilitation and long-term outcome. Copyright © 2015 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Larsen, Inge-Lise; Hjulsager, Charlotte Kristiane; Holm, Anders;
2016-01-01
the efficacy of three oral dosage regimens (5, 10 and 20mg/kg body weight) of oxytetracycline (OTC) in drinking water over a five-day period on diarrhoea, faecal shedding of LI and average daily weight gain (ADG). A randomised clinical trial was carried out in four Danish pig herds. In total, 539 animals from...
Normal gray and white matter volume after weight restoration in adolescents with anorexia nervosa.
Lázaro, Luisa; Andrés, Susana; Calvo, Anna; Cullell, Clàudia; Moreno, Elena; Plana, M Teresa; Falcón, Carles; Bargalló, Núria; Castro-Fornieles, Josefina
2013-12-01
The aim of this study was to determine whether treated, weight-stabilized adolescents with anorexia nervosa (AN) present brain volume differences in comparison with healthy controls. Thirty-five adolescents with weight-recovered AN and 17 healthy controls were assessed by means of psychopathology scales and magnetic resonance imaging. Axial three-dimensional T1-weighted images were obtained in a 1.5 Tesla scanner and analyzed using optimized voxel-based morphometry (VBM). There were no significant differences between controls and weight-stabilized AN patients with regard to global volumes of either gray or white brain matter, or in the regional VBM study. Differences were not significant between patients with psychopharmacological treatment and without, between those with amenorrhea and without, as well as between patients with restrictive versus purgative AN. The present findings reveal no global or regional gray or white matter abnormalities in this sample of adolescents following weight restoration. Copyright © 2013 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Don-Roger Parkinson
2016-02-01
Full Text Available Water samples were collected and analyzed for conductivity, pH, temperature and trihalomethanes (THMs) during the fall of 2014 at two monitored municipal drinking water source ponds. Both spot (or grab) and time weighted average (TWA) sampling methods were assessed over the same two-day sampling time period. For spot sampling, replicate samples were taken at each site and analyzed within 12 h of sampling by both headspace (HS-) and direct immersion (DI-) solid phase microextraction (SPME) sampling/extraction methods followed by gas chromatography/mass spectrometry (GC/MS). For TWA, a two-day passive on-site TWA sampling was carried out at the same sampling points in the ponds. All SPME sampling methods undertaken used a 65-µm PDMS/DVB SPME fiber, which was found optimal for THM sampling. Sampling conditions were optimized in the laboratory using calibration standards of chloroform, bromoform, bromodichloromethane, dibromochloromethane, 1,2-dibromoethane and 1,2-dichloroethane, prepared in aqueous solutions from analytical grade samples. Calibration curves for all methods with R2 values ranging from 0.985–0.998 (N = 5) over the quantitation linear range of 3–800 ppb were achieved. The different sampling methods were compared for quantification of the water samples, and results showed that DI- and TWA-sampling methods gave better data and analytical metrics. Addition of 10% wt./vol. of (NH4)2SO4 salt to the sampling vial was found to aid extraction of THMs by increasing GC peak areas by about 10%, which resulted in lower detection limits for all techniques studied. However, for on-site TWA analysis of THMs in natural waters, the calibration standards' ionic strength conditions must be carefully matched to natural water conditions to properly quantitate THM concentrations. The data obtained from the TWA method may better reflect actual natural water conditions.
Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C
2013-03-15
Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m⁻³ (8 ppm) with a limit of detection of 0.5 mg m⁻³ (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low
Uncertain Generalized Ordered Weighted Averaging Operator and Its Application
Institute of Scientific and Technical Information of China (English)
郏爱霞; 徐迎军
2015-01-01
This paper extends the generalized ordered weighted averaging (GOWA) operator of Yager to accommodate uncertain conditions where all input arguments take the form of interval-valued intuitionistic fuzzy numbers. Based on a method for comparing interval-valued intuitionistic fuzzy numbers using score and accuracy functions, an uncertain GOWA operator is proposed, and a multi-attribute group decision-making method based on the uncertain GOWA operator is also presented. The proposed operator is applied to supply chain management for a food manufacturer, and a numerical example shows the feasibility and effectiveness of the method.
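For real-valued arguments, Yager's GOWA operator (the starting point of the extension above) can be sketched in a few lines; the interval-valued intuitionistic fuzzy version replaces the arguments and their ordering with the score/accuracy-function comparison. Weights and scores below are illustrative.

```python
def gowa(values, weights, p):
    """Generalized OWA (Yager): order weights apply to the descending-sorted
    arguments, aggregated through a power mean with exponent p.
    p = 1 recovers the ordinary OWA operator."""
    ordered = sorted(values, reverse=True)
    return sum(w * v ** p for w, v in zip(weights, ordered)) ** (1 / p)

w = [0.3, 0.4, 0.3]
scores = [0.6, 0.9, 0.4]
plain_owa = gowa(scores, w, 1)   # ordinary OWA
quadratic = gowa(scores, w, 2)   # p = 2 emphasises the larger arguments
```

By the power-mean inequality, raising p never decreases the aggregate, which is how p tunes the operator between and-like and or-like attitudes.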
Directory of Open Access Journals (Sweden)
José Roberto Soares Scolforo
2004-06-01
Full Text Available This research aimed at studying the behavior of volume, dry weight, oil content and fence-post quantity per diametric class of candeia; at defining its stack factor, with and without diametric class control; and at determining equations for estimating the volume of the main stem and branches, and the dry weight, oil content and fence-post quantity of the whole tree, trunk, branches and leaves. Data were obtained from a forest inventory carried out in a native candeia forest located in Aiuruoca county, Minas Gerais state, Brazil. Tree volume was calculated with the Huber formula, and the oil extraction methods employed were the solvent method and the vapor-hauling method. For estimating volume, dry weight, oil content and fence-post number, traditional double-entry models were used. The oil weight in 1 cubic meter of wood of small dimensions (trees between 5 and 10 cm) is around 6 kg, while the oil content of the largest trees (between 40 and 45 cm) is around 11 kg. The same tendency is observed for the wood volume without bark and for the volume of piled-up wood, although in these cases the magnitudes differ. The oil content of candeia trees presents the following behavior: in the stem and branches up to 3 cm of diameter with bark, it varies from 1.02% to 1.37%, respectively, for plants with diameter between 5 and 10 cm and between 40 and 45 cm; in the branches with less than 3 cm of diameter with bark, it varies from 0.33% to 0.65%, respectively, for plants between 5 and 10 cm and between 40 and 45 cm of diameter; in the leaves it varies from 0.28% to 0.77%, respectively, for plants between 5-10 cm and between 40-45 cm of diameter. The average stack factor is 1.9087 and diminishes as the diameter class increases. The best model for estimating oil content, dry weight, fence-post quantity and volume is the logarithmic form of the Schumacher-Hall model.
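The logarithmic Schumacher-Hall form named above, ln(V) = b0 + b1·ln(D) + b2·ln(H), can be fitted by ordinary least squares on log-transformed data. Below is a self-contained sketch with synthetic trees; all numbers are invented for illustration.

```python
import math

def fit_schumacher_hall(diameters, heights, volumes):
    """Least-squares fit of ln(V) = b0 + b1*ln(D) + b2*ln(H)
    (the logarithmic Schumacher-Hall form; data below are synthetic)."""
    rows = [[1.0, math.log(d), math.log(h)] for d, h in zip(diameters, heights)]
    y = [math.log(v) for v in volumes]
    # Normal equations A^T A b = A^T y, solved by Gaussian elimination.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    for i in range(3):  # forward elimination (ATA is positive definite)
        piv = ata[i][i]
        for k in range(i + 1, 3):
            f = ata[k][i] / piv
            for j in range(3):
                ata[k][j] -= f * ata[i][j]
            aty[k] -= f * aty[i]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        b[i] = (aty[i] - sum(ata[i][j] * b[j] for j in range(i + 1, 3))) / ata[i][i]
    return b

# Synthetic trees generated from known coefficients are recovered by the fit.
b_true = [-9.0, 1.8, 1.1]
D = [6.0, 12.0, 20.0, 30.0, 42.0]   # diameters (cm), invented
H = [4.0, 7.0, 10.0, 12.0, 14.0]    # heights (m), invented
V = [math.exp(b_true[0] + b_true[1] * math.log(d) + b_true[2] * math.log(h))
     for d, h in zip(D, H)]
b_fit = fit_schumacher_hall(D, H, V)
```

On real inventory data the fit would not be exact, and a correction for the log-transformation bias is usually applied before back-transforming predictions.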
Institute of Scientific and Technical Information of China (English)
T. Wang; B.Pustal; M. Abondano; T. Grimmig; A. B(u)hrig-Polaczek; M. Wu; A. Ludwig
2005-01-01
The cooling channel process is a rheocasting method by which prematerial with a globular microstructure can be produced to fit the thixocasting process. A three-phase model based on a volume averaging approach is proposed to simulate the cooling channel process for A356 aluminum alloy. The three phases are liquid, solid and air, respectively, and are treated as separate and interacting continua sharing a single pressure field. The mass, momentum and enthalpy transport equations for each phase are solved. The developed model can predict the evolution of the liquid, solid and air fractions as well as the distribution of grain density and grain size. The effect of pouring temperature on grain density, grain size and solid fraction is analyzed in detail.
Heyes, D. M.; Smith, E. R.; Dini, D.; Zaki, T. A.
2011-07-01
It is shown analytically that the method of planes (MOP) [Todd, Evans, and Daivis, Phys. Rev. E 52, 1627 (1995)] and volume averaging (VA) [Cormier, Rickman, and Delph, J. Appl. Phys. 89, 99 (2001), 10.1063/1.1328406] formulas for the local pressure tensor, Pαy(y), where α ≡ x, y, or z, are mathematically identical. In the case of VA, the sampling volume is taken to be an infinitely thin parallelepiped, with an infinite lateral extent. This limit is shown to yield the MOP expression. The treatment is extended to include the condition of mechanical equilibrium resulting from an imposed force field. This analytical development is followed by numerical simulations. The equivalence of these two methods is demonstrated in the context of non-equilibrium molecular dynamics (NEMD) simulations of boundary-driven shear flow. A wall of tethered atoms is constrained to impose a normal load and a velocity profile on the entrained central layer. The VA formula can be used to compute all components of Pαβ(y), which offers an advantage in calculating, for example, Pxx(y) for nano-scale pressure-driven flows in the x-direction, where deviations from the classical Poiseuille flow solution can occur.
Fatigue strength of Al7075 notched plates based on the local SED averaged over a control volume
Berto, Filippo; Lazzarin, Paolo
2014-01-01
When pointed V-notches weaken structural components, local stresses are singular and their intensities are expressed in terms of the notch stress intensity factors (NSIFs). These parameters have been widely used for fatigue assessments of welded structures under high cycle fatigue and of sharp notches in plates made of brittle materials subjected to static loading. Fine meshes are required to capture the asymptotic stress distributions ahead of the notch tip and evaluate the relevant NSIFs. On the other hand, when the aim is to determine the local Strain Energy Density (SED) averaged in a control volume embracing the point of stress singularity, refined meshes are not necessary at all. The SED can be evaluated from nodal displacements, and regular coarse meshes provide accurate values for the averaged local SED. In the present contribution, the link between the SED and the NSIFs is discussed by considering some typical welded joints and sharp V-notches. The procedure based on the SED has also been proved useful for determining theoretical stress concentration factors of blunt notches and holes. In the second part of this work an application of the strain energy density to the fatigue assessment of Al7075 notched plates is presented. The experimental data are taken from the recent literature and refer to notched specimens subjected to different shot peening treatments aimed at increasing the notch fatigue strength with respect to the parent material.
Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.
2012-04-01
We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large-scale dispersive processes which are embedded in a pore-scale advection-diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scale. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainty giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called "mobile-immobile" conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high-velocity region (mobile zone), while convective effects are neglected in a low-velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by first-order kinetics. An extension of these ideas is the two-equation "mobile-mobile" model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two-region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection-diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell
Li, Ri; Zhou, Liming; Wang, Jian; Li, Yan
2017-02-01
Based on solidification theory and a volume-averaged multiphase solidification model, the solidification process of NH4Cl-70 pct H2O was numerically simulated and experimentally verified. Although researchers have investigated the solidification process of NH4Cl-70 pct H2O, most existing studies have been focused on analysis of a single phenomenon, such as the formation of channel segregation, convection types, and the formation of grains. Based on prior studies, by combining numerical simulation and experimental investigation, all phenomena of the entire computational domain of the solidification process of an NH4Cl aqueous solution were comprehensively investigated for the first time in this study. In particular, the sedimentation of equiaxed grains in the ingot and the induced convection were reproduced. In addition, the formation mechanism of segregation was studied in depth. The calculation demonstrated that the equiaxed grains settled from the wall of the mold and gradually aggregated at the bottom of the mold; when the volume fraction reached a critical value, the columnar grains stopped growing, thus completing the columnar-to-equiaxed transition (CET). Because of solute partitioning, negative segregation occurred at the bottom region of the ingot concentrated with grains, whereas a wide range of positive segregation occurred in the unsolidified, upper part of the ingot. Experimental investigation indicated that the predicted results of the sedimentation of the equiaxed grains in the ingot and the convection types agreed well with the experimental results, thus revealing that the sedimentation of solid phase and convection in the solidification process are the key factors responsible for macrosegregation.
Gratton, Steven
2010-01-01
In this paper we present a path integral formulation of stochastic inflation, in which volume weighting can easily be implemented. With an in-depth study of inflation in a quartic potential, we investigate how the inflaton evolves and how inflation typically ends both with and without volume weighting. Perhaps unexpectedly, complex histories sometimes emerge with volume weighting. The reward for this excursion into the complex plane is an insight into how volume-weighted inflation both loses memory of initial conditions and ends via slow-roll. The slow-roll end of inflation mitigates certain "Youngness Paradox"-type criticisms of the volume-weighted paradigm. Thus it is perhaps time to rehabilitate proper time volume weighting as a viable measure for answering at least some interesting cosmological questions.
Energy Technology Data Exchange (ETDEWEB)
Regini, F., E-mail: francesco.regini@yahoo.it [Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London (United Kingdom); Department of Experimental and Clinical Biomedical Sciences – Radiodiagnostic Unit 2 – University of Florence – Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Gourtsoyianni, S., E-mail: sofia.gourtsoyianni@gstt.nhs.uk [Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London (United Kingdom); Division of Imaging Sciences and Biomedical Engineering, King's College London, King's Health Partners, St. Thomas' Hospital, London (United Kingdom); Cardoso De Melo, R., E-mail: rafaelgoiein@gmail.com [Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London (United Kingdom); Charles-Edwards, G.D., E-mail: geoff.charles-edwards@kcl.ac.uk [Division of Imaging Sciences and Biomedical Engineering, King's College London, King's Health Partners, St. Thomas' Hospital, London (United Kingdom); Medical Physics, Guy's and St Thomas' NHS Foundation Trust, London (United Kingdom); Griffin, N., E-mail: nyree.griffin@gatt.nhs.uk [Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London (United Kingdom); Division of Imaging Sciences and Biomedical Engineering, King's College London, King's Health Partners, St. Thomas' Hospital, London (United Kingdom); Parikh, J., E-mail: jyoti.parikh@gstt.nhs.uk [Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London (United Kingdom); Rottenberg, G., E-mail: giles.rottenberg@gstt.nhs.uk [Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London (United Kingdom); and others
2014-05-15
Purpose: To compare the rectal tumour gross target volume (GTV) delineated on T2 weighted (T2W MRI) and diffusion weighted MRI (DWI) images by two different observers and to assess if agreement is improved by DWI. Material and methods: 27 consecutive patients (15 male, range 27.1–88.8 years, mean 66.9 years) underwent 1.5 T MRI prior to chemoradiation (45 Gy in 25 fractions; oral capecitabine 850 mg/m{sup 2}), including axial T2W MRI (TR = 6600 ms, TE = 90 ms) and DWI (TR = 3000 ms, TE = 77 ms, b = 0, 100, 800 s/mm{sup 2}). 3D tumour volume (cm{sup 3}) was measured by volume of interest (VOI) analysis by two independent readers for the T2W MRI and b800 DWI axial images, and the T2W MRI and DWI volumes were compared using the Mann–Whitney test. Observer agreement was assessed using Bland–Altman statistics. Significance was at 5%. Results: Artefacts precluded DWI analysis in 1 patient. In the remaining 26 patients evaluated, the median (range) T2W MRI and DWI (b = 800 s/mm{sup 2}) 3D GTV in cm{sup 3} were 33.97 (4.44–199.8) and 31.38 (2.43–228), respectively, for Reader One and 43.78 (7.57–267.7) and 42.45 (3.68–251) for Reader Two. T2W MRI GTVs were slightly larger but not statistically different from DWI volumes: p = 0.52 Reader One; p = 0.92 Reader Two. The interobserver mean difference (95% limits of agreement) for T2W MRI and DWI GTVs was −9.84 (−54.96 to +35.28) cm{sup 3} and −14.79 (−54.01 to +24.43) cm{sup 3}, respectively. Conclusion: Smaller DWI volumes may result from better tumour conspicuity, but overall observer agreement is not improved by DWI.
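The Bland–Altman statistics quoted above (an interobserver mean difference with 95% limits of agreement, i.e. mean ± 1.96 SD of the paired differences) can be sketched as follows; the reader volumes below are hypothetical, not the study's data:

```python
# Bland-Altman agreement statistics for two readers' tumour volumes (cm^3).
# The volume values below are hypothetical, for illustration only.
def bland_altman(reader1, reader2):
    diffs = [a - b for a, b in zip(reader1, reader2)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = (sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)) ** 0.5
    # 95% limits of agreement: mean difference +/- 1.96 SD
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

r1 = [33.9, 45.2, 12.7, 88.1]   # hypothetical Reader One volumes
r2 = [43.8, 50.1, 15.0, 95.4]   # hypothetical Reader Two volumes
mean_d, lo, hi = bland_altman(r1, r2)
print(round(mean_d, 2), round(lo, 2), round(hi, 2))
```

A negative mean difference, as reported in the abstract, simply indicates that one reader systematically delineated larger volumes than the other.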
Haptic Illusions: Biases in the perception of volume, weight and roughness
Kahrimanovic, M.
2011-01-01
The present thesis investigated the perception of volume, weight and roughness when exploring 3-dimensional objects by touch and/or vision, and examined whether these percepts were influenced by specific object properties (e.g. shape, material). In perception research, the term bias has been used to
High versus standard volume enteral feeds to promote growth in preterm or low birth weight infants.
Abiramalatha, Thangaraj; Thomas, Niranjan; Gupta, Vijay; Viswanathan, Anand; McGuire, William
2017-09-12
Breast milk alone, given at standard recommended volumes (150 to 180 mL/kg/d), is not adequate to meet the protein, energy, and other nutrient requirements of growing preterm or low birth weight infants. One strategy that may be used to address these potential nutrient deficits is to give infants enteral feeds in excess of 200 mL/kg/d ('high-volume' feeds). This approach may increase nutrient uptake and growth rates, but there are concerns that high-volume enteral feeds may cause feed intolerance, gastro-oesophageal reflux, aspiration pneumonia, necrotising enterocolitis, or complications related to fluid overload, including patent ductus arteriosus and bronchopulmonary dysplasia. To assess the effect on growth and safety of feeding preterm or low birth weight infants with high (> 200 mL/kg/d) versus standard (≤ 200 mL/kg/d) volume of enteral feeds. Infants in intervention and control groups should have received the same type of milk (breast milk, formula, or both), the same fortification or micronutrient supplements, and the same enteral feeding regimen (bolus, continuous) and rate of feed volume advancement. To conduct subgroup analyses based on type of milk (breast milk vs formula), gestational age or birth weight category of included infants (very preterm or VLBW vs preterm or LBW), presence of intrauterine growth restriction (using birth weight relative to the reference population as a surrogate), and income level of the country in which the trial was conducted (low or middle income vs high income) (see 'Subgroup analysis and investigation of heterogeneity'). We used the Cochrane Neonatal standard search strategy, which included searches of the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 2) in the Cochrane Library; MEDLINE (1946 to November 2016); Embase (1974 to November 2016); and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to November 2016), as well as conference proceedings, previous reviews, and trial
Changes in subcutaneous fat cell volume and insulin sensitivity after weight loss.
Andersson, Daniel P; Eriksson Hogling, Daniel; Thorell, Anders; Toft, Eva; Qvisth, Veronica; Näslund, Erik; Thörne, Anders; Wirén, Mikael; Löfgren, Patrik; Hoffstedt, Johan; Dahlman, Ingrid; Mejhert, Niklas; Rydén, Mikael; Arner, Erik; Arner, Peter
2014-07-01
Large subcutaneous fat cells associate with insulin resistance and high risk of developing type 2 diabetes. We investigated if changes in fat cell volume and fat mass correlate with improvements in the metabolic risk profile after bariatric surgery in obese patients. Fat cell volume and number were measured in abdominal subcutaneous adipose tissue in 62 obese women before and 2 years after Roux-en-Y gastric bypass (RYGB). Regional body fat mass by dual-energy X-ray absorptiometry; insulin sensitivity by hyperinsulinemic-euglycemic clamp; and plasma glucose, insulin, and lipid profile were assessed. RYGB decreased body weight by 33%, which was accompanied by decreased adipocyte volume but not number. Fat mass in the measured regions decreased and all metabolic parameters were improved after RYGB (P fat cell size correlated strongly with improved insulin sensitivity (P = 0.0057), regional changes in fat mass did not, except for a weak correlation between changes in visceral fat mass and insulin sensitivity and triglycerides. The curvilinear relationship between fat cell size and fat mass was altered after weight loss (P = 0.03). After bariatric surgery in obese women, a reduction in subcutaneous fat cell volume associates more strongly with improvement of insulin sensitivity than fat mass reduction per se. An altered relationship between adipocyte size and fat mass may be important for improving insulin sensitivity after weight loss. Fat cell size reduction could constitute a target to improve insulin sensitivity. © 2014 by the American Diabetes Association.
Guido, Gianluigi; Piper, Luigi; Prete, M Irene; Mileti, Antonio; Fonda, Marco
2016-08-01
Consumers tend to misunderstand the physical value of cash money by adopting improper anchors for their judgments (e.g., banknote size and shape, currency denominations, etc.). In a pilot study carried out on a sample of 242 participants (n = 116 men; M age = 29.6 years, SD = 10.8), a quantity distortion effect was demonstrated by evaluating consumers' misperceptions of different monetary quantities, either in terms of volume or weight, using banknotes of the same denomination (€50). A threshold value was found, for both volume (€876,324) and weight (€371,779), above (below) which consumers tend to overrate (underrate) monetary amounts. The theoretical and operative implications are discussed.
Yalanis, Georgia C.; Nag, Shayoni; Georgek, Jakob R.; Cooney, Carisa M.; Manahan, Michele A.; Rosson, Gedge D.
2015-01-01
Introduction: Impaired vascular perfusion in tissue expander (TE) breast reconstruction leads to mastectomy skin necrosis. We investigated factors and costs associated with skin necrosis in postmastectomy breast reconstruction. Methods: Retrospective review of 169 women with immediate TE placement following mastectomy between May 1, 2009 and May 31, 2013 was performed. Patient demographics, comorbidities, intraoperative, and postoperative outcomes were collected. Logistic regression analysis on individual variables was performed to determine the effects of tissue expander fill volume and mastectomy specimen weight on skin necrosis. Billing data were obtained to determine the financial burden associated with necrosis. Results: This study included 253 breast reconstructions with immediate TE placement in 169 women. Skin necrosis occurred in 20 flaps for 15 patients (8.9%). Patients with hypertension had 8 times higher odds of skin necrosis [odds ratio (OR), 8.10, P 300 cm3 had 10 times higher odds of skin necrosis (OR, 10.66, P = 0.010). Volumes >400 cm3 had 15 times higher odds of skin necrosis (OR, 15.56, P = 0.002). Mastectomy specimen weight was correlated with skin necrosis. Specimens >500 g had 10 times higher odds of necrosis and specimens >1000 g had 18 times higher odds of necrosis (OR, 10.03 and OR, 18.43; P = 0.003 and P Mastectomy skin necrosis was associated with a 50% increased inpatient charge. Conclusion: Mastectomy flap necrosis is associated with HTN, larger TE volumes, and larger mastectomy specimen weights, resulting in increased inpatient charges. Conservative TE volumes should be considered for patients with hypertension and larger mastectomy specimens. PMID:26301139
Is resected stomach volume related to weight loss after laparoscopic sleeve gastrectomy?
Singh, Jagat Pal; Tantia, Om; Chaudhuri, Tamonas; Khanna, Shashi; Patil, Prateek H
2014-10-01
Laparoscopic sleeve gastrectomy (LSG) was initially performed as the first stage of biliopancreatic diversion with duodenal switch for the treatment of super-obese or high-risk obese patients but is now most commonly performed as a standalone operation. The aim of this prospective study was to investigate outcomes after LSG according to resected stomach volume. Between May 2011 and April 2013, LSG was performed in 102 consecutive patients undergoing bariatric surgery. Two patients were excluded, and data from the remaining 100 patients were analyzed in this study. Patients were divided into three groups according to the following resected stomach volume: 700-1,200 mL (group A, n = 21), 1,200-1,700 mL (group B, n = 62), and >1,700 mL (group C, n = 17). Mean values were compared among the groups by analysis of variance. The mean percentage excess body weight loss (%EBWL) at 3, 6, 12, and 24 months after surgery was 37.68 ± 10.97, 50.97 ± 13.59, 62.35 ± 11.31, and 67.59 ± 9.02 %, respectively. There were no significant differences in mean %EBWL among the three groups. Resected stomach volume was greater in patients with higher preoperative body mass index and was positively associated with resected stomach weight. Mean %EBWL after LSG was not significantly different among three groups of patients divided according to resected stomach volume. Resected stomach volume was significantly greater in patients with higher preoperative body mass index.
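The abstract reports outcomes as percentage excess body weight loss (%EBWL) without defining it; a minimal sketch using the conventional definition — weight lost divided by the preoperative excess over ideal weight — with hypothetical numbers:

```python
def percent_ebwl(preop_weight, current_weight, ideal_weight):
    """Percentage of excess body weight lost (conventional definition,
    not spelled out in the abstract): 100 * lost / (preop - ideal)."""
    excess = preop_weight - ideal_weight
    return 100.0 * (preop_weight - current_weight) / excess

# Hypothetical patient: 120 kg preoperatively, 95 kg at follow-up,
# 70 kg ideal body weight.
print(round(percent_ebwl(120, 95, 70), 1))  # 50.0
```

Different ideal-weight conventions (e.g. BMI-25-based) shift %EBWL values, which is one reason studies report the formula they use.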
Study of relationship between volume of distribution and body weight application to amikacin.
Rughoo, L; Bourguignon, L; Maire, P; Ducher, M
2014-06-01
Amikacin use is difficult because of its narrow therapeutic index and its pharmacokinetic variability, which is not well characterized. To adjust amikacin dosing, the physician assumes a linear, continuous relation between the volume of distribution and body weight. The objective of our study was to evaluate the relationship between the volume of distribution (Vd) and body weight (BW) using a nonparametric statistical analysis of dependence, the so-called Z method. Retrospective population pharmacokinetic study and statistical analysis of 872 patients receiving intravenous amikacin. The volume of distribution was modelled using the Non Parametric Adaptive Grid (NPAG) algorithm for a two-compartment model with intravenous infusion. The Z coefficient was used to evaluate the relationship between Vd and BW. For the 872 patients (mean age 73 ± 17 years; 53% female, 47% male), the nonparametric Z analysis showed a scattered linkage between Vd and BW. For the whole population, the relationship between Vd and BW was not linear (regression analysis). Z analysis demonstrated a relationship between Vd and BW for only 80% of patients; for these patients, regression analysis gave a significant fit to a linear model (r = 0.47, p < 0.001). In the whole studied population there is no continuous, linear relationship between the Vd estimated by NPAG and BW. These results underline the difficulty of adjusting amikacin doses from BW information alone.
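The regression step referred to above (a linear fit of Vd against BW with a Pearson correlation coefficient) can be sketched as ordinary least squares; the paired values below are hypothetical, not the study's data:

```python
# Ordinary least-squares fit of Vd (L) against body weight (kg), with
# Pearson r. The paired values below are hypothetical illustrations.
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    r = sxy / (sxx * syy) ** 0.5          # Pearson correlation coefficient
    return slope, my - slope * mx, r

bw = [50.0, 60.0, 70.0, 80.0, 90.0]       # hypothetical body weights (kg)
vd = [12.0, 15.5, 17.0, 21.0, 22.5]       # hypothetical Vd estimates (L)
slope, intercept, r = linear_fit(bw, vd)
print(round(slope, 3), round(r, 3))
```

The study's point is precisely that such a fit is only adequate for a subset of patients (r = 0.47 in 80% of them), so a single linear Vd-vs-BW rule is a poor population-wide dosing assumption.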
Energy Technology Data Exchange (ETDEWEB)
Oka, Kiyoshi; Yakushiji, Toshitake; Sato, Hiro; Mizuta, Hiroshi [Kumamoto University, Department of Orthopaedic and Neuro-Musculoskeletal Surgery, Faculty of Medical and Pharmaceutical Sciences, Kumamoto (Japan); Hirai, Toshinori; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical and Pharmaceutical Sciences, Kumamoto (Japan)
2010-02-15
The objective of this study was to evaluate whether the average apparent diffusion coefficient (ADC) or the minimum ADC is more useful for evaluating the chemotherapeutic response of osteosarcoma. Twenty-two patients with osteosarcoma were examined in this study. Diffusion-weighted (DW) magnetic resonance (MR) imaging was performed for all patients before and after chemotherapy. Pre- and post-chemotherapy values were obtained for both the average and the minimum ADC, and the pre-chemotherapy values of each were compared with the post-chemotherapy values. In addition, the ADC ratios ([ADC{sub post} - ADC{sub pre}] / ADC{sub pre}) were calculated using the average ADC and the minimum ADC. The twenty-two patients were divided into two groups, those with a good response to chemotherapy ({>=} 90% tumor necrosis, n = 7) and those with a poor response (< 90% tumor necrosis, n = 15), and the average ADC ratio and the minimum ADC ratio of the two groups were compared. With both the average ADC and the minimum ADC, post-chemotherapy values were significantly higher than pre-chemotherapy values (P < 0.05). The patients with a good response had a significantly higher minimum ADC ratio than those with a poor response (1.01 {+-} 0.22 and 0.55 {+-} 0.29 respectively, P < 0.05). However, with regard to the average ADC ratio, no significant difference was observed between the two groups (0.66 {+-} 0.18 and 0.46 {+-} 0.31 respectively, P = 0.19). The minimum ADC is useful for evaluating the chemotherapeutic response of osteosarcoma. (orig.)
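The ADC ratio defined in the abstract, [ADC_post - ADC_pre] / ADC_pre, is a simple relative change; a sketch with hypothetical ADC values (not the study's data):

```python
def adc_ratio(adc_pre, adc_post):
    """ADC ratio as defined in the abstract: (ADC_post - ADC_pre) / ADC_pre."""
    return (adc_post - adc_pre) / adc_pre

# Hypothetical minimum-ADC values (x10^-3 mm^2/s) before and after
# chemotherapy; a larger rise is suggestive of more tumor necrosis.
good = adc_ratio(0.60, 1.20)
poor = adc_ratio(0.60, 0.90)
print(good, poor)
```

With the study's group means (minimum ADC ratio 1.01 vs 0.55), the good responders roughly doubled their minimum ADC while poor responders rose by about half.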
A Volume-Weighting Cloud-in-Cell Model for Particle Simulation of Axially Symmetric Plasmas
Institute of Scientific and Technical Information of China (English)
李永东; 何锋; 刘纯亮
2005-01-01
A volume-weighting cloud-in-cell (VW-CIC) model is developed to implement the particle-in-cell (PIC) simulation in axially symmetric systems. This model gives first-order accuracy in the cylindrical system, and it is incorporated into a PIC code. A planar diode with a finite-radius circular emitter is simulated with the code. The simulation results show that the VW-CIC model has better accuracy and lower noise than the conventional area-weighting cloud-in-cell (AW-CIC) model, especially at points near the axis. The two-dimensional (2-D) space-charge-limited current density obtained from the VW-CIC model is in better agreement with Lau's analytical result. This model is more suitable for 2.5-D PIC simulation of axially symmetric plasmas.
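The abstract does not spell out the VW-CIC weighting formulas. A minimal sketch of one common way to realize volume weighting on a uniform (r, z) grid is to interpolate linearly in the volume-like coordinate v = r², so each node receives charge in proportion to the cylindrical shell volume it represents; this particular scheme, and the grid and particle values, are assumptions for illustration, not taken from the paper:

```python
# Sketch of volume-weighted charge deposition onto a uniform (r, z) grid.
# Assumption (not from the paper): radial weights are linear in v = r^2,
# the volume-like coordinate, while axial weights are linear in z.
def deposit(q, r, z, dr, dz, rho):
    ir, iz = int(r // dr), int(z // dz)
    v0, v1 = (ir * dr) ** 2, ((ir + 1) * dr) ** 2
    fr = (r * r - v0) / (v1 - v0)        # radial fraction in v = r^2
    fz = (z - iz * dz) / dz              # axial fraction, linear
    rho[ir][iz]         += q * (1 - fr) * (1 - fz)
    rho[ir + 1][iz]     += q * fr * (1 - fz)
    rho[ir][iz + 1]     += q * (1 - fr) * fz
    rho[ir + 1][iz + 1] += q * fr * fz

rho = [[0.0] * 4 for _ in range(4)]
deposit(1.0, r=0.15, z=0.25, dr=0.1, dz=0.1, rho=rho)
total = sum(map(sum, rho))
print(round(total, 12))  # charge is conserved: 1.0
```

Because the weights sum to one, total charge is conserved; the difference from area weighting is only in how the charge is split between the inner and outer radial nodes, which matters most near the axis.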
Strayer, Richard F.; Hummerick, Mary E.; Richards, Jeffrey T.; McCoy, LaShelle E.; Roberts, Michael S.; Wheeler, Raymond M.
2011-01-01
The fate of space-generated solid wastes, including trash, for future missions is under consideration by NASA. Several potential treatment options are under consideration and active technology development. Potential fates for space-generated solid wastes are: storage without treatment; storage after treatment(s) including volume reduction, water recovery, sterilization, and recovery plus recycling of waste materials. Recycling might be important for partial or full closure scenarios because of the prohibitive costs associated with resupply of consumable materials. For this study, we determined the composition of trash returned from four recent STS missions. The trash material was 'Volume F' trash and other trash, in large zip-lock bags, that accompanied the Volume F trash. This is the first of two submitted papers on these wastes. This one will cover trash content, weight and water content. The other will report on the microbial characterization of this trash. STS trash was usually made available within 2 days of landing at KSC. The Volume F bag was weighed, opened, and the contents were catalogued and placed into one of the following categories: food waste (and containers), drink containers, personal hygiene items - including EVA maximum absorbent garments (MAGs) and elbow packs (daily toilet wipes, etc.), paper, and packaging materials - plastic film and duct tape. Trash generation rates for the four STS missions: total wet trash was 0.602 plus or minus 0.089 kg(sub wet) crew(sup -1) d(sup -1), containing about 25% water at 0.154 plus or minus 0.030 kg(sub water) crew(sup -1) d(sup -1) (avg plus or minus stdev). Cataloguing by category: personal hygiene wastes accounted for 50% of the total trash and 69% of the total water for the four missions; drink items were 16% of total weight and 16% water; food wastes were 22% of total weight and 15% of the water; office waste and plastic film were 2% and 11% of the total waste and did not contain any water. The results can be
Directory of Open Access Journals (Sweden)
Seyed Morteza Tayebi
Full Text Available Objective(s): The purpose of the present study was to evaluate the effect of Ramadan fasting and weight-lifting training on the plasma volume, glucose, and lipid profile of male weight-lifters. Materials and Methods: Forty male weight-lifters were recruited and divided into 4 groups (n = 10 each): control (C), fasting (F), training (T), and fasting-training (F-T). The T and F-T groups performed weight-lifting technique training and hypertrophy body building (3 sessions/week, 90 min/session). All subjects completed a medical examination and a medical questionnaire to ensure that they were not taking any medication, were free of cardiac, respiratory, renal, and metabolic diseases, and were not using steroids. Blood samples were taken 24 hr before and 24 hr after one month of fasting and weight-lifting exercise, and plasma volume, fasting blood sugar (FBS), lipid profiles, and lipoproteins were analyzed. Results: Body weight decreased and plasma volume increased significantly in the F group (P < 0.05), and a significant reduction was also observed in F-T group body weight (P < 0.01). A significant increase was found in the FBS level of the F group (P < 0.05). The lipid profiles and lipoproteins did not change significantly in the C, F, T, and F-T groups. Conclusion: The effect of Ramadan fasting on body weight and plasma volume may be closely related to the nutritional diet or the biochemical response to fasting.
Yao, Dezhong
2017-02-14
Currently, the average reference is one of the most widely adopted references in EEG and ERP studies. The theoretical assumption is that the integral of the potential over the surface of a volume conductor is zero, so that the average of the scalp potential recordings might approximate the theoretically desired zero reference. However, this zero-integral assumption has been proved only for a spherical surface. In this short communication, three counter-examples are given to show that the potential integral over the surface of a dipole in a volume conductor may not be zero; it depends on the shape of the conductor and the orientation of the dipole. On the one hand, this fact means that the average reference is not a theoretical 'gold standard' reference; on the other hand, it reminds us that the practical accuracy of the average reference is determined not only by the well-known electrode array density and coverage but also, intrinsically, by the head shape. Reference selection thus remains a fundamental problem to be addressed in various EEG and ERP studies.
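The averaging operation itself is simple: at each time sample, the mean across channels is subtracted from every channel, after which the re-referenced channels sum to zero by construction. A minimal sketch with hypothetical data (the paper's point is that this forced zero mean only approximates a true zero reference, with accuracy depending on coverage and head shape):

```python
def average_reference(eeg):
    """Re-reference to the average. eeg: list of channels, each a list of
    samples of equal length. Returns the re-referenced channels."""
    n = len(eeg)
    means = [sum(ch[t] for ch in eeg) / n for t in range(len(eeg[0]))]
    return [[ch[t] - means[t] for t in range(len(ch))] for ch in eeg]

# Three hypothetical channels, two time samples (microvolts).
reref = average_reference([[1.0, 2.0], [3.0, 4.0], [5.0, 9.0]])
col_sums = [sum(ch[t] for ch in reref) for t in (0, 1)]
print(col_sums)  # [0.0, 0.0]
```

The zero column sums are a property of the arithmetic, not evidence that the reference is physically zero, which is exactly the distinction the communication draws.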
Directory of Open Access Journals (Sweden)
Okuda Miyuki
2012-09-01
Full Text Available Abstract Introduction: We were able to treat a patient with acute exacerbation of chronic obstructive pulmonary disease who also suffered from sleep-disordered breathing by using the average volume-assured pressure support mode of a Respironics V60 Ventilator (Philips Respironics, United States). This mode allows a target tidal volume to be set based on automatic changes in inspiratory positive airway pressure, removing the need to change the noninvasive positive pressure ventilation settings during the day and during sleep. The Respironics V60 Ventilator, in the average volume-assured pressure support mode, was attached to our patient and improved and stabilized his sleep-related hypoventilation by automatically adjusting the inspiratory force to within an acceptable range. Case presentation: Our patient was a 74-year-old Japanese man who was hospitalized for treatment due to worsening of dyspnea and hypoxemia. He was diagnosed with acute exacerbation of chronic obstructive pulmonary disease, and full-time biphasic positive airway pressure support ventilation was initiated. Our patient was temporarily provided with portable noninvasive positive pressure ventilation at night-time following an improvement in his condition, but his chronic obstructive pulmonary disease again worsened due to the recurrence of a respiratory infection. During the initial exacerbation, his tidal volume was significantly lower during sleep (378.9 ± 72.9 mL) than while awake (446.5 ± 63.3 mL). A ventilator that maintains ventilation by automatically adjusting the inspiratory force to within an acceptable range was attached in average volume-assured pressure support mode, improving his sleep-related hypoventilation. Polysomnography performed while our patient was on noninvasive positive pressure ventilation revealed obstructive sleep apnea syndrome (apnea-hypopnea index = 14), suggesting that his chronic
A class of the fourth order finite volume Hermite weighted essentially non-oscillatory schemes
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we developed a class of fourth-order accurate finite volume Hermite weighted essentially non-oscillatory (HWENO) schemes based on the work of Qiu and Shu (Computers & Fluids, 34: 642-663 (2005)), with the Total Variation Diminishing Runge-Kutta time discretization method, for two-dimensional hyperbolic conservation laws. The key idea of HWENO is to evolve both the solution and its derivative, which allows Hermite interpolation in the reconstruction phase, resulting in a more compact stencil at the expense of additional work. The main difference between this work and the former one is the procedure used to reconstruct the derivative terms. Compared with the original HWENO schemes of Qiu and Shu, one major advantage of the new HWENO schemes is their robustness in computing problems with strong shocks. Extensive numerical experiments are performed to illustrate the capability of the method.
Energy Technology Data Exchange (ETDEWEB)
Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K [Medical University of South Carolina, Charleston, SC (United States)
2015-06-15
Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA’s Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian’s Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material’s atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly-spaced gantry angles, as well as selected patient plans were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly
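The in-house MATLAB step described above — assigning a volume-averaged TPS dose to each element of a 2-D matrix matching the detector array — is not specified in detail; a rough sketch of the idea (in Python rather than MATLAB, with a hypothetical grid and equal square detector footprints) might look like:

```python
# Sketch: volume-average a fine TPS dose grid over detector elements.
# Assumption (not from the abstract): dose is a 2-D grid and each chamber
# covers a block x block patch of grid points.
def volume_average(dose, block):
    rows, cols = len(dose) // block, len(dose[0]) // block
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            patch = [dose[i * block + a][j * block + b]
                     for a in range(block) for b in range(block)]
            out[i][j] = sum(patch) / len(patch)   # mean dose over the patch
    return out

fine = [[1.0, 1.0, 2.0, 2.0],
        [1.0, 1.0, 2.0, 2.0],
        [3.0, 3.0, 4.0, 4.0],
        [3.0, 3.0, 4.0, 4.0]]
print(volume_average(fine, 2))  # [[1.0, 2.0], [3.0, 4.0]]
```

Comparing this averaged matrix, rather than point doses, with the chamber readings is what removes the volume-averaging mismatch between calculation and measurement.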
Vassiliou, Vassilios S; Wassilew, Katharina; Cameron, Donnie; Heng, Ee Ling; Nyktari, Evangelia; Asimakopoulos, George; de Souza, Anthony; Giri, Shivraman; Pierce, Iain; Jabbour, Andrew; Firmin, David; Frenneaux, Michael; Gatehouse, Peter; Pennell, Dudley J; Prasad, Sanjay K
2017-06-12
Our objectives involved identifying whether repeated averaging at basal and mid left ventricular myocardial levels improves precision and correlation with collagen volume fraction for 11-heartbeat MOLLI T1 mapping versus assessment at a single ventricular level. For assessment of T1 mapping precision, a cohort of 15 healthy volunteers underwent two CMR scans on separate days using an 11-heartbeat MOLLI with a 5(3)3 beat scheme to measure native T1 and a 4(1)3(1)2 beat post-contrast scheme to measure post-contrast T1, allowing calculation of partition coefficient and ECV. To assess correlation of T1 mapping with collagen volume fraction, a separate cohort of ten aortic stenosis patients scheduled to undergo surgery underwent one CMR scan with this 11-heartbeat MOLLI scheme, followed by intraoperative tru-cut myocardial biopsy. Six models of myocardial diffuse fibrosis assessment were established with incremental inclusion of imaging by averaging of the basal and mid-myocardial left ventricular levels, and each model was assessed for precision and correlation with collagen volume fraction. A model using 11-heartbeat MOLLI imaging of two basal and two mid-ventricular level averaged T1 maps provided improved precision (intraclass correlation 0.93 vs 0.84) and correlation with histology (R² = 0.83 vs 0.36) for diffuse fibrosis compared to a single mid-ventricular level alone. ECV was more precise and correlated better than native T1 mapping. T1 mapping sequences with repeated averaging could be considered for applications of 11-heartbeat MOLLI, especially when small changes in native T1/ECV might affect clinical management.
Digital Repository Service at National Institute of Oceanography (India)
Chatterji, A.; Rathod, V.; Parulekar, A.H.
whereas, minimum (30 ml) in higher salinities during summer and post-flood periods. The body weight of the crab was found to be affected by the fluctuations in salinity. During flood period (October-November) average body weight of the crab increas...
Volume weighting the measure of the universe from classical slow-roll expansion
Sloan, David; Silk, Joseph
2016-05-01
One of the most frustrating issues in early-universe cosmology is how to reconcile the vast choice of universes in string theory and in its most plausible high-energy sibling, eternal inflation, which jointly generate the string landscape, with the fine-tuned and hence relatively small number of universes that have undergone a large expansion and can accommodate observers and, in particular, galaxies. We show that such observations are highly favored for any system whereby physical parameters are distributed at a high energy scale, due to the conservation of the Liouville measure and the gauge nature of volume, asymptotically approaching a period of large isotropic expansion characterized by w = -1. Our interpretation predicts that all observational probes for deviations from w = -1 in the foreseeable future are doomed to failure. The purpose of this paper is not to introduce a new measure for the multiverse, but rather to show how what is perhaps the most natural and well-known measure, volume weighting, arises as a consequence of the conservation of the Liouville measure on phase space during the classical slow-roll expansion.
DEFF Research Database (Denmark)
Jensen, Dan B.; Toft, Nils; Cornou, Cécile
2014-01-01
Pigs are known to be particularly sensitive to heat and cold. If the temperature becomes too low, the pigs will grow less efficiently and be more susceptible to diseases such as pneumonia. If the temperature is too high, the pigs will tend to foul the pen, leading to additional risks of infection...... producers and research stations have implemented a shielding to prevent winds from blowing between separate sections of the pig housing buildings. However, according to our search of the literature, no published studies have ever investigated the effectiveness of such shielding.To determine the significance...... of the effects of wind shielding, linear mixed models were fitted to describe the average daily weight gain and feed conversion rate of 1271 groups (14 individuals per group) of purebred Duroc, Yorkshire and Danish Landrace boars, as a function of shielding (yes/no), insert season (winter, spring, summer, autumn...
Davenport, Matthew S; Parikh, Kushal R; Mayo-Smith, William W; Israel, Gary M; Brown, Richard K J; Ellis, James H
2017-03-01
To determine the magnitude of subject-level and population-level cost savings that could be realized by moving from fixed-volume low-osmolality iodinated contrast material administration to an effective weight-based dosing regimen for contrast-enhanced abdominopelvic CT. HIPAA-compliant, institutional review board-exempt retrospective cohort study of 6,737 subjects undergoing contrast-enhanced abdominopelvic CT from 2014 to 2015. Subject height, weight, lean body weight (LBW), and body surface area (BSA) were determined. Twenty-six volume- and weight-based dosing strategies with literature support were compared with a fixed-volume strategy used at the study institution: 125 mL 300 mgI/mL for routine CT, 125 mL 370 mgI/mL for multiphasic CT (single-energy, 120 kVp). The predicted population- and subject-level effects on cost and contrast material utilization were calculated for each strategy and sensitivity analyses were performed. Most subjects underwent routine CT (91% [6,127/6,737]). Converting to lesser-volume higher-concentration contrast material had the greatest effect on cost; a fixed-volume 100 mL 370 mgI/mL strategy resulted in $132,577 in population-level savings with preserved iodine dose at routine CT (37,500 versus 37,000 mgI). All weight-based iodine-content dosing strategies (mgI/kg) with the same maximum contrast material volume (125 mL) were predicted to contribute mean savings compared with the existing fixed-volume algorithm ($4,053-$116,076/strategy in the overall study population, $1-$17/strategy per patient). Similar trends were observed in all sensitivity analyses. Large cost and material savings can be realized at abdominopelvic CT by adopting a weight-based dosing strategy and lowering the maximum volume of administered contrast material. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
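A weight-based iodine-content strategy of the kind compared above (mgI/kg with a fixed maximum administered volume) reduces to a one-line calculation. The 500 mgI/kg default below is an illustrative assumption, not a dose recommended by the study:

```python
def contrast_volume_ml(weight_kg, iodine_dose_mgi_per_kg=500.0,
                       concentration_mgi_per_ml=370.0, max_volume_ml=125.0):
    """Weight-based contrast volume with a fixed maximum.

    Computes volume (mL) delivering iodine_dose_mgi_per_kg of iodine per kg
    of body weight at the given concentration, capped at max_volume_ml.
    Default dose and cap are placeholders for illustration only.
    """
    volume = weight_kg * iodine_dose_mgi_per_kg / concentration_mgi_per_ml
    return min(volume, max_volume_ml)
```

The cap is what drives the population-level savings: heavy patients who would otherwise receive large fixed volumes are limited to the maximum, while lighter patients receive proportionally less.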
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6-tissue-type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, by using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
Silva, Letícia; Barcelar, Jacqueline de Melo; Rattes, Catarina Souza; Sayão, Larissa Bouwman; Reinaux, Cyda Albuquerque; Campos, Shirley L; Brandão, Daniella Cunha; Fregonezi, Guilherme; Aliverti, Andrea; Dornelas de Andrade, Armèle
2015-02-01
The objective of this study was to analyze thoraco-abdominal kinematics in obese children in seated and supine positions during spontaneous quiet breathing. An observational study of pulmonary function and chest wall volume assessed by optoelectronic plethysmography was conducted on 35 children aged 8-12 years that were divided into 2 groups according to weight/height ratio percentiles: there were 18 obese children with percentiles greater than 95 and 17 normal weight children with percentiles of 5-85. Pulmonary function (forced expiratory volume in 1 s (FEV1); forced vital capacity (FVC); and FEV1/FVC ratio), ventilatory pattern, total and compartment chest wall volume variations, and thoraco-abdominal asynchronies were evaluated. Tidal volume was greater in seated position. Pulmonary and abdominal rib cage tidal volume and their percentage contribution to tidal volume were smaller in supine position in both obese and control children, while abdominal tidal volume and its percentage contribution was greater in the supine position only in obese children and not in controls. No statistically significant differences were found between obese and control children and between supine and seated positions regarding thoraco-abdominal asynchronies. We conclude that in obese children thoraco-abdominal kinematics is influenced by supine posture, with an increase of the abdominal and a decreased rib cage contribution to ventilation, suggesting that in this posture areas of hypoventilation can occur in the lung.
Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J
2017-10-01
A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p > 0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
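For retracted-SPME sampling, Fick's first law relates the mass collected on the fiber to the time-weighted average (TWA) gas concentration. The sketch below follows the standard TWA-SPME formulation, C = n·Z / (D·A·t); the symbol names, units, and numerical defaults are assumptions for illustration, not values from the study:

```python
def twa_concentration(n_extracted_ng, z_cm, area_cm2, d_cm2_s, t_s):
    """Time-weighted average concentration (ng/cm^3) for a retracted SPME fiber.

    Fick's first law: C = n * Z / (D * A * t), where
      n_extracted_ng: analyte mass extracted by the fiber (ng),
      z_cm: fiber retraction depth inside the needle (cm),
      area_cm2: cross-sectional area of the needle opening (cm^2),
      d_cm2_s: gas-phase diffusion coefficient of the analyte (cm^2/s),
      t_s: sampling time (s).
    """
    return n_extracted_ng * z_cm / (d_cm2_s * area_cm2 * t_s)
```

Longer sampling times or deeper retraction lower the uptake rate, which is how the 30 min / 5 mm operating point trades sensitivity against fiber saturation.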
Yang, Qingjie; Mao, Weijian
2017-01-01
The poroelastodynamic equations are used to describe the dynamic solid-fluid interaction in the reservoir. To obtain the intrinsic properties of reservoir rocks from geophysical data measured in both laboratory and field, we need an accurate solution of the wave propagation in porous media. At present, the poroelastic wave equations are mostly solved in the time domain, which involves a difficult and complicated time convolution. In order to avoid the issues caused by the time convolution, we propose a frequency-space domain method. The poroelastic wave equations form a linear system in the frequency domain, which easily takes into account the effects of all frequencies on the dispersion and attenuation of seismic waves. A 25-point weighted-averaging finite difference scheme is proposed to discretize the equations. For the finite model, the perfectly matched layer technique is applied at the model boundaries. We validated the proposed algorithm by testing three numerical examples of poroelastic models, which are homogeneous, two-layered and heterogeneous with different fluids, respectively. The testing results are encouraging in the aspects of both computational accuracy and efficiency.
Directory of Open Access Journals (Sweden)
Yao-Ching Wang
Full Text Available Respiratory motion causes uncertainties in tumor edges on either computed tomography (CT) or positron emission tomography (PET) images and causes misalignment when registering PET and CT images. This phenomenon may cause radiation oncologists to delineate tumor volume inaccurately in radiotherapy treatment planning. The purpose of this study was to analyze radiology applications using interpolated average CT (IACT) as attenuation correction (AC) to diminish the occurrence of this scenario. Thirteen non-small cell lung cancer patients were recruited for the present comparison study. Each patient had full-inspiration and full-expiration CT images and free-breathing PET images from an integrated PET/CT scan. IACT-based AC for PET, denoted PET(IACT), was used to reduce the PET/CT misalignment. The standardized uptake value (SUV) correction with a low radiation dose was applied, and its tumor volume delineation was compared to those from HCT/PET(HCT). The misalignment between PET(IACT) and IACT was reduced when compared to the difference between PET(HCT) and HCT. The range of tumor motion was from 4 to 17 mm in the patient cohort. For HCT and PET(HCT), correction was from 72% to 91%, while for IACT and PET(IACT), correction was from 73% to 93% (p < 0.0001). The maximum and minimum differences in SUVmax were 0.18% and 27.27% for PET(HCT) and PET(IACT), respectively. The largest percentage differences in the tumor volumes between HCT/PET and IACT/PET were observed in tumors located in the lowest lobe of the lung. Internal tumor volume defined by functional information using IACT/PET(IACT) fusion images for lung cancer would reduce the inaccuracy of tumor delineation in radiation therapy planning.
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Kristensen, Charlotte Sonne; Baadsgaard, Niels Peter; Toft, Nils
2011-03-01
The aim of this investigation was, through a meta-analysis, to review the published literature concerning the effect of PCV2 vaccination on the average daily weight gain (ADG) and on the mortality rate in pigs from weaning to slaughter. The review was restricted to studies investigating the effect of vaccines against PCV2 published from 2006 to 2008, identified using computerised literature databases. Only studies that met the following criteria were included: commercial vaccines were used, pigs or pens were assigned randomly to vaccination versus control groups in herds naturally infected with PCV2, and vaccinated and non-vaccinated pigs were housed together. Furthermore, it was a requirement that sample size, age at vaccination, and production period were stated. The levels of ADG and mortality rate had to be comparable to those seen in modern intensive swine production. In total, 107 studies were identified; 70 were excluded because they did not fulfil the inclusion criteria and 13 were identical to results published elsewhere. A significant effect of PCV2 vaccination on ADG was found for pigs in all production phases. The largest increase in ADG was found for finishing pigs (41.5 g) and nursery-finishing pigs (33.6 g), with only a 10.6 g increase in the nursery pigs. Mortality rate was significantly reduced for finishing pigs (4.4%) and nursery-finishing pigs (5.4%), but not for nursery pigs (0.25%). Herds negative for PRRS had a significantly larger increase in ADG compared to herds positive for PRRS. The PRRS status had no effect on mortality rate.
Mansouri, Ali; Fadavi, Ali; Mortazavian, Seyed Mohammad Mahdi
2016-05-21
Cumin (Cuminum cyminum Linn.) is valued for its aroma and its medicinal and therapeutic properties. A supervised feedforward artificial neural network (ANN), trained with back-propagation algorithms, was applied to predict fresh weight and volume of Cuminum cyminum L. calli. The Pearson correlation coefficient was used to evaluate input/output dependency of the eleven input parameters. Area, Feret diameter, minor axis length, perimeter and weighted density parameters were chosen as input variables. Different training algorithms, transfer functions, numbers of hidden nodes and training iterations were studied to find the optimum ANN structure. The network with the conjugate gradient Fletcher-Reeves (CGF) algorithm, tangent sigmoid transfer function, 17 hidden nodes and 2000 training epochs was selected as the final ANN model. The final model was able to predict the fresh weight and volume of calli more precisely than multiple linear models. The results were confirmed by R(2) ≥ 0.89, R(i) ≥ 0.94 and T value ≥ 0.86. The results for both volume and fresh weight values showed that almost 90% of data had an acceptable absolute error of ±5%.
Patel, Niraj S; Doycheva, Iliana; Peterson, Michael R; Hooker, Jonathan; Kisselva, Tatiana; Schnabl, Bernd; Seki, Ekihiro; Sirlin, Claude B; Loomba, Rohit
2015-03-01
Little is known about how weight loss affects magnetic resonance imaging (MRI) of liver fat and volume or liver histology in patients with nonalcoholic steatohepatitis (NASH). We measured changes in liver fat and liver volume associated with weight loss by using an advanced MRI method. We analyzed data collected from a previous randomized controlled trial in which 43 adult patients with biopsy-proven NASH underwent clinical evaluation, biochemical tests, and MRI and liver biopsy analyses at the start of the study and after 24 weeks. We compared data between patients who did and did not have at least 5% decrease in body mass index (BMI) during the study period. Ten of 43 patients had at least a 5% decrease in BMI during the study period. These patients had a significant decrease in liver fat, which was based on MRI proton density fat fraction estimates (18.3% ± 7.6% to 13.6% ± 13.6%, P = .03), a relative 25.5% reduction. They also had a significant decrease in liver volume (5.3%). However, no significant changes in levels of alanine aminotransferase or aspartate aminotransferase were observed with weight loss. Thirty-three patients without at least 5% decrease in BMI had insignificant increases in estimated liver fat fraction and liver volume. A reduction in BMI of at least 5% is associated with significant decrease in liver fat and volume in patients with biopsy-proven NASH. These data should be considered in assessing effect size in studies of patients with nonalcoholic fatty liver disease or obesity that use MRI-estimated liver fat and volume as end points. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru
2016-10-11
An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with the first segmented image − mean volume in each subject) / (mean volume in each subject)]. The average percentage change was calculated as the percentage change averaged over the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
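The percentage-change definition quoted above can be written directly in code. This sketch assumes each subject has exactly two same-day scans and, as one plausible reading of the averaging step, takes the absolute percentage change per subject before averaging:

```python
def average_percentage_change(scan1_volumes, scan2_volumes):
    """Repeatability metric for one ROI across subjects.

    Per subject: 100 * (first-scan volume - subject mean) / subject mean,
    with the subject mean taken over the two scans. The absolute value
    before averaging is an assumption, not stated in the abstract.
    Inputs are per-subject ROI volumes (e.g., in ml) from the two scans.
    """
    changes = []
    for v1, v2 in zip(scan1_volumes, scan2_volumes):
        subject_mean = (v1 + v2) / 2.0
        changes.append(abs(100.0 * (v1 - subject_mean) / subject_mean))
    return sum(changes) / len(changes)
```

With two scans this reduces to 50·|v1 − v2| / mean, i.e., half the usual percent difference between the scans.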
White, R R; Capper, J L
2013-12-01
The objective of this study was to assess environmental impact, economic viability, and social acceptability of 3 beef production systems with differing levels of efficiency. A deterministic model of U.S. beef production was used to predict the number of animals required to produce 1 × 10⁹ kg HCW beef. Three production treatments were compared: 1 representing average U.S. production (control), 1 with a 15% increase in ADG, and 1 with a 15% increase in finishing weight (FW). For each treatment, various socioeconomic scenarios were compared to account for uncertainty in producer and consumer behavior. Environmental impact metrics included feed consumption, land use, water use, greenhouse gas emissions (GHGe), and N and P excretion. Feed cost, animal purchase cost, animal sales revenue, and income over costs (IOVC) were used as metrics of economic viability. Willingness to pay (WTP) was used to identify improvements or reductions in social acceptability. When ADG improved, feedstuff consumption, land use, and water use decreased by 6.4%, 3.2%, and 12.3%, respectively, compared with the control. Carbon footprint decreased 11.7% and N and P excretion were reduced by 4% and 13.8%, respectively. When FW improved, decreases were seen in feedstuff consumption (12.1%), water use (9.2%), and land use (15.5%); total GHGe decreased 14.7%; and N and P excretion decreased by 10.1% and 17.2%, compared with the control. Changes in IOVC were dependent on socioeconomic scenario. When the ADG scenario was compared with the control, changes in sector profitability ranged from 51 to 117% (cow-calf), -38 to 157% (stocker), and 37 to 134% (feedlot). When improved FW was compared, changes in cow-calf profit ranged from 67% to 143%, stocker profit ranged from -41% to 155%, and feedlot profit ranged from 37% to 136%. When WTP was based on marketing beef being more efficiently produced, WTP improved by 10%; thus, social acceptability increased. When marketing was based on production
Krpálková, L; Cabrera, V E; Kvapilík, J; Burdych, J; Crump, P
2014-10-01
The objective of this study was to evaluate the associations of variable intensity in rearing dairy heifers on 33 commercial dairy herds, including 23,008 cows and 18,139 heifers, with age at first calving (AFC), average daily weight gain (ADG), and milk yield (MY) level on reproduction traits and profitability. Milk yield during the production period was analyzed relative to reproduction and economic parameters. Data were collected during a 1-yr period (2011). The farms were located in 12 regions in the Czech Republic. The results show that those herds with more intensive rearing periods had lower conception rates among heifers at first and overall services. The differences in those conception rates between the group with the greatest ADG (≥0.800 kg/d) and the group with the least ADG (≤0.699 kg/d) were approximately 10 percentage points in favor of the least ADG. All the evaluated reproduction traits differed between AFC groups. Conception at first and overall services (cows) was greatest in herds with AFC ≥800 d. The shortest days open (105 d) and calving interval (396 d) were found in the middle AFC group (799 to 750 d). The highest number of completed lactations (2.67) was observed in the group with latest AFC (≥800 d). The earliest AFC group (≤749 d) was characterized by the highest depreciation costs per cow at 8,275 Czech crowns (US$414), and the highest culling rate for cows of 41%. The most profitable rearing approach was reflected in the middle AFC (799 to 750 d) and middle ADG (0.799 to 0.700 kg) groups. The highest MY (≥8,500 kg) occurred with the earliest AFC of 780 d. Higher MY led to lower conception rates in cows, but the highest MY group also had the shortest days open (106 d) and a calving interval of 386 d. The same MY group had the highest cow depreciation costs, net profit, and profitability without subsidies of 2.67%. We conclude that achieving low AFC will not always be the most profitable approach, which will depend upon farm
Energy Technology Data Exchange (ETDEWEB)
Alexoff, David L., E-mail: alexoff@bnl.gov; Dewey, Stephen L.; Vaska, Paul; Krishnamoorthy, Srilalan; Ferrieri, Richard; Schueller, Michael; Schlyer, David J.; Fowler, Joanna S.
2011-02-15
Introduction: PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape, while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Methods: Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Results: Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean ± S.D.) escaping the leaf parenchyma were measured to be 59 ± 1.1%, 64 ± 4.4% and 67 ± 1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. Conclusions: The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
Energy Technology Data Exchange (ETDEWEB)
Barraclough, B; Lebron, S [J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL (United States); Li, J; Fan, Qiyong; Liu, C; Yan, G [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)
2015-06-15
Purpose: A novel convolution-based approach has been proposed to address ion chamber (IC) volume averaging effect (VAE) for the commissioning of commercial treatment planning systems (TPS). We investigate the use of various convolution kernels and its impact on the accuracy of beam models. Methods: Our approach simulates the VAE by iteratively convolving the calculated beam profiles with a detector response function (DRF) while optimizing the beam model. At convergence, the convolved profiles match the measured profiles, indicating the calculated profiles match the “true” beam profiles. To validate the approach, beam profiles of an Elekta LINAC were repeatedly collected with ICs of various volumes (CC04, CC13 and SNC 125) to obtain clinically acceptable beam models. The TPS-calculated profiles were convolved externally with the DRF of respective IC. The beam model parameters were reoptimized using Nelder-Mead method by forcing the convolved profiles to match the measured profiles. We evaluated three types of DRFs (Gaussian, Lorentzian, and parabolic) and the impact of kernel dependence on field geometry (depth and field size). The profiles calculated with beam models were compared with SNC EDGE diode-measured profiles. Results: The method was successfully implemented with Pinnacle Scripting and Matlab. The reoptimization converged in ∼10 minutes. For all tested ICs and DRFs, penumbra widths of the TPS-calculated profiles and diode-measured profiles were within 1.0 mm. Gaussian function had the best performance with mean penumbra width difference within 0.5 mm. The use of geometry dependent DRFs showed marginal improvement, reducing the penumbra width differences to less than 0.3 mm. Significant increase in IMRT QA passing rates was achieved with the optimized beam model. Conclusion: The proposed approach significantly improved the accuracy of the TPS beam model. Gaussian functions as the convolution kernel performed consistently better than Lorentzian and
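The core operation of the approach, convolving a calculated profile with a detector response function (DRF), can be sketched with a Gaussian kernel (the best-performing DRF above). The kernel width and edge handling below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def convolve_with_gaussian_drf(profile, step_mm, sigma_mm):
    """Convolve a calculated beam profile with a Gaussian DRF,
    simulating ion-chamber volume averaging.

    profile: 1-D array of dose values sampled every step_mm;
    sigma_mm: Gaussian width, which would be tied to chamber size
    (value is an assumption here, not a fitted kernel).
    Returns a profile of the same length.
    """
    half_width = int(np.ceil(4 * sigma_mm / step_mm))
    x = np.arange(-half_width, half_width + 1) * step_mm
    kernel = np.exp(-0.5 * (x / sigma_mm) ** 2)
    kernel /= kernel.sum()  # unit-area kernel preserves total dose scale
    # Edge padding assumes the profile is flat beyond the scan range
    padded = np.pad(np.asarray(profile, dtype=float), half_width, mode='edge')
    return np.convolve(padded, kernel, mode='valid')
```

In the iterative scheme described above, this forward convolution is applied to the TPS-calculated profile at each optimization step until the convolved result matches the chamber-measured profile.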
Keshvari, Jafar; Heikkilä, Teemu
2011-12-01
Previous studies comparing SAR differences in the heads of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the heads of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies of 900 MHz and 1747 MHz for the Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for the Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models depends strongly on the structure of the antenna and the phone model, which suggests that the type of exposure source is the main parameter to focus on in EMF exposure studies. Although the previous findings regarding the significant role of the anatomy of the head, phone position, frequency, local tissue inhomogeneity and tissue composition in the exposed area on SAR differences were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to real device exposures. The general conclusion is that, from a volume-averaged SAR point of view, no systematic differences between child and adult heads were found.
Energy Technology Data Exchange (ETDEWEB)
Wirestam, R.; Knutsson, L.; Risberg, J.; Boerjesson, S.; Larsson, E.M.; Gustafson, L.; Passant, U.; Staahlberg, F. [Depts. of Medical Radiation Physics, Diagnostic Radiology, Psychiatry, and Psychogeriatrics, Lund Univ, Lund (Sweden)
2007-07-15
Background: Attempts to retrieve absolute values of cerebral blood flow (CBF) by dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI) have typically resulted in overestimations. Purpose: To improve DSC-MRI CBF estimates by calibrating the DSC-MRI-based cerebral blood volume (CBV) with a corresponding T1-weighted (T1W) steady-state (ss) CBV estimate. Material and Methods: 17 volunteers were investigated by DSC-MRI and 133Xe SPECT. Steady-state CBV calculation, assuming no water exchange, was accomplished using signal values from blood and tissue, before and after contrast agent, obtained by T1W spin-echo imaging. Using steady-state and DSC-MRI CBV estimates, a calibration factor K = CBV(ss)/CBV(DSC) was obtained for each individual. Average whole-brain CBF(DSC) was calculated, and the corrected MRI-based CBF estimate was given by CBF(ss) = K × CBF(DSC). Results: Average whole-brain SPECT CBF was 40.1 ± 6.9 ml/min 100 g, while the corresponding uncorrected DSC-MRI-based value was 69.2 ± 13.8 ml/min 100 g. After correction with the calibration factor, a CBF(ss) of 42.7 ± 14.0 ml/min 100 g was obtained. The linear fit to CBF(ss)-versus-CBF(SPECT) data was close to proportionality (R = 0.52). Conclusion: Calibration by steady-state CBV reduced the population average CBF to a reasonable level, and a modest linear correlation with the reference 133Xe SPECT technique was observed. Possible explanations for the limited accuracy are, for example, large-vessel partial-volume effects, low post-contrast signal enhancement in T1W images, and water-exchange effects.
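The calibration described above is simple per-subject arithmetic. A sketch, with hypothetical per-subject CBV values chosen only to land near the abstract's population averages (the function name and example numbers are illustrative, not from the paper):

```python
def calibrate_cbf(cbf_dsc, cbv_dsc, cbv_ss):
    """Scale a DSC-MRI CBF estimate by K = CBV_ss / CBV_dsc,
    giving CBF_ss = K * CBF_dsc."""
    k = cbv_ss / cbv_dsc
    return k * cbf_dsc

# Hypothetical subject: the overestimated DSC CBF of 69.2 ml/min 100 g
# scaled by K ≈ 0.62 lands near the SPECT reference range reported above.
cbf_corrected = calibrate_cbf(69.2, cbv_dsc=6.5, cbv_ss=4.0)
```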
Directory of Open Access Journals (Sweden)
I. Morino
2010-12-01
Column-averaged volume mixing ratios of carbon dioxide and methane retrieved from Greenhouse gases Observing SATellite (GOSAT) Short-Wavelength InfraRed (SWIR) observations (X_{CO2} and X_{CH4}) were compared with reference data obtained by ground-based high-resolution Fourier Transform Spectrometers (g-b FTSs) participating in the Total Carbon Column Observing Network (TCCON).
Through calibrations of the g-b FTSs with airborne in-situ measurements, the uncertainties of X_{CO2} and X_{CH4} associated with the g-b FTS were determined to be 0.8 ppm (~0.2%) and 4 ppb (~0.2%), respectively. The GOSAT products are validated with these calibrated g-b FTS data. Preliminary results are as follows: the GOSAT SWIR X_{CO2} and X_{CH4} (Version 01.xx) are biased low by 8.85 ± 4.75 ppm (2.3 ± 1.2%) and 20.4 ± 18.9 ppb (1.2 ± 1.1%), respectively. The precision of the GOSAT SWIR X_{CO2} and X_{CH4} is considered to be about 1%. The latitudinal distributions of zonal means of the GOSAT SWIR X_{CO2} and X_{CH4} show features similar to those of the g-b FTS data.
Temmerman, Frederik; Ho, Thien Ahn; Vanslembrouck, Ragna; Coudyzer, Walter; Billen, Jaak; Dobbels, Fabienne; van Pelt, Jos; Bammens, Bert; Pirson, Yves; Nevens, Frederik
2015-12-01
Polycystic liver disease (PCLD) can induce malnutrition owing to extensive hepatomegaly, and patients might require liver transplantation. Six months of treatment with the somatostatin analogue lanreotide (120 mg) reduces liver volume. We investigated the efficacy of a lower dose of lanreotide and its effects on nutritional status. We performed an 18-month prospective study at 2 tertiary medical centers in Belgium from January 2011 through August 2012. Fifty-nine patients with symptomatic PCLD were given lanreotide (90 mg, every 4 weeks) for 6 months. Patients with reductions in liver volume of more than 100 mL (responders, primary end point) continued to receive lanreotide (90 mg) for an additional year (18 months total). Nonresponders were offered increased doses, up to 120 mg lanreotide, until 18 months. Liver volume and body composition were measured by computed tomography at baseline and at months 6 and 18. Patients also were assessed by the PCLD-specific complaint assessment at these time points. Fifty-three patients completed the study; 21 patients (40%) were responders. Nineteen of the responders (90%) continued as responders until 18 months. At this time point, they had a mean reduction in absolute liver volume of 430 ± 92 mL. In nonresponders (n = 32), liver volume increased by a mean volume of 120 ± 42 mL at 6 months. However, no further increase was observed after dose escalation in the 24 patients who continued to the 18-month end point. All subjects had decreased scores on all subscales of the PCLD-specific complaint assessment, including better food intake (P = .04). Subjects did not have a mean change in subcutaneous or visceral fat mass, but did have decreases in mean body weight (2 kg) and total muscle mass (1.06 cm²/h²). Subjects also had a significant mean reduction in their level of insulin-like growth factor 1, from 19% below the age-adjusted normal range level at baseline to 50% at 18 months (P = .002). In a prospective study, we
LoMauro, Antonella; Cesareo, Ambra; Agosti, Fiorenza; Tringali, Gabriella; Salvadego, Desy; Grassi, Bruno; Sartorio, Alessandro; Aliverti, Andrea
2016-06-01
The objective of this study was to characterize static and dynamic thoraco-abdominal volumes in obese adolescents and to test the effects of a 3-week multidisciplinary body weight reduction program (MBWRP), entailing an energy-restricted diet, psychological and nutritional counseling, aerobic physical activity, and respiratory muscle endurance training (RMET), on these parameters. Total chest wall (VCW), pulmonary rib cage (VRC,p), abdominal rib cage (VRC,a), and abdominal (VAB) volumes were measured on 11 male adolescents (Tanner stage: 3-5; BMI standard deviation score: >2; age: 15.9 ± 1.3 years; percent body fat: 38.4%) during rest, inspiratory capacity (IC) maneuver, and incremental exercise on a cycle ergometer at baseline and after 3 weeks of MBWRP. At baseline, the progressive increase in tidal volume was achieved by an increase in end-inspiratory VCW (p obese adolescents adopt a thoraco-abdominal operational pattern characterized by abdominal rib cage hyperinflation as a form of lung recruitment during incremental cycle exercise. Additionally, a short period of MBWRP including RMET is associated with improved exercise performance, lung and chest wall volume recruitment, unloading of respiratory muscles, and reduced dyspnea.
Boundedness of a class of weighted Hardy-Steklov averaging operators
Institute of Scientific and Technical Information of China (English)
郑庆玉; 张蕾
2011-01-01
The sufficient and necessary conditions on the weight functions for the boundedness of a class of weighted Hardy-Steklov averaging operators on both Lp and BMO spaces are given, and the corresponding operator norms are obtained. These results provide an important analytical tool for applying Hardy-Steklov averaging operators to the prediction of stock prices.
Energy Technology Data Exchange (ETDEWEB)
Karlo, C., E-mail: christoph.karlo@usz.c [Institute of Diagnostic Radiology, Department of Radiology, University Hospital of Zurich, Raemistrasse 100, 8091 Zurich (Switzerland); Reiner, C.S.; Stolzmann, P. [Institute of Diagnostic Radiology, Department of Radiology, University Hospital of Zurich, Raemistrasse 100, 8091 Zurich (Switzerland); Breitenstein, S. [Department of Visceral Surgery, University Hospital of Zurich (Switzerland); Marincek, B. [Institute of Diagnostic Radiology, Department of Radiology, University Hospital of Zurich, Raemistrasse 100, 8091 Zurich (Switzerland); Weishaupt, D. [Institute for Radiology and Radiodiagnostics, City Hospital Triemli, Zurich (Switzerland); Frauenfelder, T. [Institute of Diagnostic Radiology, Department of Radiology, University Hospital of Zurich, Raemistrasse 100, 8091 Zurich (Switzerland)
2010-07-15
Objective: To compare virtual volume to intraoperative volume and weight measurements of resected liver specimens and to calculate appropriate conversion factors for better correlation. Methods: Preoperative (CT group, n = 30; MRI group, n = 30) and postoperative MRI (n = 60) imaging was performed in 60 patients undergoing partial liver resection. Intraoperative volume and weight of the resected liver specimens were measured. Virtual volume measurements were performed by two readers (R1, R2) using dedicated software. Conversion factors were calculated. Results: Mean intraoperative resection weight/volume: CT: 855 g/852 mL; MRI: 872 g/860 mL. Virtual resection volume: CT: 960 mL (R1), 982 mL (R2); MRI: 1112 mL (R1), 1115 mL (R2). There was a strong positive correlation for both readers between intraoperative and virtual measurements (mean of both readers): CT: R = 0.88 (volume), R = 0.89 (weight); MRI: R = 0.95 (volume), R = 0.92 (weight). Conversion factors: 0.85 (CT), 0.78 (MRI). Conclusion: CT- or MRI-based volumetry of resected liver specimens is accurate and recommended for preoperative planning. A conversion of the result is necessary to improve the correlation between intraoperative and virtual measurements. We found 0.85 for CT-based and 0.78 for MRI-based volumetry to be the most appropriate conversion factors.
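Applying the reported conversion factors is a one-line scaling; a sketch using the abstract's own factors (0.85 for CT, 0.78 for MRI), with an illustrative function name:

```python
def virtual_to_expected(virtual_volume_ml, modality):
    """Scale a software-derived resection volume by the
    modality-specific conversion factor from the abstract."""
    factors = {"CT": 0.85, "MRI": 0.78}
    return virtual_volume_ml * factors[modality]

# Reader 1's mean virtual CT volume from the abstract:
converted = virtual_to_expected(960, "CT")  # 816.0 mL, closer to the 852 mL intraoperative mean
```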
Milojević, Vladimir S.; Ilić-Stojanović, Snežana; id_orcid 0000-0003-2416-8281; Nikolić, Ljubiša; Nikolić, Vesna; Stamenković, Jakov; Stojiljković, Dragan
2013-01-01
In this study, the synthesis of sodium poly(acrylate) was performed by polymerization of acrylic acid in water solution with three different contents of potassium persulphate as an initiator. The obtained polymers were characterized by HPLC and GPC analyses in order to determine the purity and average molar mass of the poly(acrylic acid). In order to investigate the influence of sodium poly(acrylate) as part of a carbonate/zeolite detergent builder system, secondary washing characteristics...
Uematsu, Hidemasa; Maeda, Masayuki
2006-01-01
Perfusion-weighted magnetic resonance (MR) imaging using contrast agents plays a key role in characterizing tumors of the brain. We have shown that double-echo perfusion-weighted MR imaging (DEPWI) is potentially useful in assessing brain tumors. Quantitative indices, such as tumor blood volume, are obtained using DEPWI, which allows correction of underestimation of tumor blood volume due to leakage of contrast agents from tumor vessels, in addition to simultaneous acquisition of tumor vessel...
Soil weight (lbf/ft³) at Hanford waste storage locations (2 volumes)
Energy Technology Data Exchange (ETDEWEB)
Pianka, E.W.
1994-12-01
Hanford Reservation waste storage tanks are fabricated in accordance with approved construction specifications. After an underground tank has been constructed in the excavation prepared for it, soil is placed around the tank and compacted by an approved compaction procedure. To ensure compliance with the construction specifications, measurements of the soil compaction are taken by QA inspectors using test methods based on American Society for Testing and Materials (ASTM) standards. Soil compaction test data taken for the 241AP, 241AN, and 241AW tank farms, constructed between 1978 and 1986, are included. The individual data values have been numerically processed to obtain average soil density values for each of these tank farms.
Evaluation on Waste Volume and Weight from Decommissioning of Kori Unit 1 Reactor Vessel
Energy Technology Data Exchange (ETDEWEB)
Choi, Yujeong; Lee, Seong-Cheol; Kim, Chang-Lak [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)
2015-05-15
In this paper, the concept of cutting the reactor vessel and container for decommissioning Kori unit 1 has been investigated. As a result of the investigation, it is found that cutting the reactor vessel into small pieces, especially the upper and bottom heads of the reactor vessel, is more effective in reducing the total disposal volume generated from decommissioning. As part of continuing efforts to prepare for the shutdown of nuclear power plants, several studies have been conducted to establish plans for disposing of decommissioning waste from nuclear power plants. When decommissioning a nuclear power plant, most of the radioactive waste is generated from the primary side, including the reactor vessel. The amount of radioactive waste generated from decommissioning is significantly affected by several factors, such as the dismantling method, waste classification, reactor lifetime, and disposal method.
Lancaster, J. W.
1975-01-01
Various types of lighter-than-air vehicles from fully buoyant to semibuoyant hybrids were examined. Geometries were optimized for gross lifting capabilities for ellipsoidal airships, modified delta planform lifting bodies, and a short-haul, heavy-lift vehicle concept. It is indicated that: (1) neutrally buoyant airships employing a conservative update of materials and propulsion technology provide significant improvements in productivity; (2) propulsive lift for VTOL and aerodynamic lift for cruise significantly improve the productivity of low to medium gross weight ellipsoidal airships; and (3) the short-haul, heavy-lift vehicle, consisting of a simple combination of an ellipsoidal airship hull and existing helicopter componentry, provides significant potential for low-cost, near-term applications for ultra-heavy lift missions.
Directory of Open Access Journals (Sweden)
Yusoff WAY
2015-01-01
Laser Sintering (LS) allows functional parts to be produced in a wide range of powdered materials using a dedicated machine, and is thus gaining popularity within the field of rapid prototyping. It offers the user the ability to optimise part design in order to meet customer requirements with few manufacturing restrictions. A problem with LS is that sometimes the surface of the parts produced displays a texture similar to that of the skin of an orange (the so-called "orange peel" texture). The main aim of this research is to develop a methodology for controlling the input material properties of PA12 powder that will ensure consistent, good quality of the fabricated parts. Melt Flow Rate (MFR) and gel permeation chromatography (GPC) were employed to measure the flow viscosity and molecular weight distributions of polyamide PA12 powder grades. The experimental results showed that recycled PA12 powder with higher melt viscosity has greater chain entanglement due to longer molecular chains, which causes higher resistance to flow and results in a poor, rough surface finish on laser-sintered parts.
Institute of Scientific and Technical Information of China (English)
MAO Zhi-ping(毛志平); YANG Charles Q
2003-01-01
Durable press finishing of cotton fabrics with polycarboxylic acid increases fabric wrinkle-resistance at the expense of its mechanical strength. Severe tensile strength loss is the major disadvantage of wrinkle-resistant cotton fabrics. Tensile strength loss of cotton fabric crosslinked by a polycarboxylic acid can be attributed to depolymerization and crosslinking of cellulose molecules. Measurement of the molecular weight of cotton fabric before and after crosslinking by polycarboxylic acids offers the possibility of directly probing the depolymerization. In this research, a multiple-angle laser light scattering photometer was used to determine the absolute molecular weight of cotton fabric treated with BTCA at different pH and then hydrolyzed with 0.5 M NaOH solution at 50 ℃ for 144 h. The results indicate that the average molecular weights of cotton fabric treated with polycarboxylic acids at different pH are almost the same.
Nash, R. E.; Loehr, J. A.; Lee, S. M. C.; English, K. L.; Evans, H.; Smith, S. A.; Hagan, R. D.
2009-01-01
Space flight-induced muscle atrophy, particularly in the postural and locomotory muscles, may impair task performance during long-duration space missions and planetary exploration. High-intensity free weight (FW) resistive exercise training has been shown to prevent atrophy during bed rest, a space flight analog. NASA developed the Advanced Resistive Exercise Device (ARED) to simulate the characteristics of FW exercise (i.e., constant mass, inertial force) and to be used as a countermeasure during International Space Station (ISS) missions. PURPOSE: To compare the efficacy of ARED and FW training in inducing hypertrophy in specific muscle groups in ambulatory subjects prior to deploying ARED on the ISS. METHODS: Twenty untrained subjects were assigned to either the ARED (8 males, 3 females) or FW (6 males, 3 females) group and participated in a periodized training protocol consisting of squat (SQ), heel raise (HR), and deadlift (DL) exercises 3 d wk-1 for 16 wks. SQ, HR, and DL muscle strength (1RM) was measured before, after 8 wks, and after 16 wks of training to prescribe exercise and measure strength changes. Muscle volume of the vasti group (V), hamstring group (H), hip adductor group (ADD), medial gastrocnemius (MG), lateral gastrocnemius (LG), and deep posterior muscles including soleus (DP) was measured using MRI pre- and post-training. Consecutive cross-sectional images (8 mm slices with a 2 mm gap) were analyzed and summed. Anatomical references ensured that the same muscle sections were analyzed pre- and post-training. Two-way repeated measures ANOVAs were used to compare muscle strength and volume between training devices. RESULTS: SQ, HR, and DL 1RM increased in both FW (SQ: 49±6%, HR: 12±2%, DL: 23±4%) and ARED (SQ: 31±4%, HR: 18±2%, DL: 23±3%) groups. Both groups increased muscle volume in the V (FW: 13±2%, ARED: 10±2%), H (FW: 3±1%, ARED: 3±1%), ADD (FW: 15±2%, ARED: 10±1%), LG (FW: 7±2%, ARED: 4±1%), MG (FW: 7±2%, ARED: 5±2%), and DP (FW: 2
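The slice-summation volumetry described above (8 mm slices with a 2 mm gap, so each image represents 10 mm of muscle length) can be sketched as follows; the function name and area values are illustrative, not from the study:

```python
def muscle_volume_cm3(areas_cm2, slice_mm=8.0, gap_mm=2.0):
    """Sum consecutive cross-sectional areas; each slice-plus-gap
    pair represents (slice_mm + gap_mm) of muscle length."""
    step_cm = (slice_mm + gap_mm) / 10.0  # mm -> cm
    return sum(areas_cm2) * step_cm

# Hypothetical areas (cm^2) for a few consecutive vasti slices:
vol = muscle_volume_cm3([40.0, 42.5, 41.0, 38.5])  # 162.0 cm^3
```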
Liu, Hongxu; Jiao, Xiangmin
2016-06-01
ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) schemes are widely used high-order schemes for solving partial differential equations (PDEs), especially hyperbolic conservation laws with piecewise smooth solutions. For structured meshes, these techniques can achieve high order accuracy for smooth functions while being non-oscillatory near discontinuities. For unstructured meshes, which are needed for complex geometries, similar schemes are required but they are much more challenging. We propose a new family of non-oscillatory schemes, called WLS-ENO, in the context of solving hyperbolic conservation laws using finite-volume methods over unstructured meshes. WLS-ENO is derived based on Taylor series expansion and solved using a weighted least squares formulation. Unlike other non-oscillatory schemes, the WLS-ENO does not require constructing sub-stencils, and hence it provides a more flexible framework and is less sensitive to mesh quality. We present rigorous analysis of the accuracy and stability of WLS-ENO, and present numerical results in 1-D, 2-D, and 3-D for a number of benchmark problems, and also report some comparisons against WENO.
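The weighted least-squares formulation at the heart of WLS-ENO can be illustrated in miniature. This is only the WLS kernel: the actual scheme builds its basis from Taylor expansions of cell averages over unstructured stencils and chooses the weights adaptively near discontinuities; the function name and data here are illustrative.

```python
import numpy as np

def wls_fit(x, u, w, degree=2):
    """Weighted least-squares polynomial fit: minimizes
    sum_i w_i * (p(x_i) - u_i)^2 over polynomials p of given degree."""
    V = np.vander(x, degree + 1, increasing=True)  # monomial basis
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * u, rcond=None)
    return coef  # coefficients, lowest order first

# Smooth data: a quadratic u = x^2 is recovered exactly with uniform weights.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
coef = wls_fit(x, x**2, np.ones_like(x))
```

Downweighting points that lie across a discontinuity (small w_i) is what lets the reconstruction stay non-oscillatory without constructing explicit sub-stencils.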
Strayer, Richard F.; Hummerick, Mary E.; Richards, Jeffrey T.; McCoy, LaShelle E.; Roberts, Michael S.; Wheller, Raymond M.
2011-01-01
The project reported here provides microbial characterization support to the Waste Management Systems (WMS) element of NASA's Life Support and Habitation Systems (LSHS) program. Conventional microbiological methods were used to detect and enumerate microorganisms in STS Volume F Compartment trash for three shuttle missions: STS 133, 134, and 135. This trash was usually made available within 2 days of landing at KSC. The Volume F bag was weighed and opened, and the contents were cataloged and placed into categories: personal hygiene items - including EVA maximum absorbent garments (MAGs) and elbow packs (daily toilet wipes, etc.), drink containers, food waste (and containers), office waste (paper), and packaging materials - plastic film and duct tape. The average wet trash generation rate for the three STS missions was 0.362 ± 0.157 kg (wet) per crew-member per day. This was considerably lower and more variable than the average rate for 4 STS missions reported for FY10. Trash subtotals by category: personal hygiene wastes, 56%; drink items, 11%; food wastes, 18%; office waste, 3%; and plastic film, 12%. These wastes have an abundance of easily biodegraded compounds that can support the growth of microorganisms. Microbial characterization of the trash showed that large numbers of bacteria and fungi have taken advantage of this readily available nutrient source to proliferate. Exterior and interior surfaces of plastic film bags containing trash were sampled; counts of cultivatable microbes were generally low and mostly occurred on trash bundles within the exterior trash bags. Personal hygiene wastes, drink containers, and food wastes and packaging all contained high levels of, mostly, aerobic heterotrophic bacteria and lower levels of yeasts and molds. Isolates from plate count media were obtained and identified, and were mostly aerobic heterotrophs with some facultative anaerobes. These are usually considered common environmental isolates on Earth. However, several pathogens were also
Zaman, Muhammad; Kim, Guinyun; Naik, Haladhara; Kim, Kwangsoo; Cho, Young-Sik; Lee, Young-Ok; Shin, Sung-Gyun; Cho, Moo-Hyun; Kang, Yeong-Rok; Lee, Man-Woo
2017-04-01
The flux-weighted average cross-sections of (γ, xn) reactions on natZn induced by bremsstrahlung end-point energies of 50, 55, 60, and 65 MeV have been determined by an activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Pohang, Korea. The theoretical photon-induced reaction cross-sections of natZn as a function of photon energy were taken from the TENDL-2014 nuclear data library based on the TALYS 1.6 program. The flux-weighted average cross-sections were obtained from the literature data and from the mono-energetic-photon theoretical values of TENDL-2014. The flux-weighted reaction cross-sections from the present work and the literature data at different bremsstrahlung end-point energies are in good agreement with the theoretical values. It was found that the individual natZn(γ, xn) reaction cross-sections increase sharply from the reaction threshold up to the energy where the next reaction channel opens, remain nearly constant while that channel grows, and then decrease slowly with increasing bremsstrahlung end-point energy owing to the opening of further reaction channels.
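Forming a flux-weighted average from mono-energetic cross-sections is a discrete weighted mean, ⟨σ⟩ = Σσᵢφᵢ / Σφᵢ. A sketch with hypothetical cross-section and flux values (not data from the paper):

```python
def flux_weighted_xs(sigma, flux):
    """Flux-weighted average cross-section:
    <sigma> = sum(sigma_i * phi_i) / sum(phi_i)."""
    assert len(sigma) == len(flux)
    return sum(s * f for s, f in zip(sigma, flux)) / sum(flux)

# Hypothetical mono-energetic cross-sections (mb) weighted by a
# bremsstrahlung-like flux that falls off with photon energy:
avg = flux_weighted_xs([10.0, 60.0, 40.0, 20.0], [8.0, 4.0, 2.0, 1.0])  # 28.0 mb
```

Because the bremsstrahlung flux is largest at low photon energies, the average is pulled toward the low-energy cross-sections, which is why it evolves with end-point energy as new channels open.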
McConnachie, Alex; Haig, Caroline; Sinclair, Lesley; Bauld, Linda; Tappin, David M
2017-07-20
The Cessation in Pregnancy Incentives Trial (CPIT), which offered financial incentives for smoking cessation during pregnancy, showed a clinically and statistically significant improvement in cessation. However, infant birth weight was not seen to be affected. This study re-examines birth weight using an intuitive method and a complier average causal effects (CACE) method to uncover important information missed by intention-to-treat analysis. CPIT offered financial incentives up to £400 to pregnant smokers to quit. With incentives, 68 women (23.1%) were confirmed non-smokers at primary outcome, compared to 25 (8.7%) without incentives, a difference of 14.3% (Fisher test, p < 0.001). The premise is that birth weight gain with incentives is attributable only to potential quitters. We compared an intuitive approach to a CACE analysis. Mean birth weight of potential quitters in the incentives intervention group (who therefore quit) was 3338 g, compared with 3193 g for potential quitters in the control group (who did not quit). The difference attributable to incentives was 3338 - 3193 = 145 g (95% CI -617, +803). The mean difference in birth weight between the intervention and control groups was 21 g, and the difference in the proportion who managed to quit was 14.3%. Since the intervention consisted of the offer of incentives to quit smoking, the intervention was received by all women in the intervention group. However, "compliance" was successfully quitting with incentives, and the CACE analysis yielded an identical result: causal birth weight increase 21 g ÷ 0.143 = 145 g. Policy makers have great difficulty giving pregnant women money to stop smoking. This study indicates that a small, clinically insignificant improvement in average birth weight is likely to hide an important, clinically significant increase in infants born to pregnant smokers who want to stop but cannot achieve smoking cessation without the addition of financial voucher incentives. ISRCTN Registry, ISRCTN87508788
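The CACE arithmetic quoted above is the intention-to-treat effect divided by the compliance difference. A sketch using the abstract's rounded inputs (21 g and 14.3%); with these rounded figures the result is ≈147 g rather than exactly the reported 145 g, which presumably comes from unrounded values:

```python
def cace(itt_effect, compliance_rate):
    """Complier average causal effect: the intention-to-treat
    effect scaled up by the proportion of compliers."""
    return itt_effect / compliance_rate

effect = cace(21.0, 0.143)  # ~147 g causal birth-weight increase among quitters
```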
Institute of Scientific and Technical Information of China (English)
蒋庆哲; 宋昭峥; 柯明; 赵密福
2005-01-01
Polymerization of octadecyl acrylate was studied in four solvents: carbon tetrachloride, chloroform, methylbenzene, and tetrachloroethane. Experimental results indicate that the sequence of chain transfer constants of the solvents in the polymerization of octadecyl acrylate is: carbon tetrachloride > chloroform > methylbenzene > tetrachloroethane. The four solvents differ in their influence on the solubility of polyoctadecyl acrylate. In chloroform, polyoctadecyl acrylate shows the highest relative viscosity and the lowest chain termination rate constant. At higher conversion, the average relative molecular weight of polyoctadecyl acrylate depends mainly on the chain transfer constant of the solvent. At monomer conversions higher than 30%, the viscosity effect induced by the polymer's molecular shape in the solvents has a strong influence on the relative molecular weight of the polymer obtained.
Shakilur Rahman, Md.; Kim, Kwangsoo; Kim, Guinyun; Naik, Haladhara; Nadeem, Muhammad; Thi Hien, Nguyen; Shahid, Muhammad; Yang, Sung-Chul; Cho, Young-Sik; Lee, Young-Ouk; Shin, Sung-Gyun; Cho, Moo-Hyun; Woo Lee, Man; Kang, Yeong-Rok; Yang, Gwang-Mo; Ro, Tae-Ik
2016-07-01
We measured the flux-weighted average cross-sections and the isomeric yield ratios of 99m,g, 100m,g, 101m,g, 102m,gRh in the 103Rh(γ, xn) reactions with bremsstrahlung end-point energies of 55 and 60 MeV by an activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Korea. The flux-weighted average cross-sections were calculated by using the computer code TALYS 1.6 based on mono-energetic photons and compared with the present experimental data. The flux-weighted average cross-sections of the 103Rh(γ, xn) reactions at intermediate bremsstrahlung energies were measured for the first time and are found to increase from the threshold up to a particular energy, where the other reaction channels open up. Thereafter, they decrease with bremsstrahlung energy owing to partition among different reaction channels. The isomeric yield ratios (IR) of 99m,g, 100m,g, 101m,g, 102m,gRh in the 103Rh(γ, xn) reactions from the present work were compared with the literature data for the 103Rh(d, x), 102-99Ru(p, x), 103Rh(α, αn), 103Rh(α, 2p3n), 102Ru(3He, x), and 103Rh(γ, xn) reactions. It was found that the IR values of 102, 101, 100, 99Rh in all these reactions increase with the projectile energy, which indicates the role of excitation energy. At the same excitation energy, the IR values of 102, 101, 100, 99Rh are higher in the charged-particle-induced reactions than in the photon-induced reaction, which indicates the role of input angular momentum.
Energy Technology Data Exchange (ETDEWEB)
Uematsu, Hidemasa [University of Fukui, Department of Radiology, Faculty of Medical Sciences, Fukui (Japan); Maeda, Masayuki [Mie University School of Medicine, Department of Radiology, Mie (Japan)
2006-01-01
Perfusion-weighted magnetic resonance (MR) imaging using contrast agents plays a key role in characterizing tumors of the brain. We have shown that double-echo perfusion-weighted MR imaging (DEPWI) is potentially useful in assessing brain tumors. Quantitative indices, such as tumor blood volume, are obtained using DEPWI, which allows correction of underestimation of tumor blood volume due to leakage of contrast agents from tumor vessels, in addition to simultaneous acquisition of tumor vessel permeability. This article describes basic concepts of DEPWI and demonstrates clinical applications in brain tumors. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Terk, M.R. [University of Southern California, Los Angeles, CA (United States). Dept. of Radiology; LAC/USC Imaging Science Center, Los Angeles, CA (United States); Dardashti, S. [University of Southern California, Los Angeles, CA (United States). Dept. of Radiology; Liebman, H.A. [University of Southern California, Los Angeles, CA (United States). Dept. of Medicine
2000-10-01
Purpose. To determine whether T1-weighted magnetic resonance (MR) images can demonstrate response in the marrow of patients with type 1 Gaucher disease treated with enzyme replacement therapy (ERT), and to determine whether a relationship exists between liver and spleen volume reductions and visible marrow changes. Patients. Forty-two patients with type 1 Gaucher disease were evaluated on at least two occasions. Thirty-two patients received ERT. Of these patients, 15 had a baseline examination prior to the initiation of ERT. The remaining 10 patients did not receive ERT. Design. T1-weighted and gradient recalled echo (GRE) coronal images of the femurs and hips were obtained. Concurrently, liver and spleen volumes were determined using contiguous breath-hold axial gradient-echo images. T1-weighted images of the hips and femurs were evaluated to determine change or lack of change in the yellow marrow. Results. Of the 32 patients receiving ERT, 14 (44%) demonstrated increased signal on T1-weighted images, suggesting an increase in the amount of yellow marrow. If only the 15 patients with a baseline examination were considered, the response rate to ERT was 67%. Using Student's t-test, a highly significant correlation (P<0.005) was found between marrow response and reduction in liver and spleen volume. Conclusions. Marrow changes in patients receiving ERT can be detected by T1-weighted images. This response correlated with reductions in visceral volumes (P<0.0005). (orig.)
Gongadze, Ekaterina; Iglič, Aleš
2013-03-01
Water ordering near a negatively charged electrode is one of the decisive factors determining the interactions of an electrode with the surrounding electrolyte solution or tissue. In this work, the generalized Langevin-Bikerman model (Gongadze-Iglič model), taking into account the cavity field and the excluded volume principle, is used to calculate the space dependency of the ion and water number densities in the vicinity of a highly charged surface. It is shown that for high enough surface charge densities the usual trend of increasing counterion number density towards the charged surface may be completely reversed, i.e. a drop in the counterion number density near the charged surface is predicted.
Directory of Open Access Journals (Sweden)
Bomba M
2015-03-01
Monica Bomba,1,* Anna Riva,1,* Sabrina Morzenti,2 Marco Grimaldi,3 Francesca Neri,1 Renata Nacinovich1 1Child and Adolescent Mental Health Department, San Gerardo Hospital, University of Milano-Bicocca, Monza, Italy; 2Medical Physics Department, San Gerardo Hospital, Monza, Italy; 3Department of Radiology, Humanitas Research Hospital, Milan, Italy *These authors contributed equally to this work Abstract: The recent literature on anorexia nervosa (AN) suggests that functional and structural abnormalities of cortico-limbic areas might play a role in the evolution of the disease. We explored global and regional brain volumes in a cross-sectional and follow-up study of adolescents affected by AN. Eleven adolescents with AN underwent a voxel-based morphometry study at the time of diagnosis and immediately after weight recovery. Data were compared to volumes obtained in eight healthy, age- and sex-matched controls. Subjects with AN showed increased cerebrospinal fluid volumes and decreased white and gray matter volumes when compared to controls. Moreover, a significant regional gray matter decrease in the insular cortex and cerebellum was found at the time of diagnosis. No regional white matter decrease was found between patients and controls. Correlations between psychological evaluation and insular volumes were explored. After weight recovery, gray matter volumes normalized while reduced global white matter volumes persisted. Keywords: anorexia nervosa, adolescent, gray matter, insula, voxel-based morphometry study
DEFF Research Database (Denmark)
Larsson, Henrik B W; Courivaud, Frédéric; Rostrup, Egill
2009-01-01
Assessment of vascular properties is essential to diagnosis and follow-up and basic understanding of pathogenesis in brain tumors. In this study, a procedure is presented that allows concurrent estimation of cerebral perfusion, blood volume, and blood-brain permeability from dynamic T(1)-weighted...
Institute of Scientific and Technical Information of China (English)
刘言佳; 邹东旭; 蔡茜彤; 李铮; 芦明春
2012-01-01
Inulin was fractionated by molecular size, and the influence of different treatment conditions on the preparation of inulin gels of different weight-average molecular mass by a heating-cooling method was investigated. Stepwise ethanol precipitation separated the inulin into nine fractions, and the separation was verified by Sephadex G-50 gel column chromatography. The weight-average molecular mass of each fraction was measured by viscometry, and the effects of molecular mass, heating temperature and solution concentration on gel formation were studied. The results showed that the weight-average molecular mass within each fraction was fairly uniform. The larger the molecular mass of the inulin, the more easily a gel formed; below a certain value no gel could form. Gel formation first became easier and then harder as the treatment temperature rose, with both too low and too high a temperature hindering gelation; the solution concentration required to form a gel was lowest at 70 ℃. As the inulin concentration increased, the temperature range over which a gel could form widened, reaching its maximum at a concentration of 35%.
Energy Technology Data Exchange (ETDEWEB)
Shakilur Rahman, Md.; Kim, Kwangsoo; Kim, Guinyun; Nadeem, Muhammad; Thi Hien, Nguyen; Shahid, Muhammad [Kyungpook National University, Department of Physics, Daegu (Korea, Republic of); Naik, Haladhara [Bhabha Atomic Research Centre, Radiochemistry Division, Mumbai (India); Yang, Sung-Chul; Cho, Young-Sik; Lee, Young-Ouk [Korea Atomic Energy Research Institute, Nuclear Data Center, Daejeon (Korea, Republic of); Shin, Sung-Gyun; Cho, Moo-Hyun [Pohang University of Science and Technology, Division of Advanced Nuclear Engineering, Pohang (Korea, Republic of); Woo Lee, Man; Kang, Yeong-Rok; Yang, Gwang-Mo [Dongnam Institute of Radiological and Medical Science, Research Center, Busan (Korea, Republic of); Ro, Tae-Ik [Dong-A University, Department of Materials Physics, Busan (Korea, Republic of)
2016-07-15
We measured the flux-weighted average cross-sections and the isomeric yield ratios of {sup 99m,g,100m,g,101m,g,102m,g}Rh in the {sup 103}Rh(γ, xn) reactions at bremsstrahlung end-point energies of 55 and 60 MeV by activation and off-line γ-ray spectrometry, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Korea. The flux-weighted average cross-sections were calculated with the computer code TALYS 1.6 based on mono-energetic photons and compared with the present experimental data. The flux-weighted average cross-sections of the {sup 103}Rh(γ, xn) reactions at intermediate bremsstrahlung energies are measured here for the first time; they increase from the threshold up to a particular value, where other reaction channels open up, and thereafter decrease with bremsstrahlung energy owing to the partition of the flux among different reaction channels. The isomeric yield ratios (IR) of {sup 99m,g,100m,g,101m,g,102m,g}Rh in the {sup 103}Rh(γ, xn) reactions from the present work were compared with the literature data for the {sup 103}Rh(d, x), {sup 102-99}Ru(p, x), {sup 103}Rh(α, αn), {sup 103}Rh(α, 2p3n), {sup 102}Ru({sup 3}He, x), and {sup 103}Rh(γ, xn) reactions. It was found that the IR values of {sup 102,101,100,99}Rh in all these reactions increase with the projectile energy, which indicates the role of excitation energy. At the same excitation energy, the IR values of {sup 102,101,100,99}Rh are higher in the charged-particle-induced reactions than in the photon-induced reaction, which indicates the role of input angular momentum. (orig.)
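A flux-weighted average cross-section, as used above, is the set of mono-energetic cross-sections weighted by the photon flux in each energy bin. A minimal sketch with illustrative numbers (not the measured data):

```python
def flux_weighted_cross_section(sigma, flux):
    """Average cross-section weighted by the photon flux per energy bin:
    sum(sigma_i * phi_i) / sum(phi_i)."""
    return sum(s * f for s, f in zip(sigma, flux)) / sum(flux)

# Illustrative values (mb and arbitrary flux units), not measured data.
sigma = [2.0, 4.0, 6.0]
flux = [1.0, 3.0, 1.0]
```

With these numbers the weighted average is (2 + 12 + 6)/5 = 4.0 mb, pulled toward the bin with the largest flux.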
DEFF Research Database (Denmark)
Weber, Nicolai Rosager; Pedersen, Ken Steen; Hansen, Christian Fink;
2017-01-01
after weaning) on average daily weight gain (ADG); (2) to compare the effect of treatment with doxycycline or tylosine on diarrhoea prevalence, pathogenic bacterial load, and ADG; (3) to evaluate PCR testing of faecal pen floor samples as a diagnostic tool for determining the optimal time of treatment...... LI-positive pens (p = 0.004), lower excretion levels of LI (p = 0.013), and fewer pens with a high level of LI (p = 0.031) compared to pens treated with tylosine. There was no significant difference in F4, F18 and PILO levels after treatment with the two antibiotic compounds. There was a significant difference (p = 0.04) of mean diarrhoea prevalence on day 21 of the study between pens treated with tylosine (0.254, 95% CI: 0.184–0.324), and doxycycline (0.167, 95% CI: 0.124–0.210). The type of antibiotic compound was not found to have a significant effect on ADG (p = 0.209). (3) Pigs starting treatment...... was achieved when treatment was initiated 14 days after weaning in pens where intestinal pathogens were detected. Doxycycline was more effective in reducing diarrhoea and LI excretion levels than treatment with tylosine....
Institute of Scientific and Technical Information of China (English)
孔晨燕; 谢从华; 苏剑峰; 于丹
2012-01-01
Histogram-based fuzzy filter denoising methods for removing trailing noise often blur the image and leave residual noise. To address this problem, the authors propose a new image denoising method based on a Generalized Gaussian Mixture (GGM) model and a weighted-average image filter. First, a generalized Gaussian mixture model of the image is constructed. Second, noise pixels are identified from the feature differences between each point and its neighbours. Finally, a weighted-average filter built from the GGM performs the denoising. Compared with a histogram-based filter and a classical partial differential equation method, experimental results show that the proposed method achieves a better denoising effect.
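A generic weighted-average image filter of the kind described can be sketched with a fixed 3x3 kernel (the paper derives its weights from the generalized Gaussian mixture model; this sketch shows only the filtering step, with an assumed kernel):

```python
import numpy as np

def weighted_average_filter(img, weights):
    """Denoise by replacing each pixel with a weighted average of its
    3x3 neighbourhood. `weights` is a 3x3 kernel, normalised here so
    the output preserves the local mean."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    padded = np.pad(img, 1, mode="edge")  # replicate borders
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += w[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out
```

A uniform image passes through unchanged, while isolated noise spikes are averaged down toward their neighbourhood.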
Energy Technology Data Exchange (ETDEWEB)
Weijermars, Ruud, E-mail: R.Weijermars@TUDelft.nl [Alboran Energy Strategy Consultants and Department of Geotechnology, Delft University of Technology (Netherlands)
2011-10-15
The total annual revenue stream in the US natural gas value chain over the past decade is analyzed. Growth of total revenues has been driven by higher wellhead prices, which peaked in 2008. The emergence of the unconventional gas business was made possible in part by the pre-recessional rise in global energy prices. The general rise in natural gas prices between 1998 and 2008 did not lower overall US gas consumption, but shifts have occurred during the past decade in the consumption levels of individual consumer groups. Industry's gas consumption has decreased, while power stations increased their gas consumption. Commercial and residential consumers maintained flat gas consumption patterns. This study introduces the Weighted Average Cost of Retail Gas (WACORG) as a tool to calculate and monitor an average retail price based on the different natural gas prices charged to the traditional consumer groups. The WACORG also provides insight in wellhead revenues and may be used as an instrument for calibrating retail prices in support of wellhead price-floor regulation. Such price-floor regulation is advocated here as a possible mitigation measure against excessive volatility in US wellhead gas prices to improve the security of gas supply. - Highlights: > This study introduces an average retail price, WACORG. > WACORG can monitor price differentials for the traditional US gas consumer groups. > WACORG also provides insight in US wellhead revenues. > WACORG can calibrate retail prices in support of wellhead price-floor regulation. > Gas price-floor can improve security of gas supply by reducing price volatility.
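The WACORG described above is a consumption-weighted mean of the retail prices charged to the traditional consumer groups. A minimal sketch with hypothetical prices and volumes (not the paper's data):

```python
def wacorg(prices, volumes):
    """Weighted Average Cost of Retail Gas: each consumer group's retail
    price weighted by that group's consumption volume."""
    total = sum(volumes.values())
    return sum(prices[g] * volumes[g] for g in prices) / total

# Hypothetical prices ($/Mcf) and annual volumes (Bcf) for the four groups.
prices = {"residential": 10.0, "commercial": 8.5, "industrial": 5.0, "power": 4.8}
volumes = {"residential": 4800, "commercial": 3200, "industrial": 6800, "power": 7500}
```

Because the large-volume industrial and power-sector groups pay lower prices, the weighted average sits well below the residential price.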
Energy Technology Data Exchange (ETDEWEB)
Bongers, M.N. [Klinikum der Eberhard-Karls-Universitaet Tuebingen, Abteilung fuer Diagnostische und Interventionelle Radiologie, Tuebingen (Germany); Universitaetsklinikum Tuebingen, Sektion fuer Experimentelle Radiologie der Abteilung fuer Diagnostische und Interventionelle Radiologie, Tuebingen (Germany); Stefan, N.; Fritsche, A.; Haering, H.U. [Universitaetsklinikum Tuebingen, Innere Medizin IV - Endokrinologie und Diabetologie, Angiologie, Nephrologie und Klinische Chemie, Tuebingen (Germany); Helmholtz-Zentrum Muenchen an der Universitaet Tuebingen, Institut fuer Diabetes-Forschung und Metabolische Erkrankungen (IDM), Tuebingen (Germany); Nikolaou, K. [Klinikum der Eberhard-Karls-Universitaet Tuebingen, Abteilung fuer Diagnostische und Interventionelle Radiologie, Tuebingen (Germany); Schick, F. [Universitaetsklinikum Tuebingen, Sektion fuer Experimentelle Radiologie der Abteilung fuer Diagnostische und Interventionelle Radiologie, Tuebingen (Germany); Machann, J. [Universitaetsklinikum Tuebingen, Sektion fuer Experimentelle Radiologie der Abteilung fuer Diagnostische und Interventionelle Radiologie, Tuebingen (Germany); Helmholtz-Zentrum Muenchen an der Universitaet Tuebingen, Institut fuer Diabetes-Forschung und Metabolische Erkrankungen (IDM), Tuebingen (Germany); Deutsches Zentrum fuer Diabetesforschung (DZD), Neuherberg (Germany)
2015-04-01
The aim of this study was to investigate potential associations between changes in liver volume, the amount of intrahepatic lipids (IHL) and body weight during lifestyle interventions. In a prospective study 150 patients with an increased risk of developing type 2 diabetes mellitus were included who followed a caloric restriction diet for 6 months. In the retrospective analysis 18 women and 9 men (age range 22-71 years) with an average body mass index (BMI) of 32 kg/m{sup 2} were enrolled. The liver volume was determined at the beginning and after 6 months by three-dimensional magnetic resonance imaging (3D-MRI, echo gradient, opposed-phase), and IHL were quantified by volume-selective MR spectroscopy in single-voxel stimulated echo acquisition mode (STEAM). Univariable and multivariable correlation analyses between changes of liver volume (Δliver volume), intrahepatic lipids (ΔIHL) and body weight (ΔBW) were performed. Univariable correlation analysis in the whole study cohort showed associations between ΔIHL and ΔBW (r = 0.69; p < 0.0001), ΔIHL and Δliver volume (r = 0.66; p = 0.0002) as well as ΔBW and Δliver volume (r = 0.5; p = 0.0073). Multivariable correlation analysis revealed that changes of liver volume are primarily determined by changes in IHL, independent of changes in body weight (β = 0.0272; 95% CI: 0.0155-0.034; p < 0.0001). Changes of liver volume during lifestyle interventions are thus primarily determined by changes of IHL, independent of changes of body weight. These results show the reversibility of augmented liver volume in steatosis if IHL can be reduced during lifestyle interventions. (orig.)
Caltrans Average Annual Daily Traffic Volumes (2004)
California Environmental Health Tracking Program — [ from http://www.ehib.org/cma/topic.jsp?topic_key=79 ] Traffic exhaust pollutants include compounds such as carbon monoxide, nitrogen oxides, particulates (fine...
Energy Technology Data Exchange (ETDEWEB)
Sun, Hongzan; Xin, Jun; Zhang, Shaomin; Guo, Qiyong; Lu, Yueyue; Zhai, Wei; Zhao, Long [Shengjing Hospital of China Medical University, Department of Radiology, Shenyang, Liaoning (China); Peng, Weiai [NM Marketing, Great China, Philips Healthcare, Guangzhou (China); Wang, Baijun [Philips China Investment Co. Ltd. Shenyang Office, Shenyang, Liaoning (China)
2014-05-15
To evaluate the concordance among {sup 18}F-FDG PET imaging, MR T2-weighted (T2-W) imaging and apparent diffusion coefficient (ADC) maps with diffusion-weighted (DW) imaging in cervical cancer using hybrid whole-body PET/MR. This study prospectively included 35 patients with cervical cancer who underwent pretreatment {sup 18}F-FDG PET/MR imaging. {sup 18}F-FDG PET and MR images were fused using standard software. The percent of the maximum standardized uptake values (SUV{sub max}) was used to contour tumours on PET images, and volumes were calculated automatically. Tumour volumes measured on T2-W and DW images were calculated with standard techniques of tumour area multiplied by the slice profile. Parametric statistics were used for data analysis. FDG PET tumour volumes calculated using SUV{sub max} (14.30 ± 4.70) and T2-W imaging volume (33.81 ± 27.32 cm{sup 3}) were similar (P > 0.05) at 35 % and 40 % of SUV{sub max} (32.91 ± 18.90 cm{sup 3} and 27.56 ± 17.19 cm{sup 3} respectively) and significantly correlated (P < 0.001; r = 0.735 and 0.766). The mean DW volume was 30.48 ± 22.41 cm{sup 3}. DW volumes were not significantly different from FDG PET volumes at either 35 % SUV{sub max} or 40 % SUV{sub max} or from T2-W imaging volumes (P > 0.05). PET subvolumes with increasing SUV{sub max} cut-off percentage showed an inverse change in mean ADC values on DW imaging (P < 0.001, ANOVA). Hybrid PET/MR showed strong volume concordance between FDG PET, and T2-W and DW imaging in cervical cancer. Cut-off at 35 % or 40 % of SUV{sub max} is recommended for {sup 18}F-FDG PET/MR SUV-based tumour volume estimation. The linear tumour subvolume concordance between FDG PET and DW imaging demonstrates individual regional concordance of metabolic activity and cell density. (orig.)
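SUV-threshold tumour volumes of the kind compared above are obtained by counting voxels at or above a fixed percentage of SUVmax and multiplying by the voxel volume. A minimal sketch (the cutoff fraction and voxel volume below are illustrative):

```python
import numpy as np

def pet_volume(suv, cutoff_frac, voxel_ml):
    """Tumour volume (mL) from a PET SUV map: voxels at or above
    cutoff_frac * SUVmax, times the volume of one voxel."""
    thresh = cutoff_frac * suv.max()
    return np.count_nonzero(suv >= thresh) * voxel_ml
```

Raising the cutoff fraction shrinks the contour toward the most avid subvolume, which is why the paper reports distinct volumes at 35% and 40% of SUVmax.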
Institute of Scientific and Technical Information of China (English)
莫则尧
2001-01-01
A Multilevel Averaging Weight (MAW) dynamic load balancing method, suitable for both homogeneous and heterogeneous parallel computing environments, is presented in this paper to solve the one-dimensional dynamic load imbalance problems arising from parallel Lagrangian numerical simulation of multi-material unsteady fluid dynamics. First, a one-dimensional load imbalance model is designed to simplify the theoretical analysis of the robustness of the MAW method. In this model, the domain is uniformly divided into grid cells, each of which is assumed to require a different CPU time. Given P processors, the task is to find a domain decomposition that keeps the loads of the subdomains assigned to the individual processors balanced. Second, we present a load balancing method, the Averaging Weight (AW) method. Theoretical analysis shows that, when the number of processors is 2, the AW method efficiently brings the system from a very imbalanced state to a very balanced state in 2-4 iterations. Unfortunately, this conclusion does not generalize to larger numbers of processors. Building on the idea of the AW method, we therefore designed another load balancing method, the Multilevel Averaging Weight method. A similar theoretical analysis shows that this method balances the load in C log P iterations for any number of processors, where P is the number of processors and C is the number of iterations the AW method needs when P = 2. This is usually sufficient to track fluctuations in the load imbalance as the parallel numerical simulation progresses. Moreover, both the AW and MAW methods are suitable for both homogeneous and heterogeneous parallel computing environments. Third, we organized numerical experiments for three types of load balancing models and obtained conclusions coinciding with the theoretical analysis. At last, we
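The balancing goal in the abstract above, cutting a 1-D array of per-cell costs so that two processors receive equal work, can be sketched with prefix sums. This is an illustrative sketch of the P = 2 objective only; the paper's iterative AW and MAW schemes are not reproduced here:

```python
from itertools import accumulate

def balance_two(costs):
    """Choose the cut index that splits a 1-D list of per-cell CPU costs
    between two processors so that the left partial sum is as close as
    possible to half the total cost."""
    prefix = list(accumulate(costs))
    half = prefix[-1] / 2
    # Candidate cuts leave at least one cell on each side.
    return min(range(1, len(costs)), key=lambda i: abs(prefix[i - 1] - half))
```

For example, `balance_two([1, 1, 1, 3])` returns 3: cells 0-2 (total cost 3) go to one processor and the single expensive cell (cost 3) to the other.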
Institute of Scientific and Technical Information of China (English)
黄思旷
2015-01-01
Linguistic D numbers, together with their operations, score function and corrected score function, are defined, and the linguistic D number prioritized weighted average (LD-PWA) operator is proposed. For multi-criteria decision-making problems in which the criteria have different priority levels and the criteria values are linguistic D numbers, a multi-criteria decision-making method based on the LD-PWA operator is proposed. In this method, the priority weights are obtained from the corrected score function values of the linguistic D numbers, the LD-PWA operator is used to compute the comprehensive criteria values, and the alternatives are finally ranked by the score function. Analysis of an example demonstrates the feasibility and effectiveness of the method.
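The prioritized weighted average underlying the LD-PWA operator can be illustrated with a generic Yager-style prioritized aggregation, using plain numbers in [0, 1] as stand-ins for the (corrected) scores of linguistic D numbers. This is a hedged sketch of the aggregation pattern only, not the paper's operator, which acts on linguistic D numbers directly:

```python
def prioritized_weighted_average(values, scores):
    """Prioritized weighted average: criterion i receives weight
    T_i = product of the scores of all strictly higher-priority
    criteria (T_0 = 1), normalised to sum to one, so a low score at a
    high-priority criterion damps every criterion below it."""
    t = [1.0]
    for s in scores[:-1]:
        t.append(t[-1] * s)
    total = sum(t)
    return sum(ti * v for ti, v in zip(t, values)) / total
```

With two criteria, a high-priority score of 0.5 gives the second criterion only half the weight of the first before normalisation.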
Energy Technology Data Exchange (ETDEWEB)
Monroy Anton, J. L.; Solar Tortosa, M.; Lopez Munoz, M.; Navarro Bergada, A.; Estornell Gualde, M. A.; Melchor Iniguez, M.
2013-07-01
Our objective was to evaluate the V20 parameter and the average dose for a single lung volume delineated on a CT study acquired during the patient's normal breathing, compared with those for a composite lung volume constructed from three CT studies in different phases of the respiratory cycle, and to check whether the differences are large enough to make a composite lung volume necessary for evaluating the dose-volume histogram. (Author)
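V20 is the standard dose-volume metric being compared: the percentage of the lung volume that receives at least 20 Gy. A minimal sketch of how it is read off a dose grid (array names and values are illustrative):

```python
import numpy as np

def v20(dose_gy, lung_mask):
    """Percentage of the masked lung volume receiving >= 20 Gy."""
    lung_doses = dose_gy[lung_mask]
    return 100.0 * np.count_nonzero(lung_doses >= 20.0) / lung_doses.size
```

Evaluating the same dose grid against a single-phase lung contour versus a composite contour changes `lung_mask`, and hence V20, which is the comparison the study performs.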
Institute of Scientific and Technical Information of China (English)
谭宝成; 李博
2015-01-01
While driving, a driverless vehicle must use a multi-sensor system to observe the surrounding road environment, but the data these sensors acquire can be overloaded, missing or imprecise, so data fusion technology is needed to process the received data. Based on the multi-sensor system of a driverless vehicle, this paper studies a weighted-average data fusion algorithm that meets the fusion-level requirements of the driverless vehicle's operating environment and shows high feasibility in practical data fusion processing.
Institute of Scientific and Technical Information of China (English)
彭会萍; 曹晓军; 杨永旭
2012-01-01
This paper introduces several important concepts, namely evidence distance, evidence support degree, evidence credibility and the decision distance measure. The belief function values of the evidence are rationalized, and multi-sensor information fusion is achieved with the Dempster-Shafer (D-S) evidence combination rules; on this basis, a weighted-average multi-sensor information fusion algorithm based on decision distance is proposed. A numerical example shows that the method can effectively resolve D-S evidence conflicts and ensure the accuracy of the fusion result.
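The weighted-average fusion step common to the two preceding records can be sketched with the classical inverse-variance weighting, where a more reliable sensor contributes more. This is an assumed, generic weighting; the paper above derives its weights from evidence distances and credibilities instead:

```python
def fuse(readings, variances):
    """Weighted-average sensor fusion: each reading is weighted by the
    inverse of its error variance, then the weights are normalised."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)
```

With equal variances this reduces to the plain mean; a sensor with three times the variance of another receives one third of its weight.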
Parameterized Traveling Salesman Problem: Beating the Average
Gutin, G.; Patel, V.
2016-01-01
In the traveling salesman problem (TSP), we are given a complete graph Kn together with an integer weighting w on the edges of Kn, and we are asked to find a Hamilton cycle of Kn of minimum weight. Let h(w) denote the average weight of a Hamilton cycle of Kn for the weighting w. Vizing in 1973 asked
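For K_n the quantity h(w) has a simple closed form: by symmetry every edge lies in the same fraction of Hamilton cycles, so h(w) = 2/(n-1) times the sum of all edge weights. The sketch below checks this against brute-force enumeration (feasible for small n only; the weight matrix is illustrative):

```python
from itertools import permutations

def average_hamilton_weight(w):
    """h(w): average weight over all Hamilton cycles of K_n, by brute
    force. Fixing vertex 0 removes rotations; each cycle is still seen
    in both orientations, which leaves the average unchanged."""
    n = len(w)
    weights = []
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        weights.append(sum(w[cycle[i]][cycle[(i + 1) % n]] for i in range(n)))
    return sum(weights) / len(weights)

def h_closed_form(w):
    """h(w) = 2/(n-1) * (sum of all edge weights of K_n)."""
    n = len(w)
    edge_sum = sum(w[i][j] for i in range(n) for j in range(i + 1, n))
    return 2.0 * edge_sum / (n - 1)
```

The closed form follows because each of the n(n-1)/2 edges appears in exactly (n-2)! of the (n-1)!/2 Hamilton cycles.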
Energy Technology Data Exchange (ETDEWEB)
Fazleev, M.P.; Chekhov, O.S.; Ermakov, E.A.
1985-06-20
This paper discusses the results of an investigation of the gas content averaged over the volume, the hydrodynamic regimes, and foaming in the K/sub 2/O-V/sub 2/O/sub 5/ melt plus gas system, which is used as a catalyst in several thermocatalytic processes. The experimental setup is described and a comparison of literature data on the gas content of different gas-liquid systems under comparable conditions is presented. The authors were able to determine the boundaries of the hydrodynamic modes in a bubbling reactor and derive equations for the calculation of the gas content. It was found that the gas content of the melt increased when V/sub 2/O/sub 5/ was reduced to V/sub 2/O/sub 4/ in the reaction portion of the reaction-regeneration cycle. Regeneration of the melt restores the gas content to its original level.
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
Directory of Open Access Journals (Sweden)
William A Copen
In the treatment of patients with suspected acute ischemic stroke, increasing evidence suggests the importance of measuring the volume of the irreversibly injured "ischemic core." The gold standard method for doing this in the clinical setting is diffusion-weighted magnetic resonance imaging (DWI), but many authors suggest that maps of regional cerebral blood volume (CBV) derived from computed tomography perfusion imaging (CTP) can substitute for DWI. We sought to determine whether DWI and CTP-derived CBV maps are equivalent in measuring core volume. 58 patients with suspected stroke underwent CTP and DWI within 6 hours of symptom onset. We measured low-CBV lesion volumes using three methods: "objective absolute," i.e. the volume of tissue with CBV below each of six published absolute thresholds (0.9-2.5 mL/100 g); "objective relative," whose six thresholds (51%-60%) were fractions of mean contralateral CBV; and "subjective," in which two radiologists (R1, R2) outlined lesions subjectively. We assessed the sensitivity and specificity of each method, threshold, and radiologist in detecting infarction, and the degree to which each over- or underestimated the DWI core volume. Additionally, in the subset of 32 patients for whom follow-up CT or MRI was available, we measured the proportion of CBV- or DWI-defined core lesions that exceeded the follow-up infarct volume, and the maximum amount by which this occurred. DWI was positive in 72% (42/58) of patients. CBV maps' sensitivity/specificity in identifying DWI-positive patients were 100%/0% for both objective methods with all thresholds, 43%/94% for R1, and 83%/44% for R2. Mean core overestimation was 156-699 mL for objective absolute thresholds, and 127-200 mL for objective relative thresholds. For R1 and R2, respectively, mean ± SD subjective overestimation was -11 ± 26 mL and -11 ± 23 mL, but subjective volumes differed from DWI volumes by up to 117 and 124 mL in individual patients. Inter-rater agreement
Hague, D. S.; Woodbury, N. W.
1975-01-01
The MARS system is a tool for rapid prediction of aircraft or engine characteristics based on correlation-regression analysis of past designs stored in its data bases. An example of output obtained from the MARS system is given, involving derivation of an expression for the gross weight of subsonic transport aircraft in terms of nine independent variables. The need for careful selection of correlation variables and for continual review of the resulting estimation equations is illustrated. For Vol. 1, see N76-10089.
Averaged Lemaître-Tolman-Bondi dynamics
Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried
2016-01-01
We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor, which represents a derivation of the simplest phenomenological solution of Buchert's equations, in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model, but it does not adequately describe our "real" Universe.
Directory of Open Access Journals (Sweden)
Matthew D Blackledge
We describe our semi-automatic segmentation of whole-body diffusion-weighted MRI (WBDWI) using a Markov random field (MRF) model to derive tumor total diffusion volume (tDV) and associated global apparent diffusion coefficient (gADC), and demonstrate the feasibility of using these indices for assessing tumor burden and response to treatment in patients with bone metastases. WBDWI was performed on eleven patients diagnosed with bone metastases from breast and prostate cancers before and after anti-cancer therapies. Semi-automatic segmentation incorporating an MRF model was performed in all patients below the C4 vertebra by an experienced radiologist with over eight years of clinical experience in body DWI. Changes in tDV and gADC distributions were compared with overall response determined by all imaging, tumor markers and clinical findings at serial follow-up. The segmentation technique was possible in all patients, although erroneous volumes of interest were generated in one patient because of poor fat suppression in the pelvis, requiring manual correction. Responding patients showed a larger increase in gADC (median change = +0.18, range = -0.07 to +0.78 × 10^-3 mm^2/s) after treatment compared to non-responding patients (median change = -0.02, range = -0.10 to +0.05 × 10^-3 mm^2/s; p = 0.05, Mann-Whitney test), whereas non-responding patients showed a significantly larger increase in tDV (median change = +26%, range = +3 to +284%) compared to responding patients (median change = -50%, range = -85 to +27%; p = 0.02, Mann-Whitney test). Semi-automatic segmentation of WBDWI is feasible for metastatic bone disease in this pilot cohort of 11 patients, and could be used to quantify tumor total diffusion volume and median global ADC for assessing response to treatment.
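Once the tumor volume is segmented, the two indices are straightforward to compute: tDV is the number of segmented voxels times the voxel volume, and gADC summarizes the ADC values inside the segmentation (the median is used in this sketch). The array names are hypothetical:

```python
import numpy as np

def tdv_and_gadc(adc_map, tumor_mask, voxel_ml):
    """Return (tDV in mL, median global ADC) for a segmented ADC map."""
    vals = adc_map[tumor_mask]
    return vals.size * voxel_ml, float(np.median(vals))
```

In the study, treatment response tends to raise gADC (more diffusion in treated tissue) while lowering tDV (less segmented tumor volume).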
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Energy Technology Data Exchange (ETDEWEB)
Carreira, M.
1965-07-01
As a working method for determination of changes in molecular mass that may occur by irradiation (pyrolytic-radiolytic decomposition) of polyphenyl reactor coolants, a cryoscopic technique has been developed which combines the basic simplicity of Beckmann's method with some experimental refinements taken from the equilibrium methods. A total of 18 runs were made on samples of naphthalene, biphenyl, and the commercial mixtures OM-2 (Progil) and Santowax-R (Monsanto), with an average deviation from the theoretical molecular mass of 0.6%. (Author) 7 refs.
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
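The barycenter approach that the article critiques can be sketched as follows: align quaternion signs (q and -q represent the same rotation), take the arithmetic mean, and renormalize. This ignores that unit quaternions live on a sphere, which is the article's point; the sketch below is the naive estimator, not the Riemannian mean:

```python
import numpy as np

def quaternion_barycenter(quats):
    """Naive rotation average: flip quaternions whose dot product with
    the first is negative (sign ambiguity), average component-wise, and
    project the result back onto the unit sphere."""
    quats = np.asarray(quats, dtype=float)
    ref = quats[0]
    aligned = np.where((quats @ ref)[:, None] < 0, -quats, quats)
    mean = aligned.mean(axis=0)
    return mean / np.linalg.norm(mean)
```

For nearby rotations this is a reasonable first-order approximation to the Riemannian mean, which is consistent with the article's observation that the barycenter approximates the metric-based estimate.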
Newall, Germán; Ruiz-Razura, Amado; Mentz, Henry A; Patronella, Christopher K; Ibarra, Francis R; Zarak, Alberto
2006-01-01
This retrospective study was designed to evaluate the efficacy of low-molecular-weight heparin (enoxaparin) as a prophylaxis for venous thromboembolism and deep venous thrombosis (DVT) in the management of large-volume liposuction, added body-contouring procedures, or both. The authors present an 18-month experience with the use of this therapy for 291 consecutive patients. All the patients fell into the categories of high risk and highest risk for the development of deep vein thrombosis, embolism, or both. Three patients experienced transient DVT-like symptoms and underwent a thorough workup by an independent highly specialized critical care medical team. The results were found ultimately to be inconclusive for DVT and pulmonary embolism. However, all the patients experienced a complete recovery. The results show a 0% incidence of DVT and pulmonary embolism among patients who received enoxaparin as prophylaxis. The medication did not precipitate major bleeding when administered 1 h after surgery. This study offers the first report that describes the use of enoxaparin in aesthetic surgery for high-risk patients. The authors feel the need to inform their colleagues of the benefits obtained over the past 18 months by incorporating this therapy in large-volume liposuction and extensive body-contouring procedures performed during the same operative session. This study was conducted by a highly experienced surgical team in a fully accredited outpatient facility with established protocols for handling these types of procedures on a daily basis. The authors are optimistic about the results, and the use of enoxaparin is now part of their postoperative regimen in high-risk aesthetic surgery cases.
Young, Vershawn Ashanti
2004-01-01
"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...
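The barycenter estimate the abstract refers to can be sketched as the renormalized arithmetic mean of unit quaternions. This is an illustrative sketch (the function name and the sign-alignment step are ours, not Gramkow's implementation); it only approximates the true Riemannian mean, but the approximation is close when the rotations are clustered:

```python
import math

def quat_barycenter(quats):
    """Arithmetic mean of unit quaternions (w, x, y, z), renormalized.

    Illustrative 'barycenter' estimate of the mean rotation; approximates
    the Riemannian mean for tightly clustered rotations.
    """
    # Resolve the sign ambiguity q ~ -q by aligning every quaternion
    # with the first one before summing.
    ref = quats[0]
    s = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        dot = sum(a * b for a, b in zip(ref, q))
        sign = 1.0 if dot >= 0 else -1.0
        s = [a + sign * b for a, b in zip(s, q)]
    norm = math.sqrt(sum(a * a for a in s))
    return tuple(a / norm for a in s)
```

For example, averaging (1,0,0,0) with (-1,0,0,0) — the same rotation with flipped sign — correctly returns the identity rotation rather than the zero vector.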
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
Institute of Scientific and Technical Information of China (English)
Yasukazu Nakanishi; Iwao Fukui; Kazunori Kihara; Hitoshi Masuda; Satoru Kawakami; Mizuaki Sakura; Yasuhisa Fujii; Kazutaka Saito; Fumitaka Koga; Masaya Ito; Junji Yonese
2012-01-01
Anthropometric measurements, e.g., body weight (BW) and body mass index (BMI), as well as serum prostate-specific antigen (PSA) and percent-free PSA (%fPSA), have been shown to have positive correlations with total prostate volume (TPV). We developed an equation and nomogram for estimating TPV, incorporating these predictors in men with benign prostatic hyperplasia (BPH). A total of 1852 men, including 1113 at Tokyo Medical and Dental University (TMDU) Hospital as a training set and 739 at Cancer Institute Hospital (CIH) as a validation set, with PSA levels of up to 20 ng/ml, who underwent extended prostate biopsy and were proved to have BPH, were enrolled in this study. We developed an equation for continuously coded TPV and a logistic regression-based nomogram for estimating a TPV greater than 40 ml. Predictive accuracy and performance characteristics were assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. The final linear regression model indicated age, PSA, %fPSA and BW as independent predictors of continuously coded TPV. For predictions in the training set, the multiple correlation coefficient increased from 0.38 for PSA alone to 0.60 in the final model. We developed a novel nomogram incorporating age, PSA, %fPSA and BW for estimating TPV greater than 40 ml. External validation confirmed its predictive accuracy, with an AUC value of 0.764. Calibration plots showed good agreement between predicted probability and observed proportion. In conclusion, TPV can be easily estimated using these four independent predictors.
Ensemble Averaged Gravity Theory
Khosravi, Nima
2016-01-01
We put forward the idea that all theoretically consistent models of gravity contribute to the observed gravity interaction. In this formulation each model comes with its own Euclidean path integral weight, where general relativity (GR) automatically has the maximum weight in high-curvature regions. We employ this idea in the framework of Lovelock models and show that in four dimensions the result is a specific form of $f(R,G)$ model. This specific $f(R,G)$ satisfies the stability conditions and has a self-accelerating solution. Our model is consistent with local tests of gravity since its behavior is the same as GR's in high-curvature regimes. In the low-curvature regime the gravitational force is weaker than in GR, which can be interpreted as the existence of a repulsive fifth force at very large scales. Interestingly, there is an intermediate-curvature regime where the gravitational force is stronger in our model than in GR. The different behavior of our model in comparison with GR in both low- and intermediate-curvature regimes ...
Negative Average Preference Utilitarianism
Directory of Open Access Journals (Sweden)
Roger Chao
2012-03-01
Full Text Available For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the “harmful” event that took place). To them it seems there is no escape: they have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current “positive” forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, it seems that a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model's results with examples and with simulations using the NS2 simulator.
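One plausible iteration of this kind can be sketched as weighted max-min fair sharing of the link: each flow gets capacity in proportion to its weight, and flows demanding less than their share return the surplus, which is redistributed until allocations settle. This is an illustrative sketch in the spirit of the abstract, not the authors' exact model:

```python
def wfq_average_bandwidth(link_speed, weights, demands):
    """Weighted max-min fair bandwidth shares on a single link.

    link_speed: link capacity; weights: per-flow WFQ weights;
    demands: per-flow offered load (same units as link_speed).
    Illustrative only -- not the paper's exact algorithm.
    """
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = float(link_speed)
    while active:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        # Flows whose demand fits inside their weighted share are capped
        # at their demand; the freed capacity is redistributed next round.
        satisfied = {i for i in active if demands[i] <= share[i]}
        if not satisfied:
            for i in active:
                alloc[i] = share[i]
            break
        for i in satisfied:
            alloc[i] = demands[i]
            capacity -= demands[i]
        active -= satisfied
    return alloc
```

For instance, on a 10 Mbit/s link with weights 1:1:2 and demands of 1, 10, and 10 Mbit/s, flow 0 is capped at its 1 Mbit/s demand and the remaining 9 Mbit/s splits 1:2 between the other flows.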
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Energy Technology Data Exchange (ETDEWEB)
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
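The running time averaging applied to the flow field variables during the DNS stage can be sketched with the standard incremental-mean update (an illustrative formula, not the authors' code):

```python
def running_average(avg, n, x):
    """Fold sample x (the n-th sample, 1-indexed) into the running mean.

    After n-1 samples the mean is `avg`; this returns the mean of n samples
    without storing the history -- the update used for on-the-fly time
    averaging during a simulation.
    """
    return avg + (x - avg) / n
```

Applied to the stream 2, 4, 6 the running mean passes through 2, 3, 4, matching the batch average at every step.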
INTEREST ON EQUITY AND THE WEIGHTED AVERAGE COST OF CAPITAL
LUCAS AUGUSTO DE MORAIS PILOTO
2008-01-01
Various methods are used to calculate the fair value of a company. Among the most widely used is the discounted cash flow method, in which the company's estimated cash flows are discounted to present value at a given rate to arrive at an estimate of the company's value. This rate is a weighted average of the cost of equity and the cost of debt, known by the acronym WACC. In Brazil, however, there is a peculiarity in the legi...
An Exponentially Weighted Moving Average Control Chart for Bernoulli Data
DEFF Research Database (Denmark)
Spliid, Henrik
2010-01-01
We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially...
Peakedness of Weighted Averages of Jointly Distributed Random Variables.
1985-10-01
Differentiation under the integral sign is shown to be permissible here, with the required integrability condition following from (2.1). [The remaining equations of this abstract are illegible in the source.]
Directory of Open Access Journals (Sweden)
M.C.R. Holanda
2005-08-01
Full Text Available The effects of season of birth (PE) and age of sow at birth (IMP) on litter size (TL); of season of birth, age of sow and litter size on average weight at birth (PMN); and of season of birth, age of sow, number of piglets born alive (NV) and percentage of live males in the litter (PERCM) on average weight at 21 days of age (PM21) of Large White piglets were evaluated by multiple regression, using data from 3259 piglets born from June/85 to June/96. Only the IMP effect on TL was significant. The average TL was 9.73±2.78, with larger litters observed for sows from 2.84 to 3.83 years of age. The average PMN and PM21 were 1.35kg±0.18 and 5.06kg±1.00, respectively. The IMP and TL effects on PMN were significant, with a decrease of 20g in piglet weight estimated for each additional piglet in the litter. Only the number of NV had a significant effect on PM21.
Dictionary Based Segmentation in Volumes
DEFF Research Database (Denmark)
Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley
2015-01-01
We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxelwise probabilities for a given...... label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged...... and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science – a phantom data set of a solid oxide fuel cell simulation for detecting...
Institute of Scientific and Technical Information of China (English)
WU An-Cai; XU Xin-Jian; WU Zhi-Xi; WANG Ying-Hai
2007-01-01
We investigate the dynamics of random walks on weighted networks, assuming that the edge weight and the node strength are used as local information by a random walker. Two kinds of walks, the weight-dependent walk and the strength-dependent walk, are studied. Exact expressions for the stationary distribution and the average return time are derived and confirmed by computer simulations. The distribution of average return time and the mean-square displacement show that a weight-dependent walker can arrive at a new territory more easily than a strength-dependent one.
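For the weight-dependent walk, the well-known closed forms for the stationary distribution and average return time can be sketched as follows (assuming transition probabilities proportional to edge weights, p_ij = w_ij / s_i with node strength s_i; the notation and function are ours):

```python
def stationary_and_return_times(W):
    """Weight-dependent random walk on an undirected weighted network.

    W: symmetric weight matrix as a list of lists, W[i][j] = w_ij >= 0.
    With transition probability p_ij = w_ij / s_i and strength
    s_i = sum_j w_ij, the stationary distribution is pi_i = s_i / sum_k s_k
    and the average return time to node i is 1 / pi_i.
    """
    s = [sum(row) for row in W]        # node strengths
    total = sum(s)
    pi = [si / total for si in s]      # stationary distribution
    ret = [total / si for si in s]     # average return times
    return pi, ret
```

On the triangle with weights w_01 = 2, w_02 = w_12 = 1, the strengths are (3, 3, 2), so the stationary probabilities are (3/8, 3/8, 1/4) and the walker returns to the weakly connected node on average every 4 steps.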
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.
2017-03-01
Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10-year risk (from below average up to the highest category at ≥8%) and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalized screening.
Directory of Open Access Journals (Sweden)
Yu-Mei Liao
2013-06-01
Full Text Available Peripheral apheresis has become a safe procedure to collect hematopoietic stem cells, even in pediatric patients and donors. However, the apheresis procedure for small and sick children is more complicated due to difficult venous access, relatively large extracorporeal volume, toxicity of citrate, and unstable hemostasis. We report a small and sick child with refractory medulloblastoma, impaired liver function, and coagulopathy after several major cycles of cisplatin-based chemotherapy. She successfully received large-volume leukapheresis for hematopoietic stem cell collection, although the patient experienced severe coagulopathy during the procedures. Health care providers should be alert to this potential risk.
Models for predicting objective function weights in prostate cancer IMRT
Energy Technology Data Exchange (ETDEWEB)
Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8 (Canada); Craig, Tim [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University Avenue, Toronto, Ontario M5T 2M9, Canada and Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Sharpe, Michael B. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University Avenue, Toronto, Ontario M5T 2M9 (Canada); Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada); Chan, Timothy C. Y. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada)
2015-04-15
Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. Under this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which is usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
Directory of Open Access Journals (Sweden)
Márcio Fragoso Vieira
2008-04-01
Full Text Available PURPOSE: to evaluate the accuracy of fetal upper arm volume, measured by three-dimensional ultrasound (3DUS), in the prediction of birth weight. METHODS: this prospective cross-sectional study involved 25 pregnant women without structural or chromosomal fetal anomalies. The two-dimensional parameters (biparietal diameter, abdominal circumference and femur length) and the 3DUS fetal upper arm volume were obtained within the 48 hours before delivery. The multiplanar method, using multiple sequential planes at 5.0-mm intervals, was used to calculate the fetal upper arm volume. Polynomial regressions were performed to determine the best equation for predicting fetal weight, and the accuracy of this new formula was compared with the two-dimensional formulas of Shepard and Hadlock. RESULTS: fetal upper arm volume was highly correlated with birth weight (r=0.83; p<0.05). Relative to the Hadlock formula, only the mean error was smaller, but not statistically significantly so (p>0.05). CONCLUSIONS: fetal upper arm volume measured by 3DUS showed accuracy similar to that of the two-dimensional formulas in predicting birth weight. Studies with larger samples are needed to confirm these findings.
Dyk, Pawel; Jiang, Naomi; Sun, Baozhou; DeWees, Todd A; Fowler, Kathryn J; Narra, Vamsi; Garcia-Ramirez, Jose L; Schwarz, Julie K; Grigsby, Perry W
2014-11-15
Magnetic resonance imaging/diffusion-weighted imaging (MRI/DWI)-guided high-dose-rate (HDR) brachytherapy and (18)F-fluorodeoxyglucose (FDG)-positron emission tomography/computed tomography (PET/CT)-guided intensity modulated radiation therapy (IMRT) for the definitive treatment of cervical cancer is a novel treatment technique. The purpose of this study was to report our analysis of dose-volume parameters predicting gross tumor volume (GTV) control. We analyzed the records of 134 patients with International Federation of Gynecology and Obstetrics stages IB1-IVB cervical cancer treated with combined MRI-guided HDR and IMRT from July 2009 to July 2011. IMRT was targeted to the metabolic tumor volume and lymph nodes by use of FDG-PET/CT simulation. The GTV for each HDR fraction was delineated by use of T2-weighted or apparent diffusion coefficient maps from diffusion-weighted sequences. The D100, D90, and Dmean delivered to the GTV from HDR and IMRT were summed to EQD2. One hundred twenty-five patients received all irradiation treatment as planned, and 9 did not complete treatment. All 134 patients are included in this analysis. Treatment failure in the cervix occurred in 24 patients (18.0%). Patients with cervix failures had a lower D100, D90, and Dmean than those who did not experience failure in the cervix. The respective doses to the GTV were 41, 58, and 136 Gy for failures compared with 67, 99, and 236 Gy for those who did not experience failure (P<.001). The D100, D90, and Dmean doses required for ≥90% local control were estimated to be 69, 98, and 260 Gy (P<.001). Total dose delivered to the GTV from combined MRI-guided HDR and PET/CT-guided IMRT is highly correlated with local tumor control. The findings can be directly applied in the clinic for dose adaptation to maximize local control. Copyright © 2014 Elsevier Inc. All rights reserved.
Target weight achievement and ultrafiltration rate thresholds: potential patient implications.
Flythe, Jennifer E; Assimon, Magdalene M; Overman, Robert A
2017-06-02
have unfavorable facility target weight measure scores. Without treatment time (TT) extension or interdialytic weight gain (IDWG) reduction, implementation of a UF rate threshold (13 mL/h/kg) led to an average theoretical 1-month, fluid-related weight gain of 1.4 ± 3.0 kg. Target weight achievement patterns vary across clinical subgroups. Implementation of a maximum UF rate threshold without adequate attention to extracellular volume status may lead to fluid-related weight gain.
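The ultrafiltration rate behind the 13 mL/h/kg threshold is conventionally the session fluid removal divided by treatment time and body weight. A minimal sketch (function name and example numbers are illustrative, not from the study):

```python
UF_THRESHOLD = 13.0  # mL/h/kg, the maximum rate discussed in the abstract

def uf_rate_ml_per_h_per_kg(fluid_removed_ml, treatment_time_h, weight_kg):
    """Ultrafiltration rate = session fluid removal normalized by
    treatment time and patient weight."""
    return fluid_removed_ml / (treatment_time_h * weight_kg)
```

For a 70 kg patient with 3 L of fluid removal, a 4-hour treatment gives about 10.7 mL/h/kg (under the threshold), while shortening the session to 3 hours pushes the rate to about 14.3 mL/h/kg (over it), which is why threshold compliance may force treatment-time extension or reduced fluid removal.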
Physical Theories with Average Symmetry
Alamino, Roberto C.
2013-01-01
This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violat...
Average Convexity in Communication Situations
Slikker, M.
1998-01-01
In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graph-restricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlying...
Sampling Based Average Classifier Fusion
Directory of Open Access Journals (Sweden)
Jian Hou
2014-01-01
fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison. Little has been done on exploring the potential of average fusion and proposing a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. As a result, we find that, by proper sampling of soft labels and classifiers, the average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
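The average-fusion baseline discussed here can be sketched as a per-class mean of classifier soft labels, with the fused prediction being its argmax (an illustrative sketch, not the paper's code):

```python
def average_fusion(soft_labels):
    """Average fusion of classifier soft labels for one sample.

    soft_labels: list over classifiers, each a list of per-class scores.
    Returns the per-class mean scores and the predicted class index.
    This is the plain baseline; the abstract's method instead samples
    which soft labels and classifiers enter the average.
    """
    n_cls = len(soft_labels)
    n_classes = len(soft_labels[0])
    fused = [sum(sl[c] for sl in soft_labels) / n_cls for c in range(n_classes)]
    return fused, max(range(n_classes), key=fused.__getitem__)
```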
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning
Energy Technology Data Exchange (ETDEWEB)
Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ontario M5S 3G8 (Canada); Chan, Timothy C. Y. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ontario M5S 3G8 (Canada); Techna Institute for the Advancement of Technology for Health, 124-100 College Street, Toronto, Ontario M5G 1P5 (Canada); Craig, Tim [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University Avenue, Toronto, Ontario M5T 2M9 (Canada); Department of Radiation Oncology, University of Toronto, 148-150 College Street, Toronto, Ontario M5S 3S2 (Canada); Sharpe, Michael B. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University Avenue, Toronto, Ontario M5T 2M9 (Canada); Department of Radiation Oncology, University of Toronto, 148-150 College Street, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, 124-100 College Street Toronto, Ontario M5G 1P5 (Canada)
2013-12-15
Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model-averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; therefore, averaging them makes no sense. The associated sums of AIC model weights recommended to assess the relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
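For readers unfamiliar with the practice being criticized, this is how AIC-weight model averaging of a coefficient is usually computed (AIC values and per-model estimates below are assumed, illustrative numbers; the abstract's point is that this average is only meaningful when the estimates share a common scale across models):

```python
import math

# Assumed AIC values for three candidate models (illustrative only).
aic = [100.0, 101.2, 104.5]
delta = [a - min(aic) for a in aic]          # AIC differences from the best model
w = [math.exp(-d / 2) for d in delta]
total = sum(w)
w = [wi / total for wi in w]                 # Akaike weights, summing to 1

# Model-averaged estimate of a coefficient shared by all three models
# (assumed per-model values); sensible only with commensurate scales.
beta = [0.52, 0.48, 0.61]
beta_avg = sum(wi * bi for wi, bi in zip(w, beta))
```

The best-supported model (lowest AIC) receives the largest weight, and the averaged coefficient is pulled toward its estimate.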
Energy Technology Data Exchange (ETDEWEB)
Perez-Candela, V.; Busto, C.; Avila, R.; Marrero, M. G.; Liminana, J. M.; Orengo, J. C. [Hospital Universitario Maternoinfantil de Canarias. Las Palmas de Gran Canaria (Spain)
2001-07-01
A prospective study was carried out to relate the anthropometric parameters of height, weight, and body mass index, as well as age, with the mammographic patterns obtained for the patients, and to obtain an anthropometric profile. The study was performed in 1,000 women who underwent mammography in cranial-caudal and medial lateral oblique projections of both breasts, independently of whether they were screened or diagnosed. Prior to the mammography, weight and height were measured by the same technicians, and the patients were asked their bra size to deduce breast volume. From the weight and height, the body mass index of Quetelet was calculated (weight [kg]/height [m]{sup 2}). After reading the mammography, each patient was assigned to one of the four mammographic patterns of the BIRADS (Breast Imaging Reporting and Data System) established by the ACR (American College of Radiology): type I (fat), type II (disperse fibroglandular densities), type III (fibroglandular densities distributed heterogeneously), type IV (dense). The results were entered into a computer database and the SPSS 8.0 statistical program was applied, using the statistical model of multivariate logistic regression. In women under 40 years with normal weight, the dense breast pattern accounted for 67.8% and, as the body mass index (BMI) increased, this pattern decreased to 25.1%; the fat pattern was 20% and, as the BMI increased, this increased to 80%. In 40-60 year old women with normal weight, the dense pattern accounted for 44% and decreased to 20.9% in grade II, III and IV obesity; the fat pattern was 11.1% and increased to 53.7% in grade II, III and IV obesity. In women over 60 with normal weight, the dense pattern accounted for 19.3% and decreased to 13% in grade III obesity; the fat pattern was 5.3% and increased to 20.2% in grade III obesity. As age increases, the probability of presenting a mammographic pattern with a fat
Energy Technology Data Exchange (ETDEWEB)
Wilde, E.W.; Kastner, J.; Murphy, C.; Santo Domingo, J.
1997-05-28
SRTC and a panel of off-site experts previously determined that composting was the most attractive alternative for reducing the volume and weight of slightly radioactive biomass. The SRTC proposed scope of work for Subtask 2 of TTP# SR17SS53 and TTP# SR18SS41 involves bench-scale studies to assess the rates and efficiencies of various composting schemes for volume and weight reduction of leaf and stalk biomass (SB). Ultimately, the data will be used to design a composting process for biomass proposed by MSE for phytoremediation studies at SRS. This could drastically reduce costs for transporting and disposing of contaminated biomass resulting from a future major phytoremediation effort for soil clean-up at the site. The composting studies at SRTC include collaboration with personnel from the University of Georgia, who will conduct chemical analyses of the plant material after harvest, pre-treatment, and composting for specific time periods. Parameters to be measured will include lignin, hemicellulose, cellulose, carbon, and nitrogen. The overall objective of this project is to identify or develop: (1) an inexpensive source of inoculum (consisting of nutrients and/or microorganisms) capable of significantly enhancing biomass degradation, (2) an optimum range of operating parameters for the composting process, and (3) a process design for the solid-state degradation of lignocellulosic biomass contaminated with radionuclides that is superior to existing alternatives for dealing with such waste.
Physical Theories with Average Symmetry
Alamino, Roberto C
2013-01-01
This Letter probes the existence of physical laws that are invariant only on average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise, and average symmetry is introduced by considering functions which are invariant only on average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's theorem with dissipative currents. The relation of this to possible violations of physical symmetries, for instance Lorentz invariance in some quantum gravity theories, is briefly discussed.
Institute of Scientific and Technical Information of China (English)
张琦; 丛斌; 钱海涛
2012-01-01
In this experiment, Asian corn borer larvae were fed artificial diets containing seed of the stacked Cry1Ab and Cry1F transgenic corn lines 335YH and 696YH, and the effects on larval survival and average weight were studied. At the 25% inclusion rate, larval survival was below 38% and average weight was below 45 mg, indicating that seed of the stacked Cry1Ab and Cry1F transgenic corn can inhibit the growth and survival of Asian corn borer larvae. For 335YH, larval mortality did not differ significantly between the 25% and 50% inclusion rates, but the 50% rate inhibited larval growth more strongly. For 696YH, both lethality and growth inhibition were strongly dose dependent.
Annesi, James J
2013-09-01
Research suggests that obesity, physical inactivity, anxiety (psychological tension), and a poor diet are associated with high blood pressure (BP). Although medication is the treatment of choice, behavioral methods might also improve BP in individuals with both prehypertension and hypertension. Severely obese women from the southeast USA (N = 155; M(age) = 45 years; M(body mass index) (BMI) = 41 kg/m(2)) who fulfilled criteria for either prehypertension (n = 96) or hypertension (n = 59) volunteered for a Young Men's Christian Association-based exercise and nutrition support treatment that also included instruction in stress-management methods. Significant (p values of ≤0.001) within-group improvements over 26 weeks in tension, overall mood, exercise volume, fruit and vegetable consumption, BMI, and systolic and diastolic BP were found. Changes in tension and overall mood were significantly associated with the exercise, fruit and vegetable intake, BMI, and systolic and diastolic BP improvements. Multiple regression analyses, separately entering changes in tension and overall mood along with changes in volume of exercise, fruit and vegetable intake, and BMI, explained 19% and 20% of the variance in systolic BP, respectively (p values of <0.001), and 8% of the variance each (p values of ≤0.02) in diastolic BP. In each multiple regression equation, improvements in the psychological factors of tension and overall mood demonstrated the greatest independent contribution to the variance accounted for in BP improvements. The ability of nonpharmaceutical, behavioral methods to improve BP in women with prehypertension and hypertension was suggested, with changes in the psychological factors of tension and overall mood appearing to be especially salient. Practical applications of the findings were suggested.
On the average sensitivity of laced Boolean functions
Li, Jiyou
2011-01-01
In this paper we obtain the average sensitivity of the laced Boolean functions. This confirms a conjecture of Shparlinski. We also compute the weights of the laced Boolean functions and show that they are almost balanced.
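The average sensitivity mentioned above can be computed by brute force for small n; the following is a generic sketch of the definition (not the paper's closed-form results for laced functions), illustrated with the parity function:

```python
from itertools import product

def average_sensitivity(f, n):
    # Mean, over all 2^n inputs x, of the number of coordinates i
    # whose flip changes the value f(x).
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(tuple(y)) != f(x):
                total += 1
    return total / 2 ** n

parity = lambda x: sum(x) % 2  # every flip changes parity
```

Parity on n variables attains the maximum average sensitivity n; 3-bit majority, by contrast, has average sensitivity 1.5.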
Quantized average consensus with delay
Jafarian, Matin; De Persis, Claudio
2012-01-01
The average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues in large-scale networks is the cost of co
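A minimal sketch of undelayed, unquantized average consensus on a four-agent ring (a toy topology and update rule of my choosing, not the paper's delayed protocol):

```python
n = 4
x = [1.0, 3.0, 5.0, 7.0]  # initial agent states; the network average is 4.0
for _ in range(200):
    # Each agent replaces its state with the mean of itself and its two
    # ring neighbours. The update matrix is doubly stochastic, so the
    # average is preserved at every step and all states converge to it.
    x = [(x[i - 1] + x[i] + x[(i + 1) % n]) / 3 for i in range(n)]
```

After enough iterations every entry of x is (numerically) the initial average, 4.0.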
Energy Technology Data Exchange (ETDEWEB)
Johnson, E.W.
1988-10-01
The orbiter and probe portions of the National Aeronautics and Space Administration (NASA) Galileo spacecraft contain components which require auxiliary heat during the mission. To meet these needs, the Department of Energy's (DOE's) Office of Special Applications (OSA) has sponsored the design, fabrication, and testing of a one-watt encapsulated plutonium dioxide-fueled thermal heater named the Light-Weight Radioisotope Heater Unit (LWRHU). This report, prepared by Monsanto Research Corporation (MRC), addresses the radiological risks which might be encountered by people both at the launch area and worldwide should postulated mission failures or malfunctions occur, resulting in the release of the LWRHUs to the environment. Included are data from the design, mission descriptions, postulated accidents with their consequences, test data, and the derived source terms and personnel exposures for the various events. 11 refs., 44 figs., 11 tabs.
Directory of Open Access Journals (Sweden)
E. Nogueira
2006-08-01
The effect of creep feeding during the suckling period on average daily gain (ADG) and weaning weight (WW) of calves and on pregnancy rate of dams was evaluated in Nelore cattle on Brachiaria brizantha pasture. In a completely randomized design, 102 primiparous Nelore cows in low body condition at the beginning of the breeding season and their calves were divided into two treatments: T1 (n = 52), the control group, and T2 (n = 50), in which calves were provided a supplemental diet containing 20% CP and 75% TDN from 92 days after birth until weaning. Average daily consumption of the creep ration was 0.61 kg per calf. WW averaged 155.10 ± 2.72 kg and 163.80 ± 2.53 kg for T1 and T2 calves, respectively (P < 0.05), while pregnancy rate did not differ between treatments (P > 0.05). Thus, creep feeding can improve WW and preweaning ADG of Nelore calves without affecting the pregnancy rate of primiparous cows in low body condition at the start of the mating season.
The Optimal Selection for Restricted Linear Models with Average Estimator
Directory of Open Access Journals (Sweden)
Qichang Xie
2014-01-01
The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
Energy Technology Data Exchange (ETDEWEB)
Dyk, Pawel; Jiang, Naomi; Sun, Baozhou; DeWees, Todd A. [Department of Radiation Oncology, Washington University School of Medicine, St Louis, Missouri (United States); Fowler, Kathryn J.; Narra, Vamsi [Department of Diagnostic Radiology, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri (United States); Garcia-Ramirez, Jose L.; Schwarz, Julie K. [Department of Radiation Oncology, Washington University School of Medicine, St Louis, Missouri (United States); Grigsby, Perry W., E-mail: pgrigsby@wustl.edu [Department of Radiation Oncology, Washington University School of Medicine, St Louis, Missouri (United States); Division of Nuclear Medicine, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri (United States); Division of Gynecologic Oncology, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri (United States); Alvin J. Siteman Cancer Center, Washington University School of Medicine, St Louis, Missouri (United States)
2014-11-15
Purpose: Magnetic resonance imaging/diffusion weighted-imaging (MRI/DWI)-guided high-dose-rate (HDR) brachytherapy and {sup 18}F-fluorodeoxyglucose (FDG) — positron emission tomography/computed tomography (PET/CT)-guided intensity modulated radiation therapy (IMRT) for the definitive treatment of cervical cancer is a novel treatment technique. The purpose of this study was to report our analysis of dose-volume parameters predicting gross tumor volume (GTV) control. Methods and Materials: We analyzed the records of 134 patients with International Federation of Gynecology and Obstetrics stages IB1-IVB cervical cancer treated with combined MRI-guided HDR and IMRT from July 2009 to July 2011. IMRT was targeted to the metabolic tumor volume and lymph nodes by use of FDG-PET/CT simulation. The GTV for each HDR fraction was delineated by use of T2-weighted or apparent diffusion coefficient maps from diffusion-weighted sequences. The D100, D90, and Dmean delivered to the GTV from HDR and IMRT were summed to EQD2. Results: One hundred twenty-five patients received all irradiation treatment as planned, and 9 did not complete treatment. All 134 patients are included in this analysis. Treatment failure in the cervix occurred in 24 patients (18.0%). Patients with cervix failures had a lower D100, D90, and Dmean than those who did not experience failure in the cervix. The respective doses to the GTV were 41, 58, and 136 Gy for failures compared with 67, 99, and 236 Gy for those who did not experience failure (P<.001). Probit analysis estimated the minimum D100, D90, and Dmean doses required for ≥90% local control to be 69, 98, and 260 Gy (P<.001). Conclusions: Total dose delivered to the GTV from combined MRI-guided HDR and PET/CT-guided IMRT is highly correlated with local tumor control. The findings can be directly applied in the clinic for dose adaptation to maximize local control.
Directory of Open Access Journals (Sweden)
Josy Davidson
2008-03-01
OBJECTIVE: To verify whether respiratory rate (RR), tidal volume (TV), and the RR/TV ratio could predict extubation failure in very low birth weight infants submitted to mechanical ventilation. METHODS: This prospective observational study enrolled newborn infants with gestational age <37 weeks and birth weight <1,500 g, mechanically ventilated from birth during 48 hours to 30 days and thought to be ready for extubation. As soon as the physicians decided for extubation, the neonates received endotracheal continuous positive airway pressure (CPAP) for 10 minutes while spontaneous RR, TV, and RR/TV were measured using a fixed-orifice pneumotachograph positioned between the endotracheal tube and the ventilator circuit. Thereafter, the neonates were extubated to nasal CPAP. Extubation failure was defined as the need for reintubation within 48 hours. RESULTS: Of the 35 studied infants, 20 (57%) were successfully extubated and 15 (43%) required reintubation. RR and RR/TV before extubation had a trend to be higher in unsuccessfully extubated infants. TV was similar in both groups. Sensitivity and specificity of these parameters as predictors of extubation failure were 50 and 67%, respectively, for RR, 40 and 67% for TV, and 40 and 73% for RR/TV. CONCLUSIONS: RR, TV, and RR/TV showed low sensitivity and specificity to predict extubation failure in mechanically ventilated very low birth weight infants.
Directory of Open Access Journals (Sweden)
D. D'avila Balbé
2007-02-01
Direct and maternal heritability coefficients were estimated and genetic and phenotypic trends were predicted for average weight gain from weaning to 550 days of age (AWG) using records from 33,267 animals of a multi-breed Angus-Nellore population, sired by 525 bulls and raised in 37 herds in several regions of Brazil from 1987 to 2001. MTDFREML was used to estimate the (co)variance components needed to obtain the direct and maternal heritability coefficients and to predict breeding values. The animal model included as fixed effects the genetic group of sire, dam, and animal and the post-weaning year/station/herd contemporary group, with unadjusted weaning age as a covariate, and as random effects the direct additive genetic, maternal, and residual effects. The observed mean AWG for the population was 384.22 g; 1999 presented the highest value (484.04 g) and 1992 the lowest (299.42 g). The direct heritability was 0.30 ± 0.11, the maternal heritability was 0.29 ± 0.07, and the average breeding value was -0.827 g. The estimated genetic trend for AWG was -0.029 g/year (P<0.08) and the phenotypic trend was 5.68 g/year (P<0.05), indicating phenotypic progress for average weight gain from weaning to 550 days of age.
Gaussian moving averages and semimartingales
DEFF Research Database (Denmark)
Basse-O'Connor, Andreas
2008-01-01
In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results are constructive, meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...
Energy Technology Data Exchange (ETDEWEB)
Gosling, O., E-mail: Oliver.gosling@pms.ac.u [Plymouth Hospitals NHS Trust, Derriford Hospital, Plymouth, Devon (United Kingdom); Loader, R.; Venables, P.; Rowles, N.; Morgan-Hughes, G. [Plymouth Hospitals NHS Trust, Derriford Hospital, Plymouth, Devon (United Kingdom); Roobottom, C. [Peninsula College of Medicine and Dentistry, University of Plymouth, Devon (United Kingdom)
2010-12-15
Aim: To calculate the effective dose from cardiac multidetector computed tomography (MDCT) using a computer-based model utilizing the latest International Commission on Radiation Protection (ICRP) 103 tissue-weighting factors (2007), to compare this dose with those calculated with previously published chest conversion factors and to produce a conversion factor specific for cardiac MDCT. Materials and methods: An observational study of 152 patients attending for cardiac MDCT as part of their usual clinical care in a university teaching hospital. The dose for each examination was calculated using the computer-based anthropomorphic ImPACT model (the imaging performance assessment of CT scanners) and this was compared with the dose derived from the dose-length product (DLP) and a chest conversion factor. Results: The median effective dose calculated using the ImPACT calculator (4.5 mSv) was significantly higher than the doses calculated with the chest conversion factors (2.2-3 mSv). Conclusion: The use of chest conversion factors significantly underestimates the effective dose when compared to the dose calculated using the ImPACT calculator. A conversion factor of 0.028 would give a better estimation of the effective dose from prospectively gated cardiac MDCT.
Ackerman, Margareta; Branzei, Simina; Loker, David
2011-01-01
In this paper we investigate clustering in the weighted setting, in which every data point is assigned a real valued weight. We conduct a theoretical analysis on the influence of weighted data on standard clustering algorithms in each of the partitional and hierarchical settings, characterising the precise conditions under which such algorithms react to weights, and classifying clustering methods into three broad categories: weight-responsive, weight-considering, and weight-robust. Our analysis raises several interesting questions and can be directly mapped to the classical unweighted setting.
Averaged Extended Tree Augmented Naive Classifier
Directory of Open Access Journals (Sweden)
Aaron Meehan
2015-07-01
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
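The note's central observation can be sketched with illustrative data: regressing y on a constant yields the sample mean as the fitted intercept, so regressions on transformed data recover the other classical averages (the function and numbers below are my own, not the note's examples):

```python
import math

def intercept_only_fit(y, w=None):
    # (Weighted) least squares of y on a constant alone: the fitted
    # intercept is exactly the (weighted) arithmetic mean of y.
    w = w if w is not None else [1.0] * len(y)
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

y = [2.0, 8.0]
arithmetic = intercept_only_fit(y)                                  # 5.0
geometric = math.exp(intercept_only_fit([math.log(v) for v in y]))  # 4.0
harmonic = 1 / intercept_only_fit([1 / v for v in y])               # 3.2
```

Supplying nonuniform w gives the weighted arithmetic mean in the same framework.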
Jerome, N. P.; d'Arcy, J. A.; Feiweier, T.; Koh, D.-M.; Leach, M. O.; Collins, D. J.; Orton, M. R.
2016-12-01
The bi-exponential intravoxel-incoherent-motion (IVIM) model for diffusion-weighted MRI (DWI) fails to account for differential T2s in the model compartments, resulting in overestimation of the pseudodiffusion fraction f. An extended model, T2-IVIM, allows removal of the confounding echo-time (TE) dependence of f, and provides direct compartment T2 estimates. Two consented healthy volunteer cohorts (n = 5, 6) underwent DWI comprising multiple TE/b-value combinations (Protocol 1: TE = 62-102 ms, b = 0-250 s/mm², 30 combinations. Protocol 2: 8 b-values 0-800 s/mm² at TE = 62 ms, with 3 additional b-values 0-50 s/mm² at TE = 80, 100 ms, scanned twice). Data from liver ROIs were fitted with IVIM at individual TEs, and with the T2-IVIM model using all data. Repeat-measures coefficients of variation were assessed for Protocol 2. Conventional IVIM modelling at individual TEs (Protocol 1) demonstrated apparent f increasing with longer TE: 22.4 ± 7% (TE = 62 ms) to 30.7 ± 11% (TE = 102 ms). T2-IVIM model fitting accounted for all data variation. Fitting of Protocol 2 data using T2-IVIM yielded reduced f estimates (IVIM: 27.9 ± 6%, T2-IVIM: 18.3 ± 7%), as well as T2 = 42.1 ± 7 ms and 77.6 ± 30 ms for the true and pseudodiffusion compartments, respectively. A reduced Protocol 2 dataset yielded comparable results in a clinical time frame (11 min). The confounding dependence of IVIM f on TE can be accounted for using additional b/TE images and the extended T2-IVIM model.
Institute of Scientific and Technical Information of China (English)
常志远; 孙金生
2016-01-01
The statistical properties of the adaptive exponentially weighted moving average (AEWMA) control chart, which was proposed to deal with the inertia problem of the EWMA control chart, have been thoroughly investigated by many authors. However, results on the economic properties of the AEWMA control chart have not previously been published. To address this, an economic-statistical design model based on the Taguchi loss function is proposed for the AEWMA control chart. An optimization algorithm based on a range of shifts is developed for the economic-statistical design of the AEWMA chart, and charts designed with it are compared with EWMA charts optimized for a fixed shift. The comparison shows that the designed AEWMA chart retains its ability to deal with the inertia problem of the EWMA chart, and its economic properties also outperform those of the EWMA chart. Finally, a sensitivity analysis of the economic-statistical design of the AEWMA chart is performed, and the relationships between parameter changes and loss, average run length, and the optimal parameter combination are summarized.
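For context, the plain EWMA statistic that the AEWMA chart adapts can be sketched as follows (smoothing constant, target, and data are assumed toy values; an AEWMA chart would instead let the weight grow with the size of the forecast error, reducing the inertia shown here):

```python
lam = 0.2   # fixed EWMA smoothing constant (assumed)
z = 0.0     # chart statistic, started at the in-control target of 0
stats = []
for x in [0.1, -0.3, 0.2, 2.5, 2.7]:   # toy observations; mean shift at t = 4
    # Recursive EWMA update: z_t = lambda * x_t + (1 - lambda) * z_{t-1}.
    z = lam * x + (1 - lam) * z
    stats.append(z)
```

Because lam is small and fixed, the statistic responds to the shift only gradually; this sluggishness is the inertia problem the adaptive chart addresses.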
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence that vocal attractiveness increases by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.
... obese. Achieving a healthy weight can help you control your cholesterol, blood pressure and blood sugar. It ... use more calories than you eat. A weight-control strategy might include Choosing low-fat, low-calorie ...
Averaged Electroencephalic Audiometry in Infants
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...
DEFF Research Database (Denmark)
Ackerman, Margareta; Ben-David, Shai; Branzei, Simina
2012-01-01
We investigate a natural generalization of the classical clustering problem, considering clustering tasks in which different instances may have different weights. We conduct the first extensive theoretical analysis of the influence of weighted data on standard clustering algorithms in both the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify...
... baby, taken just after he or she is born. A low birth weight is less than 5.5 pounds. A high ... weight is more than 8.8 pounds. A low birth weight baby can be born too small, too early (premature), or both. This ...
Energy Technology Data Exchange (ETDEWEB)
Boutilier, J; Chan, T; Lee, T [University of Toronto, Toronto, Ontario (Canada); Craig, T; Sharpe, M [University of Toronto, Toronto, Ontario (Canada); The Princess Margaret Cancer Centre - UHN, Toronto, ON (Canada)
2014-06-15
Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulated radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights compared to the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows the treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.
Weight loss, weight maintenance, and adaptive thermogenesis.
Camps, Stefan G J A; Verhoef, Sanne P M; Westerterp, Klaas R
2013-05-01
Diet-induced weight loss is accompanied by adaptive thermogenesis, ie, a disproportional or greater than expected reduction of resting metabolic rate (RMR). The aim of this study was to investigate whether adaptive thermogenesis is sustained during weight maintenance after weight loss. Subjects were 22 men and 69 women [mean ± SD age: 40 ± 9 y; body mass index (BMI; in kg/m²): 31.9 ± 3.0]. They followed a very-low-energy diet for 8 wk, followed by a 44-wk period of weight maintenance. Body composition was assessed with a 3-compartment model based on body weight, total body water (deuterium dilution), and body volume. RMR was measured (RMRm) with a ventilated hood. In addition, RMR was predicted (RMRp) on the basis of the measured body composition: RMRp (MJ/d) = 0.024 × fat mass (kg) + 0.102 × fat-free mass (kg) + 0.85. Measurements took place before the diet and 8, 20, and 52 wk after the start of the diet. The ratio of RMRm to RMRp decreased from 1.004 ± 0.077 before the diet to 0.963 ± 0.073 after the diet and remained low after 20 wk (0.983 ± 0.063); the decrease was related to weight loss after 8 wk. Weight loss results in adaptive thermogenesis, and there is no indication of a change in adaptive thermogenesis up to 1 y, when weight loss is maintained. This trial was registered at clinicaltrials.gov as NCT01015508.
ORDERED WEIGHTED DISTANCE MEASURE
Institute of Scientific and Technical Information of China (English)
Zeshui XU; Jian CHEN
2008-01-01
The aim of this paper is to develop an ordered weighted distance (OWD) measure, which is the generalization of some widely used distance measures, including the normalized Hamming distance, the normalized Euclidean distance, the normalized geometric distance, the max distance, the median distance, and the min distance. Moreover, the ordered weighted averaging operator, the generalized ordered weighted aggregation operator, the ordered weighted geometric operator, the averaging operator, the geometric mean operator, the ordered weighted square root operator, the square root operator, the max operator, the median operator, and the min operator are also special cases of the OWD measure. Some methods depending on the input arguments are given to determine the weights associated with the OWD measure. The prominent characteristic of the OWD measure is that it can relieve (or intensify) the influence of unduly large or unduly small deviations on the aggregation results by assigning them low (or high) weights. This desirable characteristic makes the OWD measure very suitable for use in many practical fields, including group decision making, medical diagnosis, data mining, and pattern recognition. Finally, based on the OWD measure, we develop a group decision making approach and illustrate it with a numerical example.
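The unifying idea behind the OWD measure can be sketched compactly: sort the componentwise deviations, then take a weighted power mean of the sorted values. This is a minimal sketch under the assumption of an Xu-and-Chen-style definition (descending ordering, exponent λ); the function name and parameters are illustrative, not the paper's notation.

```python
import numpy as np

def owd(a, b, weights, lam=1.0):
    """Ordered weighted distance sketch: sort the individual deviations
    |a_i - b_i| in descending order, then take a weighted power mean
    with exponent lam. Special cases fall out of the weight vector:
    equal weights 1/n with lam=1 give the normalized Hamming distance,
    weight (1, 0, ..., 0) gives the max distance, (0, ..., 0, 1) the min."""
    d = np.sort(np.abs(np.asarray(a, float) - np.asarray(b, float)))[::-1]
    w = np.asarray(weights, float)
    return float((w @ d**lam) ** (1.0 / lam))
```

For a = (1, 2, 3) and b = (2, 4, 3), equal weights recover the normalized Hamming distance 1.0, while the weight vector (1, 0, 0) recovers the max distance 2.0.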
High average power supercontinuum sources
Indian Academy of Sciences (India)
J C Travers
2010-11-01
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous-wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.
Dependability in Aggregation by Averaging
Jesus, Paulo; Almeida, Paulo Sérgio
2010-01-01
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
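The iterative-averaging class of algorithms discussed above can be sketched in a few lines: nodes repeatedly average with neighbours, and because each exchange conserves the network-wide sum, every node converges to the global mean. This is a generic pairwise-averaging illustration (not the specific protocols analysed in the paper); the ring topology and round count are assumptions for the example. The dependability issue the abstract raises is visible in the invariant: if a message carrying "mass" is lost, the conserved sum, and hence the computed average, is corrupted.

```python
import random

def gossip_average(values, edges, rounds=5000, seed=0):
    """Pairwise iterative averaging: at each step a random edge (i, j)
    is chosen and both endpoints replace their value with the midpoint.
    The sum (hence the mean) is conserved, so all nodes converge to the
    global average -- the very property that message loss can break."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        m = (x[i] + x[j]) / 2.0
        x[i] = x[j] = m
    return x

# Hypothetical 4-node ring; every node ends up near the mean (6.0).
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
result = gossip_average([0.0, 4.0, 8.0, 12.0], ring)
```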
Measuring Complexity through Average Symmetry
Alamino, Roberto C.
2015-01-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...
Directory of Open Access Journals (Sweden)
Daniel Albiero
2012-03-01
Full Text Available Conceitos de qualidade cada vez mais se tornam essenciais para a sobrevivência da empresa agrícola, pois a importância do aprimoramento das operações agrícolas se faz necessária para a obtenção de resultados viáveis economicamente, ambientamente e socialmente. Uma das dimensões da qualidade é conseguir de conformidade, ou seja, a garantia de execução exata do que foi planejado para atender aos requisitos dos clientes em relação a um determinado produto ou serviço. Os objetivos deste trabalho são avaliar a distribuição longitudinal entre sementes de uma semeadora de anel interno rotativo, e propor a utilização da metodologia estatística da Média Móvel Exponencialmente Ponderada (MMEP como alternativa para o controle de qualidade da semeadura, quando não há normalidade da distribuição dos dados. Os resultados demonstraram que a MMEP é adequada para a avaliação da qualidade da distribuição longitudinal de sementes, pois concordou com os dados apresentados na estatística descritiva, o que lhe credencia para avaliação de distribuições não normais.Quality concepts are essentials for survivor of agricultural companies, therefore, the importancy of improvement of agricultural process is necessary for to get results economically, environmentally and socially viables. One quality dimension is to get a compliance quality, ie, ensure the exact execution than was planned. The subject of this paper is evaluable at longitudinal distribution between seed of a internal ring seeder. The subject of this paper is to evaluate at longitudinal distribution between seed distributed for a internal ring seeder and to propose the use of statistical methodology exponentially weighted moving average (MMEP like alternative for the quality control of seeders, when there is not normality in data. The results showed that the MMEP is adequate for quality evaluation of longitudinal distribution between seeds, as agreed with the data of
Hein, M.; Wieneke, B.; Seemann, R.
2013-01-01
Micro-PIV (μPIV) uses volume-illumination and imaging of fluorescent tracer particles through a single microscope objective. Displacement fields measured by image correlation depend on all imaged particles, including defocused particles. The measured in-plane displacement is a weighted average of th
Ensemble average theory of gravity
Khosravi, Nima
2016-12-01
We put forward the idea that all the theoretically consistent models of gravity have contributions to the observed gravity interaction. In this formulation, each model comes with its own Euclidean path-integral weight, where general relativity (GR) automatically has the maximum weight in high-curvature regions. We employ this idea in the framework of Lovelock models and show that in four dimensions the result is a specific form of the f(R, G) model. This specific f(R, G) satisfies the stability conditions and possesses self-accelerating solutions. Our model is consistent with the local tests of gravity since its behavior is the same as in GR for the high-curvature regime. In the low-curvature regime the gravitational force is weaker than in GR, which can be interpreted as the existence of a repulsive fifth force at very large scales. Interestingly, there is an intermediate-curvature regime where the gravitational force is stronger in our model compared to GR. The different behavior of our model in comparison with GR in both the low- and intermediate-curvature regimes makes it observationally distinguishable from ΛCDM.
Mirror averaging with sparsity priors
Dalalyan, Arnak
2010-01-01
We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure, that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.
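The core mechanism of aggregation by exponential weighting can be sketched as follows. This is a simplified batch version (weights proportional to exp of minus the empirical risk), not the full sequential mirror averaging procedure of the paper, which takes a Cesàro average of such posteriors over the sample; the temperature parameter beta and the squared-error risk are illustrative assumptions.

```python
import numpy as np

def exponential_weights(preds, y, beta=1.0):
    """Exponentially weighted aggregate over a finite dictionary:
    element j (row j of `preds`, its predictions on the sample) receives
    weight proportional to exp(-beta * R_j), where R_j is its empirical
    squared-error risk on y; the aggregate is the weighted average of
    the predictions. Risks are shifted by their minimum for numerical
    stability (the weights are unchanged by this shift)."""
    preds = np.asarray(preds, float)
    y = np.asarray(y, float)
    risks = np.mean((preds - y[None, :]) ** 2, axis=1)
    w = np.exp(-beta * (risks - risks.min()))
    w /= w.sum()
    return w @ preds, w
```

With a large beta, nearly all the weight concentrates on the dictionary element with the smallest empirical risk, which is the regime in which the aggregate mimics the best element, as oracle inequalities of this type quantify.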
What determines hatchling weight: breeder age or incubated egg weight?
Directory of Open Access Journals (Sweden)
AB Traldi
2011-12-01
Full Text Available Two experiments were carried out to determine which factor influences weight at hatch of broiler chicks: breeder age or incubated egg weight. In Experiment 1, 2340 eggs produced by 29- and 55-week-old Ross® broiler breeders were incubated. The eggs selected for incubation weighed one standard deviation below and above average egg weight. In Experiment 2, 2160 eggs weighing 62 g produced by breeders of both ages were incubated. In both experiments, 50 additional eggs within the weight interval determined for each breeder age were weighed, broken, and their components were separated and weighed. At hatch, hatchlings were sexed and weighed, determining the average initial weight of the progeny of each breeder age. Data were analyzed using the Analyst program of SAS® software package. In Experiment 1, the weight difference between eggs produced by young and mature breeders was 10.92 g, and the component that mostly influenced this difference was the yolk (7.51 g heavier in mature breeders, compared with 4.23 g difference in albumen and 0.8 g in eggshell weights. Hatchling weight difference was 9.4 g higher in eggs from mature breeders. In Experiment 2, egg weight difference was only 0.74 g, but yolk weight was 4.59 g higher in the eggs of mature breeders. The results obtained in the present study indicate that hatchling weight is influenced by egg weight, and not by breeder age.
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
the proposition of a weight for averaging CDMA codes. This weighting function is referred to in this discussion as the probability of the code matrix...Given a likelihood function of a multivariate Gaussian stochastic process (12), one can assume the values L and U and try to estimate the parameters...such as the average of the exponential functions were formulated. Averaging over a weight that depends on the TSC behaves as a filtering process where
A new approach for Bayesian model averaging
Institute of Scientific and Technical Information of China (English)
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that requires the BMA weights to add to one, and then use a limited-memory quasi-Newton algorithm for solving the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
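The key move of removing the sum-to-one constraint so that an unconstrained quasi-Newton method can be applied can be sketched as follows. This is not the paper's exact reformulation: it assumes a Gaussian mixture centred on each model's forecast with a common spread, and uses a softmax reparameterisation of the weights with SciPy's L-BFGS-B as the quasi-Newton solver; all names and parameter choices are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def bma_bfgs(forecasts, obs):
    """Sketch of BMA training in the spirit of BMA-BFGS.
    `forecasts` is (n_models, n_times); `obs` is (n_times,).
    The mixture weights are parameterised through a softmax, so the
    optimisation over theta is unconstrained and L-BFGS-B applies directly.
    Returns the fitted weights and the common mixture spread sigma."""
    K, T = forecasts.shape

    def neg_loglik(theta):
        w = np.exp(theta[:K] - theta[:K].max())
        w /= w.sum()                      # softmax -> weights sum to one by construction
        sigma = np.exp(theta[K])          # log-parameterised spread stays positive
        z = (obs[None, :] - forecasts) / sigma
        dens = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))
        mix = w @ dens                    # mixture density at each time step
        return -np.sum(np.log(mix + 1e-300))

    res = minimize(neg_loglik, np.zeros(K + 1), method="L-BFGS-B")
    w = np.exp(res.x[:K] - res.x[:K].max())
    w /= w.sum()
    return w, float(np.exp(res.x[K]))
```

On synthetic data where one model tracks the observations and another carries a large bias, the fitted weight concentrates on the accurate model, as expected of any sensible BMA training scheme.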
40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?
2010-07-01
... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported...: ER26FE07.012 Where: Bavg = Average benzene concentration for the applicable averaging period (volume...
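The regulation's formula image (ER26FE07.012) is elided in the fragment above, but the surviving variable definitions ("Bavg = Average benzene concentration for the applicable averaging period (volume...") indicate a volume-weighted average over batches. A minimal sketch under that assumption (the function name and the batch data are hypothetical, not from the CFR text):

```python
def average_benzene(batches):
    """Volume-weighted average benzene concentration:
    Bavg = sum(V_i * B_i) / sum(V_i), where V_i is the volume of batch i
    and B_i its benzene concentration (vol%). A sketch of the weighted
    average the elided formula appears to define, not the regulation's
    literal text."""
    total_vb = sum(v * b for v, b in batches)
    total_v = sum(v for v, _ in batches)
    return total_vb / total_v

# Hypothetical batches of (volume in gallons, benzene vol%):
bavg = average_benzene([(1000, 0.5), (3000, 1.0)])  # -> 0.875
```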
Institute of Scientific and Technical Information of China (English)
李日; 王健; 周黎明; 潘红
2014-01-01
Adopting the Euler approach and the volume-averaging concept, a three-phase model is developed with the liquid as the primary phase and equiaxed grains and columnar dendrites as two distinct secondary phases, coupling the mass, momentum, energy, and species conservation equations of the solidification process with the grain transport equations. Taking an Al-4.7 wt.% Cu binary alloy ingot as an example, the two-dimensional flow field, temperature field, solute field, columnar-to-equiaxed transition, and equiaxed grain sedimentation are simulated, and the simulated structure and macrosegregation are compared with experimental results. The simulated temperature field, flow field, and structure are basically consistent with theory; however, because the model does not account for shrinkage or the forced convection during pouring, the simulated segregation is lower than measured at the outer layer of the ingot and higher than measured in the interior. Shrinkage and inverse segregation therefore cannot be neglected in the simulation, and incorporating them is the direction for improving the present model. Finally, on the basis of the simulation results, the advantages and shortcomings of the volume-averaging method for computing ingot solidification are analyzed.
Weight and weddings. Engaged men's body weight ideals and wedding weight management behaviors.
Klos, Lori A; Sobal, Jeffery
2013-01-01
Most adults marry at some point in life, and many invest substantial resources in a wedding ceremony. Previous research reports that brides often strive towards culturally-bound appearance norms and engage in weight management behaviors in preparation for their wedding. However, little is known about wedding weight ideals and behaviors among engaged men. A cross-sectional survey of 163 engaged men asked them to complete a questionnaire about their current height and weight, ideal wedding body weight, wedding weight importance, weight management behaviors, formality of their upcoming wedding ceremony, and demographics. Results indicated that the discrepancy between men's current weight and reported ideal wedding weight averaged 9.61 lb. Most men considered being at a certain weight at their wedding to be somewhat important. About 39% were attempting to lose weight for their wedding, and 37% were not trying to change their weight. Attempting weight loss was more frequent among men with higher BMIs, those planning more formal weddings, and those who considered being the right weight at their wedding as important. Overall, these findings suggest that weight-related appearance norms and weight loss behaviors are evident among engaged men. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Molecular Weight Distribution of Polymer Samples
Horta, Arturo; Pastoriza, M. Alejandra
2007-01-01
Various methods for the determination of the molecular weight distribution (MWD) of different polymer samples are presented. The study shows that the molecular weight averages and distribution of a polymerization completely depend on the characteristics of the reaction itself.
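The molecular weight averages the abstract mentions have standard definitions that are easy to state as code: for a discrete distribution with N_i chains of molar mass M_i, the number average is Mn = ΣN_iM_i / ΣN_i and the weight average is Mw = ΣN_iM_i² / ΣN_iM_i, with the dispersity (PDI) given by Mw/Mn. A minimal sketch (the function name is illustrative):

```python
def molecular_weight_averages(counts, masses):
    """Number-average (Mn) and weight-average (Mw) molecular weights of a
    discrete MWD: counts[i] chains of molar mass masses[i] (g/mol).
    Mn = sum(N*M)/sum(N); Mw = sum(N*M^2)/sum(N*M); PDI = Mw/Mn >= 1,
    with equality only for a monodisperse sample."""
    nm = sum(n * m for n, m in zip(counts, masses))
    mn = nm / sum(counts)
    mw = sum(n * m * m for n, m in zip(counts, masses)) / nm
    return mn, mw, mw / mn

# Equal numbers of 100 and 300 g/mol chains: Mn = 200, Mw = 250, PDI = 1.25
mn, mw, pdi = molecular_weight_averages([1, 1], [100.0, 300.0])
```

The example makes the difference between the two averages concrete: Mw weights each chain by its mass, so the heavier species pulls Mw above Mn whenever the distribution has any breadth.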
A database of age-appropriate average MRI templates.
Richards, John E; Sanchez, Carmen; Phillips-Meek, Michelle; Xie, Wanze
2016-01-01
This article summarizes a life-span neurodevelopmental MRI database. The study of neurostructural development or neurofunctional development has been hampered by the lack of age-appropriate MRI reference volumes. This causes misspecification of segmented data, irregular registrations, and the absence of appropriate stereotaxic volumes. We have created the "Neurodevelopmental MRI Database" that provides age-specific reference data from 2 weeks through 89 years of age. The data are presented in fine-grained ages (e.g., 3 months intervals through 1 year; 6 months intervals through 19.5 years; 5 year intervals from 20 through 89 years). The base component of the database at each age is an age-specific average MRI template. The average MRI templates are accompanied by segmented partial volume estimates for segmenting priors, and a common stereotaxic atlas for infant, pediatric, and adult participants. The database is available online (http://jerlab.psych.sc.edu/NeurodevelopmentalMRIDatabase/).
Energy Technology Data Exchange (ETDEWEB)
Wong, O; Lo, G; Yuan, J; Law, M; Ding, A; Cheng, K; Chan, K; Cheung, K; Yu, S [Hong Kong Sanatorium & Hospital, Hong Kong (Hong Kong)
2015-06-15
Purpose: There is growing interest in applying MR-simulators (MR-sim) in radiotherapy, but MR images are subject to hardware-, patient-, and pulse-sequence-dependent geometric distortion that may potentially influence target definition. This study aimed to evaluate the influence on head-and-neck tissue delineation, in terms of positional and volumetric variability, of two T1-weighted (T1w) MR sequences on a 1.5T MR-sim. Methods: Four healthy volunteers were scanned (4 scans each on different days) using both spin-echo (3DCUBE, TR/TE=500/14ms, TA=183s) and gradient-echo sequences (3DFSPGR, TE/TR=7/4ms, TA=173s) with identical coverage, voxel size (0.8×0.8×1.0mm3), receiver bandwidth (62.5kHz/pix), and geometric correction on a 1.5T MR-sim, with immobilization by a personalized thermoplastic cast and head-rest. Under this setting, similar T1w contrast and signal-to-noise ratio were obtained, and factors other than the sequence that might bias image distortion and tissue delineation were minimized. VOIs of the parotid glands (PGR, PGL), pituitary gland (PIT), and eyeballs (EyeL, EyeR) were carefully drawn, and the inter-scan coefficient of variation (CV) of VOI centroid position and volume was calculated for each subject. Mean and standard deviation (SD) of the CVs for the four subjects were compared between sequences using the Wilcoxon rank-sum test. Results: The mean positional (<4%) and volumetric (<7%) CVs varied between tissues, depending mainly on inherent tissue properties such as volume, location, mobility, and deformability. A smaller mean volumetric CV was found for 3DCUBE, probably due to its lower proneness to tissue susceptibility, but only PGL showed a significant difference (P<0.05). Positional CVs showed no significant differences for any VOI (P>0.05) between sequences, suggesting that volumetric variation might be more sensitive to sequence-dependent delineation differences. Conclusion: Although 3DCUBE is considered less prone to tissue-susceptibility-induced artifact and distortion, our preliminary data showed
Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling
Vrugt, J.A.; Diks, C.G.H.; Clark, M.
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In t
Energy Technology Data Exchange (ETDEWEB)
Server, Andres; Nakstad, Per H. [Oslo University Hospital-Ullevaal, Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Oslo (Norway); University of Oslo, Oslo (Norway); Orheim, Tone E.D. [Oslo University Hospital, Interventional Centre, Oslo (Norway); Graff, Bjoern A. [Oslo University Hospital-Ullevaal, Department of Radiology and Nuclear Medicine, Oslo (Norway); Josefsen, Roger [Oslo University Hospital-Ullevaal, Department of Neurosurgery, Oslo (Norway); Kumar, Theresa [Oslo University Hospital-Ullevaal, Department of Pathology, Oslo (Norway)
2011-05-15
Conventional magnetic resonance (MR) imaging has limited capacity to differentiate between glioblastoma multiforme (GBM) and metastasis. The purposes of this study were: (1) to compare microvascular leakage (MVL), cerebral blood volume (CBV), and blood flow (CBF) in the distinction of metastasis from GBM using dynamic susceptibility-weighted contrast-enhanced perfusion MR imaging (DSC-MRI), and (2) to estimate the diagnostic accuracy of perfusion and permeability MR imaging. A prospective study of 61 patients (40 GBMs and 21 metastases) was performed at 3 T using DSC-MRI. Normalized rCBV and rCBF from tumoral (rCBVt, rCBFt), peri-enhancing region (rCBVe, rCBFe), and by dividing the value in the tumor by the value in the peri-enhancing region (rCBVt/e, rCBFt/e), as well as MVL were calculated. Hemodynamic and histopathologic variables were analyzed statistically and Spearman/Pearson correlations. Receiver operating characteristic curve analysis was performed for each of the variables. The rCBVe, rCBFe, and MVL were significantly greater in GBMs compared with those of metastases. The optimal cutoff value for differentiating GBM from metastasis was 0.80 which implies a sensitivity of 95%, a specificity of 92%, a positive predictive value of 86%, and a negative predictive value of 97% for rCBVe ratio. We found a modest correlation between rCBVt and rCBFt ratios. MVL measurements in GBMs are significantly higher than those in metastases. Statistically, both rCBVe, rCBVt/e and rCBFe, rCBFt/e were useful in differentiating between GBMs and metastases, supporting the hypothesis that perfusion MR imaging can detect infiltration of tumor cells in the peri-enhancing region. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Server, Andres; Nakstad, Per H. [Oslo University Hospital-Ullevaal, Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Oslo (Norway); University of Oslo, Oslo (Norway); Graff, Bjoern A. [Oslo University Hospital-Ullevaal, Department of Radiology and Nuclear Medicine, Oslo (Norway); Orheim, Tone E.D.; Gadmar, Oeystein B. [Oslo University Hospital, Interventional Centre, Oslo (Norway); Schellhorn, Till [Oslo University Hospital-Ullevaal, Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Oslo (Norway); Josefsen, Roger [Oslo University Hospital-Ullevaal, Department of Neurosurgery, Oslo (Norway)
2011-06-15
To assess the diagnostic accuracy of microvascular leakage (MVL), cerebral blood volume (CBV) and blood flow (CBF) values derived from dynamic susceptibility-weighted contrast-enhanced perfusion MR imaging (DSC-MR imaging) for grading of cerebral glial tumors, and to estimate the correlation between vascular permeability/perfusion parameters and tumor grades. A prospective study of 79 patients with cerebral glial tumors underwent DSC-MR imaging. Normalized relative CBV (rCBV) and relative CBF (rCBF) from the tumor (rCBVt and rCBFt), the peri-enhancing region (rCBVe and rCBFe), and the value in the tumor divided by the value in the peri-enhancing region (rCBVt/e and rCBFt/e), as well as MVL, expressed as the leakage coefficient K2, were calculated. Hemodynamic variables and tumor grades were analyzed statistically and with Pearson correlations. Receiver operating characteristic (ROC) curve analyses were also performed for each of the variables. The differences in rCBVt and the maximum MVL (MVLmax) values were statistically significant among all tumor grades. Correlation analysis using Pearson was as follows: rCBVt and tumor grade, r = 0.774; rCBFt and tumor grade, r = 0.417; MVLmax and tumor grade, r = 0.559; MVLmax and rCBVt, r = 0.440; MVLmax and rCBFt, r = 0.192; and rCBVt and rCBFt, r = 0.605. According to ROC analyses for distinguishing tumor grade, rCBVt showed the largest area under the ROC curve (AUC), except for grade III from IV. Both rCBVt and MVLmax showed good discriminative power in distinguishing all tumor grades. rCBVt correlated strongly with tumor grade; the correlation between MVLmax and tumor grade was moderate. (orig.)
Diffusion-weighted MR imaging of the normal fetal lung
Energy Technology Data Exchange (ETDEWEB)
Balassy, Csilla; Kasprian, Gregor; Weber, Michael; Hoermann, Marcus; Bankier, Alexander; Herold, Christian J.; Prayer, Daniela [Medical University of Vienna, Department of Radiology, Vienna (Austria); Brugger, Peter C. [Medical University of Vienna, Center of Anatomy and Cell Biology, Vienna (Austria); Csapo, Bence [Medical University of Vienna, Department of Obstetrics and Gyneocology, Vienna (Austria); Bammer, Roland [University of Stanford, Department of Radiology, Stanford, CA (United States)
2008-04-15
To quantify apparent diffusion coefficient (ADC) changes in fetuses with normal lungs and to determine whether ADC can be used in the assessment of fetal lung development. In 53 pregnancies (20th-37th weeks of gestation), we measured ADC on diffusion-weighted imaging (DWI) in the apical, middle, and basal thirds of the right lung. ADCs were correlated with gestational age. Differences between the ADCs were assessed. Fetal lung volumes were measured on T2-weighted sequences and correlated with ADCs and with age. ADCs were 2.13 ± 0.44 μm²/ms (mean ± SD) in the apex, 1.99 ± 0.42 μm²/ms (mean ± SD) in the middle third, and 1.91 ± 0.41 μm²/ms (mean ± SD) in the lung base. Neither the individual ADC values nor average ADC values showed a significant correlation with gestational age or with lung volumes. Average ADCs decreased significantly from the lung apex toward the base. Individual ADCs showed little absolute change and heterogeneity. Lung volumes increased significantly during gestation. We have not been able to identify a pattern of changes in the ADC values that correlates with lung maturation. Furthermore, the individual, gravity-related ADC changes are subject to substantial variability and show nonuniform behavior. ADC can therefore not be used as an indicator of lung maturity. (orig.)
Diffusion-weighted MR imaging of the normal fetal lung.
Balassy, Csilla; Kasprian, Gregor; Brugger, Peter C; Csapo, Bence; Weber, Michael; Hörmann, Marcus; Bankier, Alexander; Bammer, Roland; Herold, Christian J; Prayer, Daniela
2008-04-01
To quantify apparent diffusion coefficient (ADC) changes in fetuses with normal lungs and to determine whether ADC can be used in the assessment of fetal lung development. In 53 pregnancies (20th-37th weeks of gestation), we measured ADC on diffusion-weighted imaging (DWI) in the apical, middle, and basal thirds of the right lung. ADCs were correlated with gestational age. Differences between the ADCs were assessed. Fetal lung volumes were measured on T2-weighted sequences and correlated with ADCs and with age. ADCs were 2.13 ± 0.44 μm²/ms (mean ± SD) in the apex, 1.99 ± 0.42 μm²/ms (mean ± SD) in the middle third, and 1.91 ± 0.41 μm²/ms (mean ± SD) in the lung base. Neither the individual ADC values nor average ADC values showed a significant correlation with gestational age or with lung volumes. Average ADCs decreased significantly from the lung apex toward the base. Individual ADCs showed little absolute change and heterogeneity. Lung volumes increased significantly during gestation. We have not been able to identify a pattern of changes in the ADC values that correlates with lung maturation. Furthermore, the individual, gravity-related ADC changes are subject to substantial variability and show nonuniform behavior. ADC can therefore not be used as an indicator of lung maturity.
Cellular Automaton Simulation For Volume Changes Of Solidifying Nodular Cast Iron
Directory of Open Access Journals (Sweden)
Burbelko A.
2015-09-01
Full Text Available Volume changes of a binary Fe-C alloy with nodular graphite were forecast by means of the Cellular Automaton Finite Differences (CA-FD) model of solidification. Simulations were performed in 2D space for differing carbon content. Dependences of phase density on temperature were considered in the computations; additionally, the densities of the liquid phase and austenite were treated as functions of carbon concentration. Changes of the specific volume were forecast on the basis of the phase volume fractions and changes of phase density. The density of the modeled material was calculated as a weighted average of the densities of each phase.
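The final step described above, density of the modeled material as a weighted average of the phase densities, can be sketched as follows (the phase names, fractions, and density values are illustrative assumptions, not values from the paper):

```python
def mixture_density(fractions, densities):
    """Material density as the volume-fraction-weighted average of the
    phase densities (e.g. liquid, austenite, graphite)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "phase fractions must sum to 1"
    return sum(f * d for f, d in zip(fractions, densities))

# Hypothetical phase fractions and densities (kg/m^3), for illustration only:
rho = mixture_density([0.2, 0.7, 0.1], [6900.0, 7300.0, 2200.0])
```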
Institute of Scientific and Technical Information of China (English)
Chang-chi Chu; James S. Buckner; Kamil Karut; Thomas P. Freeman; Dennis R. Nelson; Thomas J. Henneberry
2003-01-01
Size and weight measurements were made for all the life stages of Bemisia tabaci (Gennadius) B biotype from field-grown cotton (Gossypium hirsutum L.) and cantaloupe (Cucumis melo L., var. cantalupensis) in Phoenix, AZ and Fargo, ND, USA in 2000 and 2001. Nymphal volumes were derived from the measurements. The average nymphal volume increase for settled 1st to the late 4th instar was exponential. The greatest increase in body volume occurred during development from the 3rd to early 4th instar. Nymphs on cotton leaves were wider, but not longer, compared with those on cantaloupe. Ventral and dorsal depth ratios of nymphal bodies from 1st to late 4th instars from cantaloupe leaves were significantly greater compared with those from cotton leaves. During nymphal development from 1st to 4th instar, the average (from the two host species) ventral body half volume increased by nearly 51 times compared with an increase of 28 times for the dorsal body half volume. Adult female and male average lengths, from heads to wing tips, were 1126 μm and 953 μm, respectively. Average adult female and male weights were 39 and 17 μg, respectively. Average widths, lengths, and weights of eggs from cotton and cantaloupe were 99 μm, 197 μm, and 0.8 μg, respectively. Average widths, lengths, and weights for exuviae of non-parasitized nymphs from both cotton and cantaloupe were 492 μm, 673 μm, and 1.20 μg, respectively; and widths, lengths, and weights of parasitized nymph exuviae were 452 μm, 665 μm, and 3.62 μg, respectively. Both exuviae from non-parasitized and parasitized nymphs from cotton leaves were wider, longer, and heavier than those from cantaloupe leaves.
A procedure to average 3D anatomical structures.
Subramanya, K; Dean, D
2000-12-01
Creating a feature-preserving average of three dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface delineating wires represent high curvature crestlines. By adding tile boundaries in flatter areas the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that well represents the source images and may be useful clinically as a deformable model or for animation.
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof of the strong asymptotics for some L^p extremal problems on the real line with exponential weights, which, for the case p = 2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials, and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
Karasawa, Kenichi; Oda, Masahiro; Hayashi, Yuichiro; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Rueckert, Daniel; Mori, Kensaku
2015-03-01
Abdominal organ segmentations from CT volumes are now widely used in computer-aided diagnosis and surgery assistance systems. Among abdominal organs, the pancreas is especially difficult to segment because of large individual differences in its shape and position. In this paper, we propose a new pancreas segmentation method for 3D abdominal CT volumes using patient-specific weighted-subspatial probabilistic atlases. First of all, we perform normalization of organ shapes in the training volumes and an input volume. We extract the Volume Of Interest (VOI) of the pancreas from the training volumes and the input volume. We divide each training VOI and the input VOI into cubic regions. We use a nonrigid registration method to register these cubic regions of each training VOI to the corresponding regions of the input VOI. Based on the registration results, we calculate similarities between each cubic region of the training VOI and the corresponding region of the input VOI. We select the cubic regions of training volumes having the top N similarities in each cubic region. We subspatially construct probabilistic atlases weighted by the similarities in each cubic region. After integrating these probabilistic atlases in cubic regions into one, we perform a rough-to-precise segmentation of the pancreas using the atlas. The results of the experiments showed that utilization of the training volumes having the top N similarities in each cubic region led to good pancreas segmentation results. The Jaccard Index and the average surface distance of the result were 58.9% and 2.04 mm on average, respectively.
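The Jaccard Index reported above is a simple overlap measure between the segmented and reference voxel sets; a minimal sketch (the toy voxel-index sets are stand-ins for real label volumes):

```python
def jaccard_index(segmented, reference):
    """Jaccard index |A ∩ B| / |A ∪ B| between two voxel label sets."""
    a, b = set(segmented), set(reference)
    return len(a & b) / len(a | b)

# Two toy voxel-index sets sharing half of their union:
score = jaccard_index({1, 2, 3}, {2, 3, 4})  # 2 shared voxels / 4 in the union
```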
On the average uncertainty for systems with nonlinear coupling
Nelson, Kenric P.; Umarov, Sabir R.; Kon, Mark A.
2017-02-01
The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale (i.e. at the width of the distribution). The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
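The transformation stated above, that the Shannon entropy in the probability domain equals the weighted geometric mean of the probabilities, can be checked numerically; this sketch uses entropy in nats:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def weighted_geometric_mean(p):
    """Geometric mean of the probabilities, each weighted by itself:
    prod(p_i ** p_i), which equals exp(-H(p))."""
    return math.prod(pi ** pi for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
avg_uncertainty = weighted_geometric_mean(p)  # same as math.exp(-shannon_entropy(p))
```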
Institute of Scientific and Technical Information of China (English)
韩若冰; 任刚; 王轩; 刘晨; 夏廷毅; 于会明
2016-01-01
Objective: To compare contrast-enhanced computed tomography (CT) and diffusion-weighted magnetic resonance imaging (DWMRI) with respect to pancreatic tumor volume and metastatic tumors of the liver and regional lymph nodes, in order to provide useful information for target volume delineation and to guide radiotherapy in clinical practice. Methods: A total of 40 patients with pancreatic cancer were enrolled and underwent contrast-enhanced CT and DWMRI localization scans in the same position. Based on each image set, the target volume was delineated, the major axis of the maximum tumor section was measured, and the numbers of liver metastatic tumors and metastatic lymph nodes with a diameter of 5-8 mm or >8 mm were counted. The analysis was performed using the paired t-test or the paired Wilcoxon rank sum test. Results: The mean gross tumor volume (GTV) delineated on contrast-enhanced CT and DWMRI was 54.95 and 41.67 cm3, respectively (P=0.000), and the mean major axis of the maximum tumor section was 4.18 and 3.94 cm, respectively (P=0.000), with dCT smaller than dDWMRI in 2 cases. Contrast-enhanced CT and DWMRI detected 83 and 112 liver metastatic tumors, respectively (the CT count was 74% of the DWMRI count); 46 and 56 metastatic lymph nodes >8 mm; and 103 and 200 metastatic lymph nodes of 5-8 mm (the CT counts were 82% and 52% of the DWMRI counts, respectively). Conclusion: The GTV and the major axis of the maximum tumor section measured on DWMRI are smaller than those on contrast-enhanced CT, and DWMRI is more sensitive than contrast-enhanced CT in detecting liver and regional lymph node metastases. Confirmation by further pathology-controlled studies is still needed.
2010-01-01
7 CFR 1209.12 (2010), Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements): On average. On average means a rolling average of production or imports during the last two...
Light shift averaging in paraffin-coated alkali vapor cells
Zhivun, Elena; Sudyka, Julia; Pustelny, Szymon; Patton, Brian; Budker, Dmitry
2015-01-01
Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.
Modification of averaging process in GR: Case study flat LTB
Khosravi, Shahram; Mansouri, Reza
2007-01-01
We study the volume averaging of inhomogeneous metrics within GR and discuss its shortcomings, such as gauge dependence, singular behavior as a result of caustics, and causality violations. To remedy these shortcomings, we suggest some modifications to this method. As a case study we focus on the inhomogeneous model of structured FRW based on a flat LTB metric. The effect of averaging is then studied in terms of an effective backreaction fluid. This backreaction fluid turns out to behave like a dark matter component, instead of dark energy as claimed in the literature.
Estimating liver weight of adults by body weight and gender
Institute of Scientific and Technical Information of China (English)
See Ching Chan; Chi Leung Liu; Chung Mau Lo; Banny K Lam; Evelyn W Lee; Yik Wong; Sheung Tat Fan
2006-01-01
AIM: To estimate the standard liver weight for assessing adequacy of graft size in live donor liver transplantation and of the remnant liver in major hepatectomy for cancer. METHODS: In this study, anthropometric data of body weight and body height were tested for a correlation with liver weight in 159 live liver donors who underwent donor right hepatectomy including the middle hepatic vein. Liver weights were calculated from the right lobe graft weight obtained at the back table, divided by the proportion of the right lobe on the computed tomography. RESULTS: The subjects, all Chinese, had a mean age of 35.8 ± 10.5 years, and a female to male ratio of 118:41. The mean volume of the right lobe was 710.14 ± 131.46 mL and occupied 64.55% ± 4.47% of the whole liver on computed tomography. The right lobe weighed 598.90 ± 117.39 g and the estimated liver weight was 927.54 ± 168.78 g. When body weight and body height were subjected to multiple stepwise linear regression analysis, body height was found to be insignificant. Females of the same body weight had a slightly lower liver weight. A formula based on body weight and gender was derived: Estimated standard liver weight (g) = 218 + BW (kg) × 12.3 + gender × 51 (R² = 0.48) (female = 0, male = 1). Based on the anthropometric data of these 159 subjects, liver weights were calculated using previously published formulae derived from studies on Caucasian, Japanese, Korean, and Chinese subjects. All formulae overestimated liver weights compared to this formula. The Japanese formula overestimated the estimated standard liver weight (ESLW) for adults less than 60 kg. CONCLUSION: A formula applicable to Chinese males and females is available. A formula for individual races appears necessary.
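The derived formula can be applied directly; a minimal sketch (the function and variable names are mine, the coefficients are those reported above):

```python
def estimated_standard_liver_weight(body_weight_kg, male):
    """ESLW (g) = 218 + BW (kg) * 12.3 + gender * 51, with gender coded
    female = 0, male = 1 (R^2 = 0.48 in the study)."""
    return 218 + body_weight_kg * 12.3 + (51 if male else 0)

# A 60 kg male donor: 218 + 738 + 51 = 1007 g
eslw = estimated_standard_liver_weight(60, male=True)
```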
How do walking, standing, and resting influence transtibial amputee residual limb fluid volume?
Directory of Open Access Journals (Sweden)
Joan E. Sanders, PhD
2014-08-01
Full Text Available The purpose of this research was to determine how fluid volume changes in the residual limbs of people with transtibial amputation were affected by activity during test sessions with equal durations of resting, standing, and walking. Residual limb extracellular fluid volume was measured using bioimpedance analysis in 24 participants. Results showed that all subjects lost fluid volume during standing with equal weight-bearing, averaging a loss rate of −0.4%/min and a mean loss over the 25 min test session of 2.6% (standard deviation [SD] 1.1). Sixteen subjects gained limb fluid volume during walking (mean gain of 1.0% [SD 2.5]), and fifteen gained fluid volume during rest (mean gain of 1.0% [SD 2.2]). Walking explained only 39.3% of the total session fluid volume change. There was a strong correlation between walk and rest fluid volume changes (−0.81). Subjects with peripheral arterial disease experienced relatively high fluid volume gains during sitting but minimal changes or losses during sit-to-stand and stand-to-sit transitioning. Healthy female subjects experienced high fluid volume changes during transitioning from sit-to-stand and stand-to-sit. The differences in fluid volume response among subjects suggest that volume accommodation technologies should be matched to the activity-dependent fluid transport characteristics of the individual prosthesis user.
Wever, R.; Boks, C.; Stevels, A.
2007-01-01
Traditionally packaging design-for-sustainability (DfS) strongly focuses on resource conservation and material recycling. The type and amount of materials used has been the driver in design. For consumer electronics (CE) products this weight-based approach is too limited; a volume-based approach is
Astuti, Valerio; Rovelli, Carlo
2016-01-01
Building on a technical result by Brunnemann and Rideout on the spectrum of the Volume operator in Loop Quantum Gravity, we show that the space of quadrivalent states (with finite-volume individual nodes) describing a region with total volume smaller than $V$ has finite dimension, bounded by $V \log V$. This allows us to introduce the notion of "volume entropy": the von Neumann entropy associated to the measurement of volume.
Mass and volume of a body of young footballers
Directory of Open Access Journals (Sweden)
Smajić Miroslav
2012-01-01
Full Text Available Knowledge of the structure of some anthropological abilities and characteristics of sportsmen, as well as their development, represents a basic condition for successful management of the process of sports training. The aim of this research is to determine the mass and volume of the body of young footballers. The sample of examinees consists of 120 footballers of different age categories from the 'Vojvodina' football club, namely: junior pioneers (aged 11-12, 30 examinees), senior pioneers (aged 13-14, 30 examinees), cadets (aged 15-16, 30 examinees) and youth (aged 17-18, 30 examinees). For transversal skeleton dimension, young footballers were measured for shoulder width and pelvic width. For the assessment of mass and volume of the body, the following were measured: body mass, the volume of the upper leg, the volume of the lower leg, the volume of the chest, the volume of the stomach, the skin fold of the stomach and the skin fold of the upper arm. The testing of significant differences between footballers of different age categories, as well as deviation from expected values, was calculated by t-test and univariate analysis of variance (ANOVA). On the basis of the results obtained, it can be concluded that the average results show a general tendency for body mass and volume to increase from younger to older age categories. Variability measures show that the youth examinees are the most homogeneous while the senior pioneers are the most heterogeneous.
Combining forecast weights: Why and how?
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, which is a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we have shown that model averaging methods, such as variance model averaging, simple model averaging and standard error model averaging, each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true marginally when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
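As a rough sketch of the idea (the paper's specific combination rule may differ; here the weights from the different schemes are simply averaged with equal importance before combining the model forecasts):

```python
def forecast_weight_averaging(weight_sets, forecasts):
    """Average the forecast weights produced by several weighting schemes,
    then combine the individual model forecasts with the averaged weights."""
    m = len(forecasts)
    avg_w = [sum(ws[i] for ws in weight_sets) / len(weight_sets)
             for i in range(m)]
    return sum(w * f for w, f in zip(avg_w, forecasts))

# Two schemes weighting two models differently; the forecasts are combined
# with the averaged weights (0.4 and 0.6):
combined = forecast_weight_averaging([[0.6, 0.4], [0.2, 0.8]], [1.0, 2.0])
```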
Weighted OFDM for wireless multipath channels
DEFF Research Database (Denmark)
Prasad, Ramjee; Nikookar, H.
2000-01-01
In this paper the novel method of "weighted OFDM" is addressed. Different types of weighting factors (including Rectangular, Bartlett, Gaussian, Raised cosine, Half-sin and Shannon) are considered. The impact of weighting of OFDM on the peak-to-average power ratio (PAPR) is investigated by means of simulation and is compared for the above-mentioned weighting factors. Results show that weighting the OFDM signal reduces the PAPR. Bit error performance of weighted multicarrier transmission over a multipath channel is also investigated. Results indicate that there is a trade-off between PAPR reduction and bit error performance degradation by weighting.
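The PAPR figure of merit studied above can be computed for a toy OFDM symbol; this is an illustrative sketch (a plain inverse DFT with a rectangular weight vector), not the paper's simulation setup:

```python
import cmath
import math

def papr_db(signal):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    powers = [abs(v) ** 2 for v in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def weighted_ofdm_symbol(symbols, weights):
    """Weighted OFDM: scale each subcarrier symbol by a window weight,
    then take the inverse DFT to obtain the time-domain symbol."""
    n = len(symbols)
    return [sum(w * s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, (s, w) in enumerate(zip(symbols, weights))) / n
            for t in range(n)]

# Identical symbols on all subcarriers with rectangular weights give an
# impulse-like worst case: PAPR = 10*log10(n) dB.
x = weighted_ofdm_symbol([1.0] * 4, [1.0] * 4)
```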
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
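The entropy lower bound mentioned above can be evaluated directly; a small sketch, with the probabilities and k chosen for illustration:

```python
import math

def entropy_lower_bound(probs, k=2):
    """Lower bound on the minimum average depth of a decision tree for a
    diagnostic problem over a k-valued information system: H(p) / log2(k)."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)
```

For the dyadic distribution (1/2, 1/4, 1/4) the bound is 1.5, which an optimal binary prefix code attains exactly.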
Estimation of Otoacoustic Emission Signals by Using the Synchronous Averaging Method
Directory of Open Access Journals (Sweden)
Linas Sankauskas
2011-08-01
Full Text Available The study presents the results of an investigation of the synchronous averaging method and its application to the estimation of impulse-evoked otoacoustic emission (IEOAE) signals. The method was analyzed using synthetic and real signals. Synthetic signals were modeled as mixtures of a deterministic component with noise realizations. Two types of noise were used: normal (Gaussian) and transient-impulse-dominated (Laplacian). Signal-to-noise ratio was used as the signal quality measure after processing. In order to account for the varying amplitude of the deterministic component in the realizations, a weighted averaging method was investigated. Results show that the performance of the synchronous averaging method is very similar in the case of both types of noise, Gaussian and Laplacian. The weighted averaging method helps to cope with a varying deterministic component or noise level in the case of nonhomogeneous ensembles, as is the case in IEOAE signals. (Article in Lithuanian)
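Weighted averaging of an ensemble of realizations, as investigated above, can be sketched with inverse-variance weights, one common choice for down-weighting noisy realizations (the study's exact weighting rule may differ):

```python
import statistics

def weighted_synchronous_average(realizations):
    """Synchronous (ensemble) average with inverse-variance weights.
    Assumes every realization has nonzero sample variance."""
    weights = [1.0 / statistics.pvariance(r) for r in realizations]
    total = sum(weights)
    n = len(realizations[0])
    return [sum(w * r[i] for w, r in zip(weights, realizations)) / total
            for i in range(n)]
```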
Fastest Distributed Consensus Averaging Problem on Chain of Rhombus Networks
Jafarizadeh, Saber
2010-01-01
Distributed consensus has appeared as one of the most important and primary problems in the context of distributed computation, and it has received renewed interest in the field of sensor networks (due to recent advances in wireless communications), where solving the fastest distributed consensus averaging problem over networks with different topologies is a primary problem. In this work an analytical solution for the problem of the fastest distributed consensus averaging algorithm over Chain of Rhombus networks is provided. The solution procedure consists of stratification of the associated connectivity graph of the network and semidefinite programming, in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. Also the characteristic polynomial, together with its roots corresponding to eigenvalues of the weight matrix, including the SLEM of the network, is determined inductively. Moreover t...
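The basic iteration whose convergence rate the optimal weights control can be sketched as follows; the weight matrix here is a simple symmetric doubly stochastic choice for a 3-node path graph, not the SDP-optimal weights derived in the paper:

```python
def consensus_step(x, W):
    """One step of linear distributed consensus averaging: x <- W x,
    with W a doubly stochastic weight matrix respecting the topology."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

# Illustrative symmetric doubly stochastic weights on a 3-node path graph;
# repeated application drives every node to the average of the initial values.
W = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
x = [0.0, 1.0, 2.0]
for _ in range(200):
    x = consensus_step(x, W)
```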
African Journals Online (AJOL)
PROF. EZECHUKWU
2013-11-24
Nov 24, 2013 ... sleep. There was also an increase in both frequency of urination and volume of urine voided; from 1-2 times to ... phagia, fever, head trauma, chronic cough, weight loss, ... water deprivation at 15 kg. Total volume of urine voided.
Level sets of multiple ergodic averages
Ai-Hua, Fan; Ma, Ji-Hua
2011-01-01
We propose to study multiple ergodic averages from the multifractal analysis point of view. In some special cases in symbolic dynamics, the Hausdorff dimensions of the level sets of the limit of multiple ergodic averages are determined by using Riesz products.
Xue, Ya-juan; Cao, Jun-xing; Du, Hao-kun; Zhang, Gu-lan; Yao, Yao
2016-09-01
Empirical mode decomposition (EMD)-based spectral decomposition methods have been successfully used for hydrocarbon detection. However, mode mixing that occurs during the sifting process of EMD causes the 'true' intrinsic mode function (IMF) to be extracted incorrectly and blurs the physical meaning of the IMF. We address the issue of how the mode mixing influences the EMD-based methods for hydrocarbon detection by introducing mode-mixing elimination methods, specifically ensemble EMD (EEMD) and complete ensemble EMD (CEEMD)-based highlight volumes, as feasible tools that can identify the peak amplitude above average volume and the peak frequency volume. Three schemes, that is, using all IMFs, selected IMFs or weighted IMFs, are employed in the EMD-, EEMD- and CEEMD-based highlight volume methods. When these methods were applied to seismic data from a tight sandstone gas field in Central Sichuan, China, the results demonstrated that the amplitude anomaly in the peak amplitude above average volume captured by EMD, EEMD and CEEMD combined with Hilbert transforms, whether using all IMFs, selected IMFs or weighted IMFs, are almost identical to each other. However, clear distinctions can be found in the peak frequency volume when comparing results generated using all IMFs, selected IMFs, or weighted IMFs. If all IMFs are used, the influence of mode mixing on the peak frequency volume is not readily discernable. However, using selected IMFs or a weighted IMFs' scheme affects the peak frequency in relation to the reservoir thickness in the EMD-based method. Significant improvement in the peak frequency volume can be achieved in EEMD-based highlight volumes using selected IMFs. However, if the weighted IMFs' scheme is adopted (i.e., if the undesired IMFs are included with reduced weights rather than excluded from the analysis entirely), the CEEMD-based peak frequency volume provides a more accurate reservoir thickness estimate compared with the other two methods. This
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit, presented to compensate for NMOS mismatch error in MOS differential-type voltage averaging circuits. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.
Spectral averaging techniques for Jacobi matrices
del Rio, Rafael; Schulz-Baldes, Hermann
2008-01-01
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Average-Time Games on Timed Automata
Jurdzinski, Marcin; Trivedi, Ashutosh
2009-01-01
An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to average-price games on a finite graph. A direct consequence is an elementary proof of determinacy for average-time games.
Linguistic Weighted Aggregation under Confidence Levels
Directory of Open Access Journals (Sweden)
Chonghui Zhang
2015-01-01
Full Text Available We develop some new linguistic aggregation operators based on confidence levels. Firstly, we introduce the confidence linguistic weighted averaging (CLWA) operator and the confidence linguistic ordered weighted averaging (CLOWA) operator. These two new linguistic aggregation operators are able to consider the confidence level of the aggregated arguments provided by the information providers. We also study some of their properties. Then, based on the generalized means, we introduce the confidence generalized linguistic ordered weighted averaging (CGLOWA) operator. The main advantage of the CGLOWA operator is that it includes a wide range of special cases such as the CLOWA operator, the confidence linguistic ordered weighted quadratic averaging (CLOWQA) operator, and the confidence linguistic ordered weighted geometric (CLOWG) operator. Finally, we develop an application of the new approach to multicriteria decision-making in a linguistic environment and illustrate it with a numerical example.
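As a rough numeric analogue of the confidence-based aggregation described above (the CLWA operator itself acts on linguistic labels; this sketch, with names of my own choosing, only illustrates the weighting idea):

```python
def confidence_weighted_average(values, weights, confidences):
    """Weighted average in which each argument's weight is modulated by the
    confidence level of its information provider, then renormalized."""
    cw = [w * c for w, c in zip(weights, confidences)]
    total = sum(cw)
    return sum(w * v for w, v in zip(cw, values)) / total
```

An argument supplied with zero confidence drops out of the aggregation entirely.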
McConnel, Craig S; McNeil, Ashleigh A; Hadrich, Joleen C; Lombard, Jason E; Garry, Franklyn B; Heller, Jane
2017-08-01
Over the past 175 years, data related to human disease and death have progressed to a summary measure of population health, the Disability-Adjusted Life Year (DALY). As dairies have intensified there has been no equivalent measure of the impact of disease on the productive life and well-being of animals. The development of a disease-adjusted metric requires a consistent set of disability weights that reflect the relative severity of important diseases. The objective of this study was to use an international survey of dairy authorities to derive disability weights for primary disease categories recorded on dairies. National and international dairy health and management authorities were contacted through professional organizations, dairy industry publications and conferences, and industry contacts. Estimates of minimum, most likely, and maximum disability weights were derived for 12 common dairy cow diseases. Survey participants were asked to estimate the impact of each disease on overall health and milk production. Diseases were classified from 1 (minimal adverse effects) to 10 (death). The data was modelled using BetaPERT distributions to demonstrate the variation in these dynamic disease processes, and to identify the most likely aggregated disability weights for each disease classification. A single disability weight was assigned to each disease using the average of the combined medians for the minimum, most likely, and maximum severity scores. A total of 96 respondents provided estimates of disability weights. The final disability weight values resulted in the following order from least to most severe: retained placenta, diarrhea, ketosis, metritis, mastitis, milk fever, lame (hoof only), calving trauma, left displaced abomasum, pneumonia, musculoskeletal injury (leg, hip, back), and right displaced abomasum. The peaks of the probability density functions indicated that for certain disease states such as retained placenta there was a relatively narrow range of
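The BetaPERT modelling step above collapses each expert's minimum / most-likely / maximum estimates into a single value; for the standard PERT shape (λ = 4) the mean has a closed form, sketched here:

```python
def pert_mean(minimum, most_likely, maximum):
    """Mean of a standard BetaPERT distribution (shape parameter 4):
    (min + 4 * mode + max) / 6."""
    return (minimum + 4 * most_likely + maximum) / 6

# Hypothetical severity estimates for one disease category (not survey data):
weight = pert_mean(2.0, 3.0, 10.0)
```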
Spreading of oil and the concept of average oil thickness
Energy Technology Data Exchange (ETDEWEB)
Goodman, R. [Innovative Ventures Ltd., Cochrane, AB (Canada); Quintero-Marmol, A.M. [Pemex E and P, Campeche (Mexico); Bannerman, K. [Radarsat International, Vancouver, BC (Canada); Stevenson, G. [Calgary Univ., AB (Canada)
2004-07-01
The area of an oil slick on water can be readily measured using simple techniques ranging from visual observations to satellite-based radar systems. However, it is necessary to know the volume of spilled oil in order to determine the environmental impacts and best response strategy. The volume of oil must be known to determine spill quantity, response effectiveness and weathering rates. The relationship between volume and area is the average thickness of the oil over the spill area. This paper presents the results of several experiments conducted in the Gulf of Mexico to determine whether the average thickness of the oil is a characteristic of a specific crude oil, independent of spill size. Calculating the amount of oil on water from the area of the slick requires information on the oil thickness, the inhomogeneity of the oil thickness, and the oil-to-water ratio in the slick if it is emulsified. Experimental data revealed that an oil slick stops spreading very quickly after the application of oil. After the equilibrium thickness has been established, the slick is very sensitive to disturbances on the water surface, such as wave action, which causes the oil circle to dissipate into several small irregular shapes. It was noted that the spill source and oceanographic conditions are both critical to the final shape of the spill. 31 refs., 2 tabs., 8 figs.
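The volume-area relationship described above is a one-line calculation once the average thickness is known; a minimal sketch (the water-fraction correction for emulsified slicks is a simplifying assumption):

```python
def spill_volume_m3(area_m2, avg_thickness_um, water_fraction=0.0):
    """Oil volume from slick area and average oil thickness (micrometres).
    For an emulsified slick, discount the water fraction of the emulsion."""
    return area_m2 * avg_thickness_um * 1e-6 * (1.0 - water_fraction)

# A 1 km^2 slick with a 10 um average thickness holds about 10 m^3 of oil.
volume = spill_volume_m3(1.0e6, 10.0)
```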
WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES
Institute of Scientific and Technical Information of China (English)
刘永平; 许贵桥
2003-01-01
This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of the periodic multivariate functions in the space Lp(Td) and the average Bernstein σ-width, average Kolmogorov σ-widths, the average linear σ-widths of Sobolev classes of the multivariate quantities.
Stochastic averaging of quasi-Hamiltonian systems
Institute of Scientific and Technical Information of China (English)
朱位秋
1996-01-01
A stochastic averaging method is proposed for quasi-Hamiltonian systems (lightly damped Hamiltonian systems subject to weak stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems, and that the results obtained by this method for several examples prove its effectiveness.
NOAA Average Annual Salinity (3-Zone)
California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Optimal weights for measuring redshift space distortions in multitracer galaxy catalogues
Pearson, David W.; Samushia, Lado; Gagrani, Praful
2016-12-01
Since the volume accessible to galaxy surveys is fundamentally limited, it is extremely important to analyse available data in the most optimal fashion. One way of enhancing the cosmological information extracted from the clustering of galaxies is by weighting the galaxy field. The most widely used weighting schemes assign weights to galaxies based on the average local density in the region (FKP weights) and their bias with respect to the dark matter field (PVP weights). They are designed to minimize the fractional variance of the galaxy power-spectrum. We demonstrate that the currently used bias dependent weighting scheme can be further optimized for specific cosmological parameters. We develop a procedure for computing the optimal weights and test them against mock catalogues for which the values of all fitting parameters, as well as the input power-spectrum are known. We show that by applying these weights to the joint power-spectrum of emission line galaxies and luminous red galaxies from the Dark Energy Spectroscopic Instrument survey, the variance in the measured growth rate parameter can be reduced by as much as 36 per cent.
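The FKP weights mentioned above have a standard closed form, w = 1/(1 + n̄P₀), where n̄ is the local mean galaxy density and P₀ a fiducial power-spectrum amplitude (Feldman, Kaiser & Peacock 1994). A small sketch (the P₀ value and densities are illustrative):

```python
import numpy as np

def fkp_weight(nbar, P0=10000.0):
    """FKP weight w = 1 / (1 + nbar * P0).
    nbar: local mean galaxy density [h^3 Mpc^-3] (illustrative values below);
    P0:   fiducial power-spectrum amplitude [h^-3 Mpc^3]."""
    return 1.0 / (1.0 + nbar * P0)

# Dense regions are down-weighted; sparse regions approach unit weight.
nbar = np.array([1e-3, 1e-4, 1e-5])
print(fkp_weight(nbar))
```

The PVP and parameter-optimized weights discussed in the paper generalize this by making the weight depend on galaxy bias and on the parameter being estimated.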
Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour-difference-based metrics, gamut-based metrics, memory-based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
Average Transmission Probability of a Random Stack
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
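The distinction this record draws — averaging the transmission probability itself versus averaging its logarithm — can be illustrated numerically. A sketch with random transmission values standing in for the paper's exact recurrence (the distribution is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random transmission probabilities for an ensemble of stacks
# (stand-ins for the values produced by the paper's recurrence relation).
T = rng.uniform(0.01, 1.0, size=100_000)

avg_T = T.mean()                  # average transmission probability
typ_T = np.exp(np.log(T).mean())  # "typical" value from averaging log T

# Jensen's inequality guarantees avg_T >= typ_T: the two averaging
# conventions answer different questions about the ensemble.
print(avg_T, typ_T)
```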
Average sampling theorems for shift invariant subspaces
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.
Testing linearity against nonlinear moving average models
de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.
1998-01-01
Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small-sample performance of the proposed tests is evaluated in a Monte Carlo study and compared
Averaging Einstein's equations : The linearized case
Stoeger, William R.; Helmi, Amina; Torres, Diego F.
2007-01-01
We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situations.
Average excitation potentials of air and aluminium
Bogaardt, M.; Koudijs, B.
1951-01-01
By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for Iair and IAl have been obtained. It is shown that in representing range/energy relations by means of Bethe's well known formula, I has to be taken as a continuously changing function
New results on averaging theory and applications
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations, to find the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function in it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we do two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
Analogue Divider by Averaging a Triangular Wave
Selvam, Krishnagiri Chinnathambi
2017-08-01
A new analogue divider circuit based on averaging a triangular wave using operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero voltage level up towards the positive power supply voltage level. Its positive portion is obtained by a positive rectifier and its average value is obtained by a low-pass filter. The same triangular waveform is shifted from the zero voltage level down towards the negative power supply voltage level. Its negative portion is obtained by a negative rectifier and its average value is obtained by another low-pass filter. Both averaged voltages are combined in a summing amplifier and the summed voltage is given to an op-amp as the negative input. This op-amp is configured to work in a closed negative-feedback loop. The op-amp output is the divider output.
Calculating ensemble averaged descriptions of protein rigidity without sampling.
Directory of Open Access Journals (Sweden)
Luis C González
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean-field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble-averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.
Mohan, Devinder; Gupta, Raj Kumar
2015-01-01
High-yielding genotypes differing in high-molecular-weight glutenin subunits at the Glu-D1 locus in the national wheat programme of India were examined for bread loaf volume, gluten and protein contents, gluten strength, gluten index and protein-gluten ratio. The number of superior bread-quality genotypes in four agro-climatically diverse zones of the Indian plains was comparable in both categories of wheat, i.e., 5 + 10 and 2 + 12. There was no difference in average bread loaf volume and grain protein c...
Average-passage flow model development
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW
Institute of Scientific and Technical Information of China (English)
Haiying WANG; Xinyu ZHANG; Guohua ZOU
2009-01-01
In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to the underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been a rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low-uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed "electron fraction," which predicts backscatter yield better than mass fraction averaging. PMID:27446752
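One common way to define such an "electron fraction" — shown here as a hedged sketch, not necessarily the exact formulation of the paper — is the fraction of electrons each element contributes: its mass fraction times Z/A, normalized over all elements.

```python
# Electron-fraction averaging sketch (assumed definition: mass fraction
# times Z/A, normalized). The backscatter yield of a compound would then
# be the electron-fraction-weighted average of elemental yields.
def electron_fractions(elements):
    """elements: list of (mass_fraction, Z, A) tuples for one compound."""
    raw = [c * z / a for c, z, a in elements]
    total = sum(raw)
    return [r / total for r in raw]

# MgO: approximate mass fractions from molar masses 24.305 (Mg), 15.999 (O).
mgo = [(0.603, 12, 24.305), (0.397, 8, 15.999)]
fracs = electron_fractions(mgo)
print(fracs)  # fractions sum to 1
```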
Experimental Demonstration of Squeezed State Quantum Averaging
Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
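The harmonic mean at the heart of this averaging protocol is easy to state, and it never exceeds the arithmetic mean, which is why it can outperform the arithmetic-mean strategy for fluctuating variances. A small numeric sketch (values illustrative):

```python
def harmonic_mean(xs):
    """Harmonic mean of positive values (here: quadrature variances)."""
    return len(xs) / sum(1.0 / x for x in xs)

v = [2.0, 4.0]                 # two fluctuating squeezed variances
h = harmonic_mean(v)           # 2 / (1/2 + 1/4) = 8/3
a = sum(v) / len(v)            # arithmetic mean = 3.0
print(h, a)                    # harmonic mean <= arithmetic mean, always
```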
The Average Lower Connectivity of Graphs
Directory of Open Access Journals (Sweden)
Ersin Aslan
2014-01-01
For a vertex v of a graph G, the lower connectivity, denoted by s_v(G), is the smallest number of vertices that contains v and those vertices whose deletion from G produces a disconnected or a trivial graph. The average lower connectivity, denoted by κ_av(G), is the value (∑_{v∈V(G)} s_v(G)) / |V(G)|. It is shown that this parameter can be used to measure the vulnerability of networks. This paper contains results on bounds for the average lower connectivity and obtains the average lower connectivity of some graphs.
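The definition above can be computed by brute force on small graphs. A sketch (my own reading of the definition: s_v(G) is the size of a smallest set containing v whose removal leaves G disconnected or with at most one vertex):

```python
from itertools import combinations

def is_connected(vs, edges):
    """Depth-first connectivity check on the subgraph induced by vs."""
    vs = set(vs)
    es = [(a, b) for a, b in edges if a in vs and b in vs]
    if len(vs) <= 1:
        return True
    seen, stack = set(), [next(iter(vs))]
    while stack:
        u = stack.pop()
        seen.add(u)
        for a, b in es:
            if a == u and b not in seen:
                stack.append(b)
            if b == u and a not in seen:
                stack.append(a)
    return seen == vs

def lower_connectivity(v, vs, edges):
    """s_v(G): smallest |S| with v in S such that G - S is disconnected
    or trivial (<= 1 vertex). Brute force over candidate sets S."""
    others = [u for u in vs if u != v]
    for k in range(1, len(vs) + 1):
        for extra in combinations(others, k - 1):
            rest = set(vs) - (set(extra) | {v})
            if len(rest) <= 1 or not is_connected(rest, edges):
                return k
    return len(vs)

# Path graph 0-1-2: s_0 = s_2 = 2, s_1 = 1, so kappa_av = 5/3.
vs, edges = [0, 1, 2], [(0, 1), (1, 2)]
kappa_av = sum(lower_connectivity(v, vs, edges) for v in vs) / len(vs)
print(kappa_av)
```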
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Changing mortality and average cohort life expectancy
DEFF Research Database (Denmark)
Schoen, Robert; Canudas-Romo, Vladimir
2005-01-01
...of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...
Discrete Averaging Relations for Micro to Macro Transition
Liu, Chenchen; Reina, Celia
2016-05-01
The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.
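The averaging relations this record relies on can be stated compactly in standard continuum-mechanics notation (a generic statement of Hill's relations and the Hill-Mandel condition, not quoted from the paper):

```latex
% Volume averages of stress and strain over an RVE occupying \Omega:
\bar{\boldsymbol{\sigma}} = \frac{1}{|\Omega|} \int_{\Omega} \boldsymbol{\sigma}\,\mathrm{d}V,
\qquad
\bar{\boldsymbol{\varepsilon}} = \frac{1}{|\Omega|} \int_{\Omega} \boldsymbol{\varepsilon}\,\mathrm{d}V.
% Hill--Mandel macrohomogeneity: the volume average of the microscopic
% virtual work equals the virtual work of the averages:
\frac{1}{|\Omega|} \int_{\Omega} \boldsymbol{\sigma} : \delta\boldsymbol{\varepsilon}\,\mathrm{d}V
= \bar{\boldsymbol{\sigma}} : \delta\bar{\boldsymbol{\varepsilon}}.
```

The paper's contribution is showing that these identities hold exactly at the discrete finite element level for the three classical boundary-condition types.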
Non-homogeneous fractal hierarchical weighted networks.
Dong, Yujuan; Dai, Meifeng; Ye, Dandan
2015-01-01
A model of fractal hierarchical structures that share the property of non-homogeneous weighted networks is introduced. These networks can be completely and analytically characterized in terms of the involved parameters, i.e., the size of the original graph Nk and the non-homogeneous weight scaling factors r1, r2, ..., rM. We also study the average weighted shortest path (AWSP), the average degree and the average node strength on the non-homogeneous hierarchical weighted networks. Moreover, the AWSP is calculated rigorously. We show that the AWSP depends on the number of copies and the sum of all non-homogeneous weight scaling factors in the infinite network order limit.
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Appeals Council Requests - Average Processing Time
Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...
Average Vegetation Growth 1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1997 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1992 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1995 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2000 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1998 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Vegetation Growth 1996 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2005 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1993 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Spacetime Average Density (SAD) Cosmological Measures
Page, Don N
2014-01-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Monthly snow/ice averages (ISCCP)
National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...
Average Annual Precipitation (PRISM model) 1961 - 1990
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...
Symmetric Euler orientation representations for orientational averaging.
Mayerhöfer, Thomas G
2005-09-01
A new kind of orientation representation called the symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be partly overcome by the introduction of a weighting factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference remains even if a two-axes-type Euler orientation representation is used, despite the use of a weighting factor. In contrast, this problem does not occur in principle if a symmetric Euler orientation representation is used, and the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighting factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.
Multi-objective calibration of forecast ensembles using Bayesian model averaging
Vrugt, J.A.; Clark, M.P.; Diks, C.G.H.; Duan, Q.; Robinson, B.A.
2006-01-01
Bayesian Model Averaging (BMA) has recently been proposed as a method for statistical postprocessing of forecast ensembles from numerical weather prediction models. The BMA predictive probability density function (PDF) of any weather quantity of interest is a weighted average of PDFs centered on the
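The BMA predictive PDF described here is a weighted mixture of member densities, p(y) = Σₖ wₖ gₖ(y), with weights summing to one. A minimal sketch with Gaussian member densities (the forecast values, spreads, and weights are illustrative, not fitted):

```python
import math

def bma_pdf(y, weights, means, sigmas):
    """BMA predictive density: weighted mixture of Gaussians centered on
    the (bias-corrected) member forecasts."""
    def normal(y, m, s):
        return math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * normal(y, m, s) for w, m, s in zip(weights, means, sigmas))

# Three ensemble members forecasting temperature; weights sum to one.
w, mu, sig = [0.5, 0.3, 0.2], [20.0, 22.0, 19.0], [1.5, 1.5, 1.5]
bma_mean = sum(wi * mi for wi, mi in zip(w, mu))  # predictive mean = 20.4
print(bma_mean, bma_pdf(bma_mean, w, mu, sig))
```

In practice the weights and spreads are estimated from training data (e.g. by maximum likelihood via EM), which is the part the cited work focuses on.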
González-Benito, J; Castillo, E; Cruz-Caldito, J F
2015-07-28
Nanothermal-expansion of poly(ethylene-co-vinyl acetate), EVA, and poly(methyl methacrylate), PMMA, in the form of films was measured to finally obtain linear coefficients of thermal expansion, CTEs. The simple deflection of a cantilever in an atomic force microscope, AFM, was used to monitor thermal expansions at the nanoscale. The influences of (a) the structure of EVA in terms of its composition (vinyl acetate content) and (b) the size of PMMA chains in terms of the molecular weight were studied. To this end, several polymer samples were used: EVA copolymers with different weight percentages of the vinyl acetate comonomer (12, 18, 25 and 40%) and PMMA polymers with different weight-average molecular weights (33.9, 64.8, 75.6 and 360.0 kg mol(-1)). The dependence of the CTEs on the vinyl acetate weight fraction of EVA and on the molecular weight of PMMA was analyzed and finally explained using new, intuitive and very simple models based on the rule of mixtures. In the case of EVA copolymers a simple equation considering the weighted contributions of each comonomer was enough to estimate the final CTE above the glass transition temperature. On the other hand, when the molecular weight dependence is considered, the free volume concept was used as a novelty. The expansion of PMMA, at least at the nanoscale, was well and easily described by the sum of the weighted contributions of the occupied and free volumes, respectively.
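The rule-of-mixtures idea for the copolymer case reduces to a weighted sum of comonomer contributions. A hedged sketch (the comonomer CTE values below are invented placeholders, not the paper's fitted numbers):

```python
# Rule-of-mixtures sketch for a copolymer CTE: the copolymer CTE is taken
# as the weight-fraction-weighted sum of the two comonomer CTEs.
def mixture_cte(w_va, cte_va, cte_e):
    """w_va: vinyl acetate weight fraction; CTEs in 1/K (illustrative)."""
    return w_va * cte_va + (1.0 - w_va) * cte_e

cte_va, cte_e = 7.0e-4, 2.0e-4   # hypothetical comonomer CTEs
for w in (0.12, 0.18, 0.25, 0.40):
    print(w, mixture_cte(w, cte_va, cte_e))
```

The free-volume variant for PMMA replaces the comonomer weights with the weighted contributions of occupied and free volume, but follows the same additive structure.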
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
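Of the averaging methods listed, the Granger-Ramanathan variants are particularly simple: variant A, as commonly described, obtains the member weights by unconstrained least-squares regression of observed flows on the member simulations. A sketch on synthetic data (the setup is illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(2)

def granger_ramanathan_a(simulations, observed):
    """Granger-Ramanathan variant A (as commonly described): unconstrained
    least-squares weights regressing observations on member simulations.
    simulations: (n_times, n_members); observed: (n_times,)."""
    w, *_ = np.linalg.lstsq(simulations, observed, rcond=None)
    return w

# Synthetic check: observations built as 0.3*m1 + 0.7*m2 plus small noise,
# so the recovered weights should be close to [0.3, 0.7].
m = rng.uniform(1.0, 10.0, size=(500, 2))
obs = m @ np.array([0.3, 0.7]) + rng.normal(0.0, 0.01, 500)
w = granger_ramanathan_a(m, obs)
print(w)
```

Variants B and C modify this regression (e.g. adding a constant or constraining the weights), trading a little accuracy for robustness.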
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
College Freshman Stress and Weight Change: Differences by Gender
Economos, Christina D.; Hildebrandt, M. Lise; Hyatt, Raymond R.
2008-01-01
Objectives: To examine how stress and health-related behaviors affect freshman weight change by gender. Methods: Three hundred ninety-six freshmen completed a 40-item health behavior survey and height and weight were collected at baseline and follow-up. Results: Average weight change was 5.04 lbs for males, 5.49 lbs for females. Weight gain was…
Enhancing Trust in the Smart Grid by Applying a Modified Exponentially Weighted Averages Algorithm
2012-06-01
A robust Phase I exponentially weighted moving average chart for dispersion
Zwetsloot, I.M.; Schoonhoven, M.; Does, R.J.M.M.
2015-01-01
A Phase I estimator of the dispersion should be efficient under in-control data and robust against contaminations. Most estimation methods proposed in the literature are either efficient or robust against either sustained shifts or scattered disturbances. In this article, we propose a new estimation
Mixed exponentially weighted moving average-cumulative sum charts for process monitoring
Abbas, N.; Riaz, M.; Does, R.J.M.M.
2013-01-01
The control chart is a very popular tool of statistical process control. It is used to detect the existence of special-cause variation so that it can be removed and the process brought into statistical control. Shewhart-type control charts are sensitive to large disturbances in the process, whereas
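The two ingredients combined in charts of this kind are the EWMA recursion z_t = λx_t + (1−λ)z_{t−1} and the one-sided CUSUM recursion C_t = max(0, C_{t−1} + (s_t − target − k)). A minimal sketch applying a CUSUM to an EWMA-smoothed statistic (the parameter values and the shift scenario are illustrative assumptions, not the authors' design):

```python
import numpy as np

def ewma(x, lam=0.2, z0=0.0):
    """Exponentially weighted moving average: z_t = lam*x_t + (1-lam)*z_{t-1}."""
    z = np.empty(len(x))
    prev = z0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    return z

def cusum(stat, target=0.0, k=0.5):
    """One-sided upper CUSUM of a monitoring statistic with reference value k."""
    c = np.empty(len(stat))
    prev = 0.0
    for i, s in enumerate(stat):
        prev = max(0.0, prev + (s - target - k))
        c[i] = prev
    return c

# simulated process: in control for 100 steps, then a 1-sigma mean shift
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.0, 1, 50)])
mc = cusum(ewma(x, lam=0.2))   # CUSUM applied to the EWMA statistic
```

The EWMA smooths out scattered noise while the CUSUM accumulates small sustained departures, which is the intuition behind mixing the two schemes.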
2012-02-14
... captured. To illustrate this point, some draw on the "speeding ticket" analogy, whereby a driver caught... issue address only certain types of comparisons in particular circumstances, such that a total... types of comparison methodologies that might be used to determine margins of dumping and antidumping...
A core-monitoring based methodology for predictions of graphite weight loss in AGR moderator bricks
Energy Technology Data Exchange (ETDEWEB)
McNally, K., E-mail: kevin.mcnally@hsl.gsi.gov.uk [Health and Safety Laboratory, Harpur Hill, Buxton, Derbyshire SK17 9JN (United Kingdom); Warren, N. [Health and Safety Laboratory, Harpur Hill, Buxton, Derbyshire SK17 9JN (United Kingdom); Fahad, M.; Hall, G.; Marsden, B.J. [Nuclear Graphite Research Group, School of MACE, University of Manchester, Manchester M13 9PL (United Kingdom)
2017-04-01
Highlights: • A statistically-based methodology for estimating graphite density is presented. • Graphite shrinkage is accounted for using a finite element model. • Differences in weight loss forecasts were found when compared to the existing model. - Abstract: Physically based models, resolved using the finite element (FE) method are often used to model changes in dimensions and the associated stress fields of graphite moderator bricks within a reactor. These models require inputs that describe the loading conditions (temperature, fluence and weight loss ‘field variables’), and coded relationships describing the behaviour of graphite under these conditions. The weight loss field variables are calculated using a reactor chemistry/physics code FEAT DIFFUSE. In this work the authors consider an alternative data source of weight loss: that from a longitudinal dataset of density measurements made on small samples trepanned from operating reactors during statutory outages. A nonlinear mixed-effect model is presented for modelling the age and depth-related trends in density. A correction that accounts for irradiation-induced dimensional changes (axial and radial shrinkage) is subsequently applied. The authors compare weight loss forecasts made using FEAT DIFFUSE with those based on an alternative statistical model for a layer four moderator brick for the Hinkley Point B, Reactor 3. The authors compare the two approaches for the weight loss distribution through the brick with a particular focus on the interstitial keyway, and for the average (over the volume of the brick) weight loss.
Averaged controllability of parameter dependent conservative semigroups
Lohéac, Jérôme; Zuazua, Enrique
2017-02-01
We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average
U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...
Fastest Distributed Consensus Averaging Problem on Perfect and Complete n-ary Tree networks
Jafarizadeh, Saber
2010-01-01
Solving the fastest distributed consensus averaging problem (i.e., finding weights on the edges to minimize the second-largest eigenvalue modulus of the weight matrix) over networks with different topologies is one of the primary areas of research in the field of sensor networks, and one of the well-known network families in this area is the tree network. In this work we present an analytical solution of the fastest distributed consensus averaging problem, by means of stratification and semidefinite programming, for two particular types of tree networks, namely perfect and complete n-ary tree networks. Our method is based on the convexity of the fastest distributed consensus averaging problem and on inductive comparison of the characteristic polynomials initiated by slackness conditions in order to find the optimal weights. The optimal weights for the edges of certain types of branches, such as perfect and complete n-ary tree branches, are also determined independently of the rest of the network.
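The objective minimized in this line of work, the second-largest eigenvalue modulus (SLEM) of the weight matrix, governs the worst-case convergence rate of the iteration x(t+1) = W x(t). A minimal sketch on a 4-node star graph (a depth-one tree; the constant edge weight α = 0.25 is an illustrative choice, not the paper's optimum):

```python
import numpy as np

def slem(W):
    """Second-largest eigenvalue modulus of a symmetric weight matrix:
    the quantity minimized in fastest distributed consensus averaging."""
    ev = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return ev[1]

def consensus_matrix(L, alpha):
    """Constant edge-weight consensus matrix W = I - alpha * L, L a graph Laplacian."""
    return np.eye(L.shape[0]) - alpha * L

# Laplacian of the star graph on 4 nodes (hub = node 0)
L = np.array([[ 3, -1, -1, -1],
              [-1,  1,  0,  0],
              [-1,  0,  1,  0],
              [-1,  0,  0,  1]], float)
W = consensus_matrix(L, alpha=0.25)

# iterating x <- W x drives every node to the average of the initial values
x = np.array([4.0, 0.0, 2.0, 6.0])
for _ in range(60):
    x = W @ x
```

Here the Laplacian eigenvalues are 0, 1, 1, 4, so W has eigenvalues 1, 0.75, 0.75, 0 and SLEM 0.75; the per-step contraction of disagreement is exactly that factor, which is why minimizing the SLEM over edge weights gives the fastest averaging.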
Cosmic structure, averaging and dark energy
Wiltshire, David L
2013-01-01
These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...
Books average previous decade of economic misery.
Directory of Open Access Journals (Sweden)
R Alexander Bentley
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and do not permit analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
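The economic side of this correlation is simple arithmetic: the misery index is inflation plus unemployment, and the best fit was against an 11-year trailing average of that series. A minimal sketch of both quantities on made-up numbers (the five-year toy series below is illustrative, not the paper's data):

```python
import numpy as np

def misery_index(inflation, unemployment):
    """Economic misery index: inflation rate plus unemployment rate (percent)."""
    return np.asarray(inflation, float) + np.asarray(unemployment, float)

def trailing_mean(series, window):
    """Moving average over the previous `window` values; the paper's peak
    goodness of fit used an 11-year trailing window."""
    s = np.asarray(series, float)
    return np.array([s[i - window:i].mean() for i in range(window, len(s) + 1)])

misery = misery_index([2, 3, 4, 5, 6], [5, 5, 6, 6, 7])   # -> [7, 8, 10, 11, 13]
avg = trailing_mean(misery, 3)                            # 3-year trailing means
```

With real data one would then correlate `avg` against the literary misery index for the corresponding years.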
High Average Power Yb:YAG Laser
Energy Technology Data Exchange (ETDEWEB)
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.
Lent, Michelle R; Napolitano, Melissa A; Wood, G Craig; Argyropoulos, George; Gerhard, Glenn S; Hayes, Sharon; Foster, Gary D; Collins, Charlotte A; Still, Christopher D
2014-12-01
In this study, we examined the relationship between pre-operative internalized weight bias and 12-month post-operative weight loss in adult bariatric surgery patients. Bariatric surgery patients (n=170) from one urban and one rural medical center completed an internalized weight bias measure (the weight bias internalization scale, WBIS) and a depression survey (Beck depression inventory-II, BDI-II) before surgery, and provided consent to access their medical records. Participants (BMI=47.8 kg/m2, age=45.7 years) were mostly female (82.0 %), White (89.5 %), and underwent gastric bypass (83.6 %). The average WBIS score by item was 4.54 ± 1.3. Higher pre-operative WBIS scores were associated with diminished weight loss at 12 months after surgery (p=0.035). Pre-operative WBIS scores were positively associated with depressive symptoms (p<0.001). Greater internalized weight bias was associated with more depressive symptoms before surgery and less weight loss 1 year after surgery.
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
J M M Senovilla
2007-07-01
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS
Energy Technology Data Exchange (ETDEWEB)
K. L. Goluoglu
2000-06-09
The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate inference.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
Directory of Open Access Journals (Sweden)
G. H. de Rooij
2009-07-01
Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This creates a need for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
Large-volume liposuction and prevention of type 2 diabetes: a preliminary report.
Narsete, Thomas; Narsete, Michele; Buckspan, Randy; Ersek, Robert
2012-04-01
This report presents a preliminary study investigating the effects of large-volume liposuction on the parameters that determine type 2 diabetes. The study enrolled 31 patients with a body mass index (BMI) exceeding 30 kg/m2 over a 1-year period. All the liposuction procedures were performed with the patient under local anesthesia using ketamine/valium sedation. Pre- and postoperative blood pressure, fasting glucose, glycosylated hemoglobin (HbA1C), weight, and BMI were evaluated for 16 of the 30 patients who returned for a follow-up visit 3 to 12 months postoperatively. The average aspirate was 8,455 ml without dermolipectomy and 5,795 ml with dermolipectomy. The data reveal a trend of improvement in blood sugar levels associated with the patients' weight loss. The average blood sugar level dropped 18% in our return patients, and the average weight loss was 9.2%. The average drop in BMI was 6.2%, and HbA1C showed a decrease of 2.3%. The patients with the best weight loss had the best reduction in blood sugar level and blood pressure. No transfers to the hospital and no thromboembolism occurred for any of the 31 patients. One dehiscence, two wound infections, and three seromas were reported. The authors hypothesize that large-volume liposuction in their series may have motivated some to diet, which could be explored in a larger series with control groups. Liposuction alone did not improve obesity but helped to motivate some of the patients to lose weight. These patients had the best results.
On averaging methods for partial differential equations
Verhulst, F.
2001-01-01
The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections … and … contain important material which
Discontinuities and hysteresis in quantized average consensus
Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo
2011-01-01
We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of ‘‘practical consensus’’. To cope with undesired chattering
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation...
A Functional Measurement Study on Averaging Numerosity
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Quantum Averaging of Squeezed States of Light
DEFF Research Database (Denmark)
Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
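The MA rules studied in this literature typically compare a short-window to a long-window moving average and go long when the short one is above. A minimal sketch of the classic crossover rule (the window lengths and the toy price path are illustrative assumptions):

```python
import numpy as np

def sma(prices, window):
    """Simple moving average of the most recent `window` prices."""
    p = np.asarray(prices, float)
    return np.array([p[i - window + 1:i + 1].mean()
                     for i in range(window - 1, len(p))])

def ma_signal(prices, short=2, long=4):
    """+1 (long position) when the short MA sits above the long MA, else -1:
    the basic technical trading rule examined in these models."""
    s, l = sma(prices, short), sma(prices, long)
    s = s[len(s) - len(l):]          # align short-MA values to long-MA dates
    return np.where(s > l, 1, -1)

prices = [10, 11, 12, 13, 12, 11, 10, 9]   # rise then fall
sig = ma_signal(prices)                     # flips from +1 to -1 at the turn
```

In heterogeneous-agent market models, a fraction of traders acting on `sig` while others act on fundamentals is what generates the feedback dynamics these papers analyse.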
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequences.
High average-power induction linacs
Energy Technology Data Exchange (ETDEWEB)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.
1989-03-15
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to >100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
High Average Power Optical FEL Amplifiers
Ben-Zvi, I; Litvinenko, V
2005-01-01
Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW-class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Li...
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...
Full averaging of fuzzy impulsive differential inclusions
Directory of Open Access Journals (Sweden)
Natalia V. Skripnik
2010-09-01
In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend the similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).
Materials for high average power lasers
Energy Technology Data Exchange (ETDEWEB)
Marion, J.E.; Pertica, A.J.
1989-01-01
Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.
A dynamic analysis of moving average rules
C. Chiarella; X.Z. He; C.H. Hommes
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type used
New Nordic diet versus average Danish diet
DEFF Research Database (Denmark)
Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco
2016-01-01
A previous study has shown effects of the New Nordic Diet (NND) to stimulate weight loss and lower systolic and diastolic blood pressure in obese Danish women and men in a randomized, controlled dietary intervention study. This work demonstrates long-term metabolic effects of the NND as compared… metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid… diets high in fish, vegetables, fruit, and wholegrain facilitated weight loss and improved insulin sensitivity by increasing ketosis and gluconeogenesis in the fasting state.
A Predictive Likelihood Approach to Bayesian Averaging
Directory of Open Access Journals (Sweden)
Tomáš Jeřábek
2015-01-01
Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data for the domestic economy and for the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracy of the models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple benchmark combination scheme. The results show that optimally combined densities are comparable to the best individual models.
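A minimal sketch of one of the weighting schemes mentioned, combining forecast densities in proportion to exponentiated log predictive likelihoods (function names and the numbers in the usage are ours, not the paper's):

```python
import numpy as np

def predictive_likelihood_weights(log_pred_liks):
    """Model weights proportional to exp(log predictive likelihood),
    normalized to sum to one."""
    l = np.asarray(log_pred_liks, dtype=float)
    w = np.exp(l - l.max())   # subtract the max for numerical stability
    return w / w.sum()

def combine_densities(weights, densities, x):
    """Evaluate the weighted mixture of forecast densities at point x."""
    return sum(w * d(x) for w, d in zip(weights, densities))
```

With equal log predictive likelihoods this reduces to the equal-weight benchmark scheme.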
The Effect of Sunspot Weighting
Svalgaard, Leif; Cortesi, Sergio
2015-01-01
Waldmeier in 1947 introduced a weighting (on a scale from 1 to 5) of the sunspot count made at Zurich and its auxiliary station Locarno, whereby larger spots were counted more than once. This counting method inflates the relative sunspot number over that which corresponds to the scale set by Wolfer and Brunner. Svalgaard re-counted some 60,000 sunspots on drawings from the reference station Locarno and determined that the number of sunspots reported were 'over counted' by 44% on average, leading to an inflation (measured by a weight factor) in excess of 1.2 for high solar activity. In a double-blind parallel counting by the Locarno observer Cagnotti, we determined that Svalgaard's count closely matches that of Cagnotti's, allowing us to determine the daily weight factor since 2003 (and sporadically before). We find that a simple empirical equation fits the observed weight factors well, and use that fit to estimate the weight factor for each month back to the introduction of weighting in 1947 and thus to be ab...
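The Waldmeier weighting and the resulting inflation factor can be illustrated with a toy computation (the size weights below are invented, not Locarno data):

```python
def waldmeier_counts(spot_weights):
    """spot_weights: Waldmeier size weights (1..5), one per observed spot.
    Returns (unweighted count, weighted count, inflation factor).
    A factor above ~1.2 would correspond to the high-activity over-count
    described in the abstract."""
    n = len(spot_weights)          # each spot counted once
    w = sum(spot_weights)          # larger spots counted more than once
    return n, w, w / n
```

For example, three spots with weights 1, 1 and 3 give an unweighted count of 3, a weighted count of 5, and an inflation factor of about 1.67.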
Verhoef, Sanne P M; Camps, Stefan G J A; Gonnissen, Hanne K J; Westerterp, Klaas R; Westerterp-Plantenga, Margriet S
2013-07-01
An inverse relation between sleep duration and body mass index (BMI) has been shown. We assessed the relation between changes in sleep duration and changes in body weight and body composition during weight loss. A total of 98 healthy subjects (25 men), aged 20-50 y and with BMI (in kg/m(2)) from 28 to 35, followed a 2-mo very-low-energy diet that was followed by a 10-mo period of weight maintenance. Body weight, body composition (measured by using deuterium dilution and air-displacement plethysmography), eating behavior (measured by using a 3-factor eating questionnaire), physical activity (measured by using the validated Baecke's questionnaire), and sleep (estimated by using a questionnaire with the Epworth Sleepiness Scale) were assessed before and immediately after weight loss and 3- and 10-mo follow-ups. The average weight loss was 10% after 2 mo of dieting and 9% and 6% after 3- and 10-mo follow-ups, respectively. Daytime sleepiness and time to fall asleep decreased during weight loss. Short (≤7 h) and average (>7 to weight loss. This change in sleep duration was concomitantly negatively correlated with the change in BMI during weight loss and after the 3-mo follow-up and with the change in fat mass after the 3-mo follow-up. Sleep duration benefits from weight loss or vice versa. Successful weight loss, loss of body fat, and 3-mo weight maintenance in short and average sleepers are underscored by an increase in sleep duration or vice versa. This trial was registered at clinicaltrials.gov as NCT01015508.
7 CFR 981.61 - Redetermination of kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of...
7 CFR 981.60 - Determination of kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
An evaluation of prior influence on the predictive ability of Bayesian model averaging.
St-Louis, Véronique; Clayton, Murray K; Pidgeon, Anna M; Radeloff, Volker C
2012-03-01
Model averaging is gaining popularity among ecologists for making inference and predictions. Methods for combining models include Bayesian model averaging (BMA) and Akaike's Information Criterion (AIC) model averaging. BMA can be implemented with different prior model weights, including the Kullback-Leibler prior associated with AIC model averaging, but it is unclear how the prior model weight affects model results in a predictive context. Here, we implemented BMA using the Bayesian Information Criterion (BIC) approximation to Bayes factors for building predictive models of bird abundance and occurrence in the Chihuahuan Desert of New Mexico. We examined how model predictive ability differed across four prior model weights, and how averaged coefficient estimates, standard errors and coefficients' posterior probabilities varied for 16 bird species. We also compared the predictive ability of BMA models to a best single-model approach. Overall, Occam's prior of parsimony provided the best predictive models; the Kullback-Leibler prior, in contrast, generally favored complex models of lower predictive ability. BMA performed better than a best single-model approach independently of the prior model weight for 6 out of 16 species. For 6 other species, the choice of the prior model weight affected whether BMA was better than the best single-model approach. Our results demonstrate that parsimonious priors may be preferable to priors that favor complexity for making predictions. The approach we present has direct applications in ecology for better predicting patterns of species' abundance and occurrence.
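The BIC approximation to posterior model weights that underlies this kind of BMA can be sketched as follows (uniform prior by default; the BIC values in the example are illustrative):

```python
import math

def bma_weights(bics, log_prior_weights=None):
    """Posterior model probabilities from BIC values via the standard
    approximation  p(M_k | y) ~ prior_k * exp(-BIC_k / 2).
    A uniform prior over models is assumed when none is given."""
    if log_prior_weights is None:
        log_prior_weights = [0.0] * len(bics)
    logs = [lp - b / 2.0 for b, lp in zip(bics, log_prior_weights)]
    m = max(logs)                          # stabilize the exponentials
    ws = [math.exp(l - m) for l in logs]
    s = sum(ws)
    return [w / s for w in ws]
```

Averaged predictions are then weighted sums of the individual models' predictions using these probabilities; changing `log_prior_weights` is how the different prior model weights compared in the study enter.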
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependences between factors), which differ in each economic sector and give rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis takes into account data on the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity into the factors affecting it is conducted by means of the u-substitution method.
Time-average dynamic speckle interferometry
Vladimirov, A. P.
2014-05-01
For the study of microscopic processes occurring at the structural level in solids and in thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with a time-averaging method in dynamic speckle interferometry of microscopic processes that eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method is given for a thin phase object and for a reflecting object. Results are presented from an experiment on the high-cycle fatigue of steel and from an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows real-time visualization of the accumulation of fatigue damage and reliable estimation of the activity of cells with and without viruses.
Average Annual Rainfall over the Globe
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
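The truncated figure is presumably the total solar power intercepted by the Earth's cross-section; a back-of-the-envelope check, using the standard solar constant and mean Earth radius as assumed inputs, gives a number of that magnitude:

```python
import math

# Power intercepted by the Earth's cross-sectional disk: solar constant x pi R^2.
# 1361 W/m^2 and 6.371e6 m are standard reference values, assumed here.
S = 1361.0      # solar constant, W/m^2
R = 6.371e6     # mean Earth radius, m
P = S * math.pi * R**2
print(f"{P:.2e} W")   # on the order of 1.7e17 W
```

This intercepted power is what drives the evaporation and redistribution of water described in the abstract.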
Endogenous average cost based access pricing
Fjell, Kenneth; Foros, Øystein; Pal, Debashis
2006-01-01
We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...
The Ghirlanda-Guerra identities without averaging
Chatterjee, Sourav
2009-01-01
The Ghirlanda-Guerra identities are one of the most mysterious features of spin glasses. We prove the GG identities in a large class of models that includes the Edwards-Anderson model, the random field Ising model, and the Sherrington-Kirkpatrick model in the presence of a random external field. Previously, the GG identities were rigorously proved only `on average' over a range of temperatures or under small perturbations.
Average Light Intensity Inside a Photobioreactor
Directory of Open Access Journals (Sweden)
Herby Jean
2011-01-01
Full Text Available For energy production, microalgae are one of the few alternatives with high potential. Similar to plants, algae require energy acquired from light sources to grow. This project uses calculus to determine the light intensity inside of a photobioreactor filled with algae. Under preset conditions along with estimated values, we applied Lambert-Beer's law to formulate an equation to calculate how much light intensity escapes a photobioreactor and determine the average light intensity that was present inside the reactor.
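The average-intensity integral implied by Lambert-Beer attenuation has a closed form; a small sketch, with illustrative inputs rather than the paper's preset conditions:

```python
import math

def average_intensity(I0, k, L):
    """Average of I(x) = I0 * exp(-k*x) over depth 0..L (Lambert-Beer law):
    (1/L) * integral_0^L I0 e^{-kx} dx = I0 * (1 - e^{-kL}) / (kL).
    I0: incident intensity, k: attenuation coefficient (1/m), L: depth (m)."""
    return I0 * (1.0 - math.exp(-k * L)) / (k * L)
```

As the algae culture densifies, k grows and the average interior intensity drops well below the incident value, which is the quantity of interest for reactor design.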
Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you ... caused by obesity. There are different types of weight loss surgery. They often limit the amount of food ...
Geomagnetic effects on the average surface temperature
Ballatore, P.
Several results have previously shown that solar activity can be related to cloudiness and to the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database), which represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
On Backus average for generally anisotropic layers
Bos, Len; Slawinski, Michael A; Stanoev, Theodore
2016-01-01
In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In over half-a-century since the publications of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...
A simple algorithm for averaging spike trains.
Julienne, Hannah; Houghton, Conor
2013-02-25
Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested for a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
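The map-average-map-back idea can be sketched as follows; the Gaussian kernel, the parameter values, and the greedy subtraction rule are our simplifications, not necessarily the paper's exact algorithm:

```python
import numpy as np

def central_spike_train(trains, t_max, sigma=0.01, dt=0.001):
    """Sketch: smooth each spike train with a Gaussian kernel to get a
    function, average the functions across trials, then greedily place
    spikes at successive maxima of the average, subtracting one kernel
    after each placement."""
    t = np.arange(0.0, t_max, dt)
    kernel = lambda c: np.exp(-0.5 * ((t - c) / sigma) ** 2)
    # average smoothed function over trials
    avg = np.mean([sum(kernel(s) for s in tr) for tr in trains], axis=0)
    # place as many spikes as the mean spike count, greedily
    n_spikes = int(round(np.mean([len(tr) for tr in trains])))
    out = []
    for _ in range(n_spikes):
        i = int(np.argmax(avg))
        out.append(t[i])
        avg -= kernel(t[i])
    return sorted(out)
```

For two nearly identical trials, the recovered central train places spikes roughly midway between the corresponding spike times.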
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE, to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Disk-averaged synthetic spectra of Mars
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Spatial averaging infiltration model for layered soil
Institute of Scientific and Technical Information of China (English)
HU HePing; YANG ZhiYong; TIAN FuQiang
2009-01-01
To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.
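For orientation, the classic (non-generalized) Green-Ampt potential rate and a Monte Carlo spatial average over a horizontal conductivity distribution can be sketched as below. Note that in this simplified form the rate is linear in K, so the average coincides with the rate at the mean K; the over- and underestimation effects reported in the abstract arise from the full nonlinear ponding dynamics of the SAI model.

```python
def green_ampt_rate(K, psi, d_theta, F):
    """Classic Green-Ampt potential infiltration rate
    f = K * (1 + psi * d_theta / F), with K hydraulic conductivity,
    psi wetting-front suction head, d_theta moisture deficit, and F
    cumulative infiltration. This is not the paper's generalized form."""
    return K * (1.0 + psi * d_theta / F)

def spatially_averaged_rate(K_samples, psi, d_theta, F):
    """Monte Carlo average over a horizontal distribution of K samples,
    the averaging idea the SAI model treats analytically."""
    rates = [green_ampt_rate(K, psi, d_theta, F) for K in K_samples]
    return sum(rates) / len(rates)
```

Replacing the point value K with samples from a probability distribution is exactly the horizontal-heterogeneity step; the layered-soil (vertical) step would swap parameters layer by layer as the wetting front advances.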
Disk-averaged synthetic spectra of Mars
Tinetti, G; Fong, W; Meadows, V S; Snively, H; Velusamy, T; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Tinetti, Giovanna; Velusamy, Thangasamy
2004-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk-averaged synthetic spectra, light-cur...
Exponential reduction of finite volume effects with twisted boundary conditions
Cherman, Aleksey; Wagman, Michael L; Yaffe, Laurence G
2016-01-01
Flavor-twisted boundary conditions can be used for exponential reduction of finite volume artifacts in flavor-averaged observables in lattice QCD calculations with $SU(N_f)$ light quark flavor symmetry. Finite volume artifact reduction arises from destructive interference effects in a manner closely related to the phase averaging which leads to large $N_c$ volume independence. With a particular choice of flavor-twisted boundary conditions, finite volume artifacts for flavor-singlet observables in a hypercubic spacetime volume are reduced to the size of finite volume artifacts in a spacetime volume with periodic boundary conditions that is four times larger.
Factors influencing weight gain after renal transplantation.
Johnson, C P; Gallagher-Lepak, S; Zhu, Y R; Porth, C; Kelber, S; Roza, A M; Adams, M B
1993-10-01
Weight gain following renal transplantation occurs frequently but has not been investigated quantitatively. A retrospective chart review of 115 adult renal transplant recipients was used to describe patterns of weight gain during the first 5 years after transplantation. Only 23 subjects (21%) were overweight before their transplant. Sixty-six subjects (57%) experienced a weight gain of greater than or equal to 10%, and 49 subjects (43%) were overweight according to Metropolitan relative weight criteria at 1 year after transplantation. There was an inverse correlation between advancing age and weight gain, with the youngest patients (18-29 years) having a 13.3% weight gain and the oldest patients (age greater than 50 years) having the lowest gain of 8.3% at 1 year (P = 0.047). Black recipients experienced a greater weight gain than whites during the first posttransplant year (14.6% vs. 9.0%; P = 0.043), and maintained or increased this difference over the 5-year period. Men and women experienced comparable weight gain during the first year (9.5% vs. 12.1%), but women continued to gain weight throughout the 5-year study (21.0% total weight gain). The men remained stable after the first year (10.8% total weight gain). Recipients who experienced at least a 10% weight gain also increased their serum cholesterol (mean 261 vs. 219) and triglyceride (mean 277 vs. 159) levels significantly, whereas those without weight gain did not. Weight gain did not correlate with cumulative steroid dose, donor source (living-related versus cadaver), rejection history, pre-existing obesity, the number of months on dialysis before transplantation, or posttransplant renal function. Posttransplant weight gain is related mainly to demographic factors, not to treatment factors associated with the transplant. The average weight gain during the first year after renal transplantation is approximately 10%. This increased weight, coupled with changes in lipid metabolism, may be significant in
Determinants of weight regain after bariatric surgery.
Bastos, Emanuelle Cristina Lins; Barbosa, Emília Maria Wanderley Gusmão; Soriano, Graziele Moreira Silva; dos Santos, Ewerton Amorim; Vasconcelos, Sandra Mary Lima
2013-01-01
Bariatric surgery leads to an average loss of 60-75% of excess body weight, with maximum weight loss in the period between 18 and 24 months postoperatively. However, several studies show that weight is regained from two years after the operation. To identify the determinants of weight regain in post-bariatric surgery patients, a prospective cross-sectional study was conducted with 64 patients who underwent bariatric surgery, with postoperative time > 2 years, evaluated for significant weight regain. The variables analyzed were age, sex, education, socioeconomic status, work activity related to food, time after surgery, BMI, percentage of excess weight loss, weight gain, attendance at nutritional monitoring, lifestyle, eating habits, self-perception of appetite, daily use of nutritional supplements and quality of life. There were 57 (89%) women and 7 (11%) men, aged 41.76 ± 7.93 years and with a mean postoperative period of 53.4 ± 18.4 months. The average weight and BMI at surgery were respectively 127.48 ± 24.2 kg and 49.56 ± 6.7 kg/m². The minimum weight and BMI achieved were 73.0 ± 18.6 kg and 28.3 ± 5.5 kg/m², reached at 23.7 ± 12 months postoperatively. Significant weight regain occurred in 18 (28.1%) cases. A mean postoperative period of 66 ± 8.3 months and work activities related to food showed statistical significance (p=0.000 and p=0.003) for weight regain. Bariatric surgery promotes adequate reduction of excess body weight, with significant weight regain observed after five years; postoperative time and work activity related to food were determining factors for the occurrence of weight regain.
Gover, A. Rod; Waldron, Andrew
2017-09-01
We develop a universal distributional calculus for regulated volumes of metrics that are suitably singular along hypersurfaces. When the hypersurface is a conformal infinity we give simple integrated distribution expressions for the divergences and anomaly of the regulated volume functional valid for any choice of regulator. For closed hypersurfaces or conformally compact geometries, methods from a previously developed boundary calculus for conformally compact manifolds can be applied to give explicit holographic formulæ for the divergences and anomaly expressed as hypersurface integrals over local quantities (the method also extends to non-closed hypersurfaces). The resulting anomaly does not depend on any particular choice of regulator, while the regulator dependence of the divergences is precisely captured by these formulæ. Conformal hypersurface invariants can be studied by demanding that the singular metric obey, smoothly and formally to a suitable order, a Yamabe type problem with boundary data along the conformal infinity. We prove that the volume anomaly for these singular Yamabe solutions is a conformally invariant integral of a local Q-curvature that generalizes the Branson Q-curvature by including data of the embedding. In each dimension this canonically defines a higher dimensional generalization of the Willmore energy/rigid string action. Recently, Graham proved that the first variation of the volume anomaly recovers the density obstructing smooth solutions to this singular Yamabe problem; we give a new proof of this result employing our boundary calculus. Physical applications of our results include studies of quantum corrections to entanglement entropies.
A sixth order averaged vector field method
Li, Haochen; Wang, Yushun; Qin, Mengzhao
2014-01-01
In this paper, based on the theory of rooted trees and B-series, we give concrete formulas of the substitution law for trees of order ≤ 5. With the help of the new substitution law, we derive a B-series integrator extending the averaged vector field (AVF) method to high order. The new integrator turns out to be of order six and exactly preserves energy for Hamiltonian systems. Numerical experiments are presented to demonstrate the accuracy and the energy-preserving property of the s...
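The base (second-order) AVF scheme that the paper extends reads y_{n+1} = y_n + h ∫₀¹ f((1-ξ)y_n + ξy_{n+1}) dξ. As an illustration (a minimal sketch, not the authors' sixth-order integrator), here is the second-order AVF step for the pendulum H(q, p) = p²/2 − cos q, where the chord average of the force has a closed form, so the step conserves H exactly up to the iteration tolerance:

```python
import math

def avf_pendulum_step(q, p, h, tol=1e-14, max_iter=100):
    """One second-order AVF step for q' = p, p' = -sin(q).
    The averaged force over the chord from q to Q is exactly
    (cos(Q) - cos(q)) / (Q - q), so energy is conserved exactly."""
    Q, P = q, p
    for _ in range(max_iter):
        Q_new = q + h * (p + P) / 2.0          # p is linear: average = midpoint
        dq = Q_new - q
        if abs(dq) < 1e-12:                    # limiting value of the chord average
            force = -math.sin(q)
        else:
            force = (math.cos(Q_new) - math.cos(q)) / dq
        P_new = p + h * force
        if abs(Q_new - Q) + abs(P_new - P) < tol:
            return Q_new, P_new
        Q, P = Q_new, P_new                    # fixed-point iteration on the implicit step
    return Q, P

def energy(q, p):
    return p * p / 2.0 - math.cos(q)

q, p = 1.0, 0.5
e0 = energy(q, p)
for _ in range(1000):
    q, p = avf_pendulum_step(q, p, 0.1)
drift = abs(energy(q, p) - e0)
```

The exact energy conservation follows because (P² − p²)/2 telescopes against the chord average of cos; the sixth-order extension in the paper replaces the chord average with a B-series correction.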
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Sparsity averaging for radio-interferometric imaging
Carrillo, Rafael E; Wiaux, Yves
2014-01-01
We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.
Fluctuations of wavefunctions about their classical average
Energy Technology Data Exchange (ETDEWEB)
Benet, L [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Flores, J [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Hernandez-Saldana, H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Izrailev, F M [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Leyvraz, F [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Seligman, T H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)
2003-02-07
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
Grassmann Averages for Scalable Robust PCA
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), whic...
Weight loss, weight regain and bone health.
Pines, Amos
2012-08-01
The ideal body image for women these days is being slim but, in the real world, obesity becomes a major health problem even in the developing countries. Overweight, but also underweight, may have associated adverse outcomes in many bodily systems, including the bone. Only a few studies have investigated the consequences of intentional weight loss, then weight regain, on bone metabolism and bone density. It seems that the negative impact of bone loss is not reversed when weight partially rebounds following the end of active intervention programs. Thus the benefits and risks of any weight loss program should be addressed individually, and monitoring of bone parameters is recommended.
Source of non-Arrhenius average relaxation time in glass-forming liquids
DEFF Research Database (Denmark)
Dyre, Jeppe
1998-01-01
A major mystery of glass-forming liquids is the non-Arrhenius temperature dependence of the average relaxation time. This paper briefly reviews the classical phenomenological models for non-Arrhenius behavior, the free volume model and the entropy model, and the critiques raised against them. We...... are anharmonic, the non-Arrhenius temperature dependence of the average relaxation time is a consequence of the fact that the instantaneous shear modulus increases upon cooling....
Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis
Directory of Open Access Journals (Sweden)
LiMin Wang
2014-01-01
Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes will bring both considerable computational cost and negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
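The superparent-selection step rests on a maximum weighted spanning tree. A minimal Kruskal-style sketch (the edge weights below are made-up numbers for illustration, not mutual-information estimates from data):

```python
def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm on weights sorted in descending order:
    greedily keep the heaviest edge that does not close a cycle,
    tracked with a union-find structure with path halving."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                       # edge joins two components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# hypothetical (weight, node, node) triples over 4 attributes
edges = [(0.9, 0, 1), (0.4, 1, 2), (0.7, 0, 2), (0.2, 2, 3), (0.5, 1, 3)]
tree = maximum_spanning_tree(4, edges)
```

In the paper's setting the weights would be dependence strengths between attributes; the tree then ranks candidate superparents.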
Weight Distributions of Multi-Edge type LDPC Codes
KASAI, Kenta; DECLERCQ, David; POULLIAT, Charly; SAKANIWA, Kohichi
2010-01-01
The multi-edge type LDPC codes, introduced by Richardson and Urbanke, represent a general class of structured LDPC codes. In this paper, we derive the average weight distributions of the multi-edge type LDPC code ensembles. Furthermore, we investigate the asymptotic exponential growth rate of the average weight distributions and its connection to the stability condition of density evolution.
Detrending moving average algorithm for multifractals
Gu, Gao-Feng; Zhou, Wei-Xing
2010-07-01
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, as a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
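The building block of MFDMA is the DMA fluctuation function itself. A minimal sketch of the backward (θ=0) variant for a one-dimensional series (function names are ours, not the paper's), checked on uncorrelated noise where the scaling exponent F(n) ~ n^α should come out near 0.5:

```python
import numpy as np

def backward_dma_fluctuation(x, n):
    """Backward (theta = 0) DMA fluctuation F(n): build the profile,
    detrend it with a trailing moving average of window n, and take
    the rms of the residual."""
    profile = np.cumsum(x - np.mean(x))
    kernel = np.ones(n) / n
    trend = np.convolve(profile, kernel, mode="valid")  # trailing mean, len N-n+1
    resid = profile[n - 1:] - trend                     # align window end with sample
    return np.sqrt(np.mean(resid ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)                          # uncorrelated noise
windows = np.unique(np.logspace(1, 2.3, 12).astype(int))
F = [backward_dma_fluctuation(x, n) for n in windows]
alpha, _ = np.polyfit(np.log(windows), np.log(F), 1)    # scaling exponent
```

The multifractal generalization replaces the global rms by q-th order moments over segments to obtain h(q) and, from it, τ(q) and f(α).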
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has become an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
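The trajectory averaging idea can be sketched on the simplest Robbins-Monro problem, estimating a mean: run the stochastic approximation recursion and average the post-burn-in iterates (the step-size exponent 0.7 and the burn-in length are illustrative choices, not values from the paper):

```python
import random

def sa_with_trajectory_averaging(sample, theta0=0.0, n_iter=20000, burn_in=2000):
    """Robbins-Monro recursion theta_{k+1} = theta_k + a_k*(X_k - theta_k)
    with a_k = k**(-0.7), followed by Polyak-Ruppert (trajectory)
    averaging of the iterates after a burn-in period."""
    theta = theta0
    total, count = 0.0, 0
    for k in range(1, n_iter + 1):
        a_k = k ** -0.7
        theta += a_k * (sample() - theta)   # one stochastic approximation step
        if k > burn_in:
            total += theta                  # accumulate trajectory average
            count += 1
    return total / count

random.seed(42)
est = sa_with_trajectory_averaging(lambda: random.gauss(2.0, 1.0))
```

The averaged estimator attains the efficiency of the optimal step-size schedule without tuning it; the paper establishes the analogous result when the samples come from an adaptive MCMC kernel.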
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Energy Technology Data Exchange (ETDEWEB)
Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
Intensity contrast of the average supergranule
Langfellner, J; Gizon, L
2016-01-01
While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ${\\sim}10^4$ outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is $(7.8\\pm0.6)\\times10^{-4}$ (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about $1.1\\pm0.1$ K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...
Local average height distribution of fluctuating interfaces
Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.
2017-01-01
Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimensions. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process {X(t), t≥0} with state space S=(-∞,∞) (in a stochastic setting, a fixed realization, i.e., sample-path, of the underlying stochastic process). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
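For a discrete-time sample path the claimed equality is a finite rearrangement, which a few lines make concrete (the process and the function f below are arbitrary illustrations):

```python
import random
from collections import Counter

random.seed(1)
# a sample path of a discrete process on states {0, 1, 2}
path = [random.choice([0, 1, 2]) for _ in range(10000)]
f = lambda s: s * s + 1            # any measurable function of the state

# long-run time average of f along the path
time_average = sum(f(s) for s in path) / len(path)

# expectation of f under the empirical (frequency) distribution
freq = Counter(path)
distribution_average = sum(f(s) * c / len(path) for s, c in freq.items())
```

For a finite path the two sums are the same terms grouped differently; the paper's contribution is the conditions under which this identity survives the limit as time grows without bound.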
Nuclear volume and prognosis in ovarian cancer
DEFF Research Database (Denmark)
Mogensen, O.; Sørensen, Flemming Brandt; Bichel, P.
1992-01-01
The prognostic value of the volume-weighted mean nuclear volume (MNV) was investigated retrospectively in 100 ovarian cancer patients with FIGO-stage IB-II (n = 51) and stage III-IV (n = 49) serous tumors. No association was demonstrated between the MNV and the survival or between the MNV and two...
Fact Sheet: Proven Weight Loss Methods
What can weight loss do for you? Losing weight can improve your health in a number of ways. ... limiting calories) usually isn't enough to cause weight loss. But exercise plays an important part in helping ...
Weight Changes following the Diagnosis of Type 2 Diabetes
DEFF Research Database (Denmark)
Olivarius, Niels de Fine; Siersma, Volkert Dirk; Køster-Rasmussen, Rasmus;
2015-01-01
Aims: The association between recent and more distant weight changes before and after the diagnosis of type 2 diabetes has been little researched. The aim of this study is to determine the influence of patients' weight history before diabetes diagnosis on the observed 6-year weight changes after...... diagnosis. Methods: A clinical cohort study combined with self-reported past weight history. In total, 885 patients aged ≥40 years and newly diagnosed with clinical type 2 diabetes were included. Body weight was measured immediately after diabetes diagnosis and again at the 6-year follow-up examination...... weight change after diagnosis. Conclusions: During the first, on average, 5.7 years after diagnosis of type 2 diabetes, patients generally follow a course of declining average weight, and these weight developments are related primarily to recent weight changes, body mass index, and age, but not to the more...
Energy Technology Data Exchange (ETDEWEB)
Harris, Ardene, E-mail: ardene_b@yahoo.co [Department of Radiology, Hokkaido University Hospital, North 15, West 7, Kita-ku, Sapporo 060-0815 (Japan); Kamishima, Tamotsu, E-mail: ktamotamo2@yahoo.co.j [Department of Radiology, Hokkaido University Hospital, North 15, West 7, Kita-ku, Sapporo 060-0815 (Japan); Hao, Hong Yi, E-mail: haohongyi88@yahoo.co.j [Department of Radiology, Hokkaido University Hospital, North 15, West 7, Kita-ku, Sapporo 060-0815 (Japan); Kato, Fumi [Department of Radiology, Hokkaido University Hospital, North 15, West 7, Kita-ku, Sapporo 060-0815 (Japan); Omatsu, Tokuhiko, E-mail: omatoku@me.co [Department of Radiology, Hokkaido University Hospital, North 15, West 7, Kita-ku, Sapporo 060-0815 (Japan); Onodera, Yuya, E-mail: yuyaonodera@med.hokudai.ac.j [Department of Radiology, Hokkaido University Hospital, North 15, West 7, Kita-ku, Sapporo 060-0815 (Japan); Terae, Satoshi, E-mail: saterae@med.hokudai.ac.j [Department of Radiology, Hokkaido University Hospital, North 15, West 7, Kita-ku, Sapporo 060-0815 (Japan); Shirato, Hiroki, E-mail: shirato@med.hokudai.ac.j [Department of Radiology, Hokkaido University Hospital, North 15, West 7, Kita-ku, Sapporo 060-0815 (Japan)
2010-07-15
Objective: The present research was conducted to establish the normal splenic volume in adults using a novel and fast technique. The relationship between splenic volume and age, gender, and anthropometric parameters was also examined. Materials and methods: The splenic volume was measured in 230 consecutive patients who underwent computed tomography (CT) scans for various indications. Patients with conditions that have known effect on the spleen size were not included in this study. A new technique using volumetric software to automatically contour the spleen in each CT slice and quickly calculate splenic volume was employed. Inter- and intra-observer variability were also examined. Results: The average splenic volume of all the subjects was 127.4 ± 62.9 cm³, ranging from 22 to 417 cm³. The splenic volume (S) correlated with age (A) (r = -0.33, p < 0.0001), body weight (W) (r = 0.35, p < 0.0001), body mass index (r = 0.24, p < 0.0001) and body surface area (BSA) (r = 0.31, p < 0.0001). The age-adjusted splenic volume index correlated with gender (p = 0.0089). The formulae S = W[6.47A^(-0.31)] and S = BSA[278A^(-0.36)] were derived and can be used to estimate the splenic volume. Inter- and intra-observer variability were 6.4 ± 9.8% and 2.8 ± 3.5% respectively. Conclusion: Of the anthropometric parameters, the splenic volume was most closely linked to body weight. The automatically contouring software as well as formulae can be used to obtain the volume of the spleen in regular practice.
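The two derived formulae are direct to apply; a small sketch (the example weight, body surface area and age are hypothetical inputs, not study data):

```python
def splenic_volume_from_weight(weight_kg, age_years):
    """Estimate splenic volume (cm^3) via S = W * 6.47 * A**(-0.31)."""
    return weight_kg * 6.47 * age_years ** -0.31

def splenic_volume_from_bsa(bsa_m2, age_years):
    """Estimate splenic volume (cm^3) via S = BSA * 278 * A**(-0.36)."""
    return bsa_m2 * 278.0 * age_years ** -0.36

# hypothetical adult: 70 kg, BSA 1.8 m^2, age 40
v_w = splenic_volume_from_weight(70.0, 40.0)
v_bsa = splenic_volume_from_bsa(1.8, 40.0)
```

Both estimates land near the reported cohort mean of 127.4 cm³ for a middle-aged adult of average build, consistent with the negative age correlation the formulae encode.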
Averaged Null Energy Condition from Causality
Hartman, Thomas; Tajdini, Amirhossein
2016-01-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\\int du T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\\int du X_{uuu\\cdots u} \\geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on the previous idea a silhouettes difference based human gait recognition method named as average gait differential image (AGDI is proposed in this paper. The AGDI is generated by the accumulation of the silhouettes difference between adjacent frames. The advantage of this method lies in that as a feature image it can preserve both the kinetic and static information of walking. Comparing to gait energy image (GEI, AGDI is more fit to representation the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA is used to extract features from the AGDI. Experiments on CASIA dataset show that AGDI has better identification and verification performance than GEI. Comparing to PCA, 2DPCA is a more efficient and less memory storage consumption feature extraction method in gait based recognition.
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros G; Wainwright, Martin J
2007-01-01
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log ...
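For contrast with the geographic scheme, standard pairwise gossip is easy to sketch: each round a random node averages its value with a random ring neighbour, the global sum is invariant, and all values slowly contract to the mean. The node count and round count below are illustrative:

```python
import random

def pairwise_gossip_ring(values, rounds, seed=0):
    """Standard pairwise gossip on a ring topology: each round one
    random node and a random ring neighbour both adopt the average of
    their two values.  Every swap preserves the global sum, so the
    common limit is the global average."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = (i + rng.choice([-1, 1])) % n     # a ring neighbour of i
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

vals = pairwise_gossip_ring(range(20), rounds=200000)
```

The many rounds needed on a 20-node ring illustrate the slow mixing the abstract refers to; the paper's geographic routing and resampling are precisely what cuts this cost.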
Bivariate phase-rectified signal averaging
Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg
2008-01-01
Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and insensitive to non-stationarities.
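The univariate PRSA core, which the bivariate version builds on, can be sketched in a few lines (anchor criterion: a sample exceeding its predecessor; the window length is an arbitrary choice for illustration):

```python
import math

def prsa(signal, half_window):
    """Phase-rectified signal averaging: anchor points are samples that
    increase over their predecessor; windows centred on all anchors are
    averaged, which phase-synchronises quasi-periodic oscillations."""
    n = len(signal)
    anchors = [i for i in range(max(1, half_window), n - half_window)
               if signal[i] > signal[i - 1]]
    # average the aligned windows, offset k runs over [-L, L]
    return [sum(signal[i + k] for i in anchors) / len(anchors)
            for k in range(-half_window, half_window + 1)]

# a deterministic two-tone test signal
sig = [math.sin(0.3 * i) + 0.5 * math.sin(1.7 * i) for i in range(500)]
curve = prsa(sig, half_window=10)
```

By construction the averaged curve steps upward at the anchor position, since every contributing window does; the bivariate variant instead averages windows of one signal at anchors detected in the other.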
Averaged null energy condition and quantum inequalities in curved spacetime
Kontou, Eleni-Alexandra
2015-01-01
The Averaged Null Energy Condition (ANEC) states that the integral along a complete null geodesic of the projection of the stress-energy tensor onto the tangent vector to the geodesic cannot be negative. ANEC can be used to rule out spacetimes with exotic phenomena, such as closed timelike curves, superluminal travel and wormholes. We prove that ANEC is obeyed by a minimally-coupled, free quantum scalar field on any achronal null geodesic (no two points can be connected by a timelike curve) surrounded by a tubular neighborhood whose curvature is produced by a classical source. To prove ANEC we use a null-projected quantum inequality, which provides constraints on how negative the weighted average of the renormalized stress-energy tensor of a quantum field can be. Starting with a general result of Fewster and Smith, we first derive a timelike projected quantum inequality for a minimally-coupled scalar field on flat spacetime with a background potential. Using that result we proceed to find the bound of a qu...
Predictive RANS simulations via Bayesian Model-Scenario Averaging
Edeling, W. N.; Cinnella, P.; Dwight, R. P.
2014-10-01
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
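The collation step of BMSA is, at its core, a weighted mixture of per-scenario posteriors. A minimal sketch with Gaussian scenario posteriors (the numbers are illustrative; the actual method calibrates these posteriors against experimental data and chooses the weights with the similarity sensor):

```python
def bmsa_mixture(scenario_means, scenario_vars, weights):
    """Mean and variance of a weighted mixture of per-scenario posterior
    estimates N(mean_i, var_i) of a quantity of interest.  The mixture
    variance combines within-scenario variance and between-scenario
    spread of the means."""
    total = sum(weights)
    w = [wi / total for wi in weights]             # normalise scenario weights
    mean = sum(wi * m for wi, m in zip(w, scenario_means))
    var = sum(wi * (v + (m - mean) ** 2)
              for wi, m, v in zip(w, scenario_means, scenario_vars))
    return mean, var

# two hypothetical calibration scenarios with equal weight
mean, var = bmsa_mixture([0.0, 2.0], [1.0, 1.0], [1.0, 1.0])
```

The between-scenario term is what makes the stochastic estimate honest about closure-model variability: disagreeing scenarios inflate the predicted uncertainty even when each posterior is narrow.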
Predictive RANS simulations via Bayesian Model-Scenario Averaging
Energy Technology Data Exchange (ETDEWEB)
Edeling, W.N., E-mail: W.N.Edeling@tudelft.nl [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l' Hospital, 75013 Paris (France); Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands); Cinnella, P., E-mail: P.Cinnella@ensam.eu [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l' Hospital, 75013 Paris (France); Dwight, R.P., E-mail: R.P.Dwight@tudelft.nl [Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands)
2014-10-15
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine-grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
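The mass-density-weighted mean that the APDF is built to reproduce is the Favre average φ̃ = ⟨ρφ⟩/⟨ρ⟩. A minimal sketch over ensemble samples (the synthetic density and scalar fields below are illustrative, not flow data):

```python
import random

def favre_average(rho, phi):
    """Mass-density-weighted ensemble average phi_tilde = <rho*phi>/<rho>,
    the mean variable that the APDF formalism deduces exactly."""
    return sum(r * p for r, p in zip(rho, phi)) / sum(rho)

random.seed(3)
# synthetic ensemble: fluctuating density and a scalar correlated with it
rho = [1.0 + 0.5 * random.random() for _ in range(1000)]
phi = [r * 2.0 for r in rho]

tilde = favre_average(rho, phi)        # density-weighted mean
plain = sum(phi) / len(phi)            # unweighted (Reynolds) mean
```

When the scalar correlates with density, the Favre mean exceeds the plain ensemble mean; this density-fluctuation correlation is exactly what the mass weighting absorbs in compressible flows.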
Directory of Open Access Journals (Sweden)
Mauri Nieminen
1990-09-01
Estimation of live weight from measurements of body dimensions is useful in many management activities with domestic animals. In the present study live weight was measured from 2932 female and 1037 male semi-domesticated reindeer (Rangifer tarandus tarandus L.) during different seasons in 1969-85. The age of the reindeer varied between 1 day and 14 yrs. Back length (along the back from the second spinous process to the base of the tail) and chest girth (just behind the front legs) were also taken from 1490 female and 510 male reindeer. The growth of reindeer from birth to adulthood was cumulative, consisting of rapid weight accretion during summers followed by weight loss or stasis during winters. Mathematical analyses of the growth based on exponential solutions gave average values for the growth of female and male reindeer. Body weight of females increased until the age of 4.5 yrs and that of males until the age of 5.5 yrs. During winter and spring the body weight of hinds decreased 10 to 15 kg and that of stags 30 to 50 kg in the different age groups. Significant linear regressions were found between live weight and back length (r = 0.809 and 0.892), live weight and chest girth (r = 0.860 and 0.872), live weight and the combined body measure (back length + chest girth; r = 0.877 and 0.941), and live weight and body volume (r = 0.905 and 0.954), respectively, in female and male reindeer. Exponential regressions gave, however, the best estimates of live weight with the combined body measure.
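The linear and exponential (allometric) regressions reported above can be sketched on synthetic stand-in data; the actual 1969-85 reindeer measurements are not reproduced in the abstract, so the generating parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: chest girth in cm, live weight in kg,
# generated from a power law with multiplicative noise (illustrative).
girth = rng.uniform(80, 120, 200)
weight = 0.002 * girth**2.4 * rng.lognormal(0.0, 0.05, 200)

# Linear fit, as in the reported live weight ~ body measure regressions
b1, b0 = np.polyfit(girth, weight, 1)
r_lin = np.corrcoef(girth, weight)[0, 1]

# "Exponential" (allometric) fit W = a * G**k via log-log least squares
k, log_a = np.polyfit(np.log(girth), np.log(weight), 1)
r_log = np.corrcoef(np.log(girth), np.log(weight))[0, 1]
```

On power-law data the log-log fit recovers the exponent directly, which mirrors the finding that exponential regressions gave the best live-weight estimates.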
Compositional dependences of average positron lifetime in binary As-S/Se glasses
Energy Technology Data Exchange (ETDEWEB)
Ingram, A. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Golovchak, R., E-mail: roman_ya@yahoo.com [Department of Materials Science and Engineering, Lehigh University, 5 East Packer Avenue, Bethlehem, PA 18015-3195 (United States); Kostrzewa, M.; Wacke, S. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Shpotyuk, M. [Lviv Polytechnic National University, 12, Bandery str., Lviv, UA-79013 (Ukraine); Shpotyuk, O. [Institute of Physics of Jan Dlugosz University, 13/15al. Armii Krajowej, Czestochowa, PL-42201 (Poland)
2012-02-15
Compositional dependence of average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.
Spatial averaging-effects on turbulence measured by a continuous-wave coherent lidar
DEFF Research Database (Denmark)
Sjöholm, Mikael; Mikkelsen, Torben; Mann, Jakob;
2009-01-01
The influence of spatial volume averaging of a focused continuous-wave coherent Doppler lidar on observed wind turbulence in the atmospheric surface layer is described and analysed. For the first time, comparisons of lidar-measured turbulent spectra with spectra simultaneously obtained from a mast...
Actuator disk model of wind farms based on the rotor average wind speed
DEFF Research Database (Denmark)
Han, Xing Xing; Xu, Chang; Liu, De You;
2016-01-01
Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor-average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition...
A Generalized Induced Ordered Weighted Geometric Operator
Institute of Scientific and Technical Information of China (English)
Zeshui Xu; Dai Wu
2004-01-01
Yager presented the Ordered Weighted Averaging (OWA) operator to provide a method for aggregating information in decision making. Yager and Filev further presented the Induced Ordered Weighted Averaging (IOWA) operator. In this paper, we propose a Generalized Induced Ordered Weighted Geometric (GIOWG) operator and establish a simple objective-programming model to learn the associated weighting vector from observational data. Each object processed by the GIOWG operator consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are then aggregated. The desirable properties, such as commutativity, idempotency and monotonicity, etc., associated with the GIOWG operator are studied in detail, and some numerical examples are given to show the practicality and effectiveness of the developed operator.
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power ~1 kW output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
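The central identity behind the unconstrained method described above can be sketched as follows; this is a schematic of the average-force idea under the abstract's definitions, and the full derivation also carries a Jacobian/metric correction term for curvilinear coordinates:

```latex
% Free energy along a generalized coordinate \xi, and its derivative as
% the negative mean instantaneous force acting on \xi (schematic form)
A(\xi) = -k_B T \,\ln P(\xi),
\qquad
\left.\frac{dA}{d\xi}\right|_{\xi^*}
  = -\,\bigl\langle F_{\xi} \bigr\rangle_{\xi=\xi^*} .
```

Averaging \(F_\xi\) in bins of \(\xi\) during an unconstrained run and integrating the negative mean force then yields the free-energy profile, which is the new method the abstract compares against probability-density and constrained-simulation estimates.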
The Gender Weight Gap: Sons, Daughters, and Maternal Weight
Pham-Kanter, Genevieve
2010-01-01
Although the effect of parents on their children has been the focus of much research on health and families, the influence of children on their parents has not been well studied. In this paper, I examine the effect of the sex composition of children on mothers' physical condition, as proxied by their weight. Using two independent datasets, I find that, many years after the birth of their children, women who have first-born daughters weigh on average 2-6 pounds less than women who have first-b...
Dietary protein, weight loss, and weight maintenance.
Westerterp-Plantenga, M S; Nieuwenhuizen, A; Tomé, D; Soenen, S; Westerterp, K R
2009-01-01
The role of dietary protein in weight loss and weight maintenance encompasses influences on crucial targets for body weight regulation, namely satiety, thermogenesis, energy efficiency, and body composition. Protein-induced satiety may be mainly due to oxidation of amino acids fed in excess, especially in diets with "incomplete" proteins. Protein-induced energy expenditure may be due to protein and urea synthesis and to gluconeogenesis; "complete" proteins having all essential amino acids show larger increases in energy expenditure than do lower-quality proteins. With respect to adverse effects, no protein-induced effects are observed on net bone balance or on calcium balance in young adults and elderly persons. Dietary protein even increases bone mineral mass and reduces incidence of osteoporotic fracture. During weight loss, nitrogen intake positively affects calcium balance and consequent preservation of bone mineral content. Sulphur-containing amino acids cause a blood pressure-raising effect by loss of nephron mass. Subjects with obesity, metabolic syndrome, and type 2 diabetes are particularly susceptible groups. This review provides an overview of how sustaining absolute protein intake affects metabolic targets for weight loss and weight maintenance during negative energy balance, i.e., sustaining satiety and energy expenditure and sparing fat-free mass, resulting in energy inefficiency. However, the long-term relationship between net protein synthesis and sparing fat-free mass remains to be elucidated.
The Average Errors for Hermite Interpolation on the 1-Fold Integrated Wiener Space
Institute of Scientific and Technical Information of China (English)
Guiqiao XU; Jingrui NING
2012-01-01
For the weighted approximation in Lp-norm, the authors determine the weakly asymptotic order for the p-average errors of the sequence of Hermite interpolation based on the Chebyshev nodes on the 1-fold integrated Wiener space. By this result, it is known that in the sense of information-based complexity, if permissible information functionals are Hermite data, then the p-average errors of this sequence are weakly equivalent to those of the corresponding sequence of the minimal p-average radius of nonadaptive information.
Reduced central blood volume in cirrhosis
DEFF Research Database (Denmark)
Bendtsen, F; Henriksen, Jens Henrik Sahl; Sørensen, T I
1989-01-01
The pathogenesis of ascites formation in cirrhosis is uncertain. It is still under debate whether the effective blood volume is reduced (underfilling theory) or whether the intravascular compartment is expanded (overflow theory). This problem has not yet been solved because of insufficient tools for measuring the central blood volume. We have developed a method that enables us to determine directly the central blood volume, i.e., the blood volume in the heart cavities, lungs, and central arterial tree. In 60 patients with cirrhosis and 16 control subjects the central blood volume was assessed according to the kinetic theory as the product of cardiac output and mean transit time of the central vascular bed. Central blood volume was significantly smaller in patients with cirrhosis than in controls (mean 21 vs. 27 ml/kg estimated ideal body weight, p less than 0.001; 25% vs. 33% of the total blood volume, p less...
An Improved Weighted Clustering Algorithm in MANET
Institute of Scientific and Technical Information of China (English)
WANG Jin; XU Li; ZHENG Bao-yu
2004-01-01
The original clustering algorithms in Mobile Ad hoc Networks (MANETs) are first analyzed in this paper. Based on this analysis, an Improved Weighted Clustering Algorithm (IWCA) is proposed. Then, the principle and steps of our algorithm are explained in detail, and a comparison is made between the original algorithms and our improved method in the aspects of average cluster number, topology stability, clusterhead load balance and network lifetime. The experimental results show that our improved algorithm has the best performance on average.
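Weighted clustering algorithms of this family elect clusterheads by a combined node weight. The abstract does not spell out IWCA's exact metric, so the sketch below follows the classic WCA-style form (weighted sum of degree difference, neighbour distance, mobility, and battery drain) as an assumption; factor names and weights are illustrative.

```python
def node_weight(degree_diff, dist_sum, mobility, battery_drain,
                w=(0.7, 0.2, 0.05, 0.05)):
    """Combined clusterhead-election weight in the classic WCA style:
    smaller is better. Factors and weighting coefficients are
    illustrative, not IWCA's published metric."""
    assert abs(sum(w) - 1.0) < 1e-9
    return (w[0] * degree_diff + w[1] * dist_sum
            + w[2] * mobility + w[3] * battery_drain)

# elect the lowest-weight node among a neighbourhood (toy values)
nodes = {"a": node_weight(2, 30.0, 1.5, 10.0),
         "b": node_weight(0, 25.0, 0.5, 5.0),
         "c": node_weight(4, 40.0, 3.0, 20.0)}
head = min(nodes, key=nodes.get)
```

Tuning the coefficient vector w is what trades off cluster count, stability, and load balance, the very criteria on which the comparison above is made.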
Weight management in pregnancy
Olander, E. K.
2015-01-01
Key learning points: - Women who start pregnancy in an overweight or obese weight category have increased health risks - Irrespective of pre-pregnancy weight category, there are health risks associated with gaining too much weight in pregnancy for both mother and baby - There are currently no official weight gain guidelines for pregnancy in the UK, thus focus needs to be on supporting pregnant women to eat healthily and keep active
Orthopedic stretcher with average-sized person can pass through 18-inch opening
Lothschuetz, F. X.
1966-01-01
A modified Robinson stretcher for vertical lifting and carrying will pass through an opening 18 inches in diameter while containing a person of average height and weight. A subject 6 feet tall and weighing 200 pounds was lowered into and raised out of an 18-inch-diameter opening in a tank to test the stretcher.
Interpreting Sky-Averaged 21-cm Measurements
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation
High-molecular-weight hemolysin of Clostridium tetani.
Mitsui, K; Mitsui, N; Kobashi, K; Hase, J
1982-01-01
Clostridium tetani excretes hemolysins of two size classes, a high-molecular-weight hemolysin (HMH), which was eluted near the void volume of a Sepharose 6B column, and conventional tetanolysin (molecular weight, approximately 50,000). The total hemolysin activity in the culture supernatant increased sharply with growth of bacteria and remained at a high level during autolysis. The content of HMH, however, decreased from 41% at 4 h of culture to 0.4% at the early stage of autolysis. The cell bodies also exhibited hemolytic activity, 70% of which could be solubilized and separated into HMH and the 50,000 Mr tetanolysin as extracellular hemolysins. The activity ratio of HMH to the total solubilized hemolysins was 0.45, on the average, at 6 h of culture but was 0.23 at the middle of logarithmic growth. Partially purified HMH from both sources appeared as broken pieces of cytoplasmic membranes under an electron microscope. The ratio of proteins to phospholipids in HMH was found to be 3.26, a value similar to that in the cell membrane. The total cell hemolytic activity decreased by 90 or 75% upon addition of chloramphenicol or anti-tetanolysin serum, respectively, into a 6-h-old culture of bacteria. It is suggested that HMH is a complex of tetanolysin with a membrane fragment and releases the conventional tetanolysin during bacterial culture. PMID:7040245
Graczyk, Michelle B; Duarte Queirós, Sílvio M
2017-01-01
Employing Random Matrix Theory and Principal Component Analysis techniques, we enlarge our work on the individual and cross-sectional intraday statistical properties of trading volume in financial markets to the study of collective intraday features of that financial observable. Our data consist of the trading volume of the Dow Jones Industrial Average Index components spanning the years between 2003 and 2014. Computing the intraday time-dependent correlation matrices and their spectrum of eigenvalues, we show there is a mode ruling the collective behaviour of the trading volume of these stocks, whereas the remaining eigenvalues are within the bounds established by random matrix theory, except the second largest eigenvalue, which is robustly above the upper bound limit at the opening and slightly above it during the morning-afternoon transition. Taking into account that for price fluctuations the existence of at least seven significant eigenvalues has been reported, and that the autocorrelation function of price fluctuations is close to white noise for highly liquid stocks whereas for the trading volume it persists significantly for more than 2 hours, our finding goes against any expectation based on those features, even when we take into account the Epps effect. In addition, the weight of the trading volume collective mode is intraday dependent; its value increases as the trading session advances, with its eigenvector approaching the uniform vector as well, which corresponds to a soar in the behavioural homogeneity. With respect to the nonstationarity of the collective features of the trading volume, we observe that after the financial crisis of 2008 the coherence function shows the emergence of an upset profile with large fluctuations from that year on, a property that concurs with the modification of the average trading volume profile we noted in our previous individual analysis.
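The random-matrix bound against which the empirical eigenvalues above are compared is the Marchenko-Pastur upper edge, λ₊ = (1 + √(N/T))², for a correlation matrix of N independent series of length T. A minimal sketch with synthetic data (the dimensions are illustrative of 30 Dow components, not the paper's exact sample):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 30, 1500                       # 30 series, 1500 observations
lam_plus = (1 + np.sqrt(N / T))**2    # Marchenko-Pastur upper edge

# Pure-noise returns: eigenvalues should stay near or below lam_plus
noise = rng.standard_normal((T, N))
C_noise = np.corrcoef(noise, rowvar=False)
eig_noise = np.linalg.eigvalsh(C_noise)

# Add one shared "collective mode": a top eigenvalue far above the bound
common = rng.standard_normal((T, 1))
data = noise + 1.0 * common           # common factor across all series
C_factor = np.corrcoef(data, rowvar=False)
eig_factor = np.linalg.eigvalsh(C_factor)
```

An eigenvalue well above lam_plus, as produced by the shared factor here, is the signature of a genuine collective mode rather than noise; eigenvalues inside the band carry no such information.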
Estimates of nuclear volume in plaque and tumor-stage mycosis fungoides. A new prognostic indicator
DEFF Research Database (Denmark)
Brooks, B; Sørensen, Flemming Brandt; Thestrup-Pedersen, K
1994-01-01
It is well documented that mycosis fungoides (MF), a cutaneous T-cell lymphoma, has a variable clinical course. Unbiased stereological estimates of three-dimensional volume-weighted mean nuclear size (nucl vV) of mycosis cells were obtained in a retrospective study of 18 patients with a total of 34 biopsies of cutaneous plaque and tumor-stage MF. The value of nucl vV in the first sampled biopsy, as well as the average and highest values, were determined in biopsies from each patient. The patients were divided into two groups, either above or below the group median. There was a strong positive...
Maintained intentional weight loss reduces cardiovascular outcomes
DEFF Research Database (Denmark)
Caterson, I D; Finer, N; Coutinho, W
2012-01-01
Aim: The Sibutramine Cardiovascular OUTcomes trial showed that sibutramine produced greater mean weight loss than placebo but increased cardiovascular morbidity but not mortality. The relationship between 12-month weight loss and subsequent cardiovascular outcomes is explored. Methods: Overweight/obese... change to Month 12 was -4.18 kg (sibutramine) or -1.87 kg (placebo). Degree of weight loss during the Lead-in Period or through Month 12 was associated with a progressive reduction in risk for the total population in primary outcome events and cardiovascular mortality over the 5-year assessment. Although more events occurred in the randomized sibutramine group, on average, a modest weight loss of approximately 3 kg achieved in the Lead-in Period appeared to offset this increased event rate. Moderate weight loss (3-10 kg) reduced cardiovascular deaths in those with severe, moderate or mild...
The Effect of Sunspot Weighting
Svalgaard, Leif; Cagnotti, Marco; Cortesi, Sergio
2017-02-01
Although W. Brunner began to weight sunspot counts (from 1926), using a method whereby larger spots were counted more than once, he compensated for the weighting by not counting enough smaller spots in order to maintain the same reduction factor (0.6) as was used by his predecessor A. Wolfer to reduce the count to R. Wolf's original scale, so that the weighting did not have any effect on the scale of the sunspot number. In 1947, M. Waldmeier formalized the weighting (on a scale from 1 to 5) of the sunspot count made at Zurich and its auxiliary station Locarno. This explicit counting method, when followed, inflates the relative sunspot number over that which corresponds to the scale set by Wolfer (and matched by Brunner). Recounting some 60,000 sunspots on drawings from the reference station Locarno shows that the number of sunspots reported was "over counted" by ≈ 44 % on average, leading to an inflation (measured by an effective weight factor) in excess of 1.2 for high solar activity. In a double-blind parallel counting by the Locarno observer M. Cagnotti, we determined that Svalgaard's count closely matches that of Cagnotti, allowing us to determine from direct observation the daily weight factor for spots since 2003 (and sporadically before). The effective total inflation turns out to have two sources: a major one (15 - 18 %) caused by weighting of spots, and a minor source (4 - 5 %) caused by the introduction of the Zürich classification of sunspot groups which increases the group count by 7 - 8 % and the relative sunspot number by about half that. We find that a simple empirical equation (depending on the activity level) fits the observed factors well, and use that fit to estimate the weighting inflation factor for each month back to the introduction of effective inflation in 1947 and thus to be able to correct for the over-counts and to reduce sunspot counting to the Wolfer method in use from 1894 onwards.
Gover, A Rod
2016-01-01
For any conformally compact manifold with hypersurface boundary we define a canonical renormalized volume functional and compute an explicit, holographic formula for the corresponding anomaly. For the special case of asymptotically Einstein manifolds, our method recovers the known results. The anomaly does not depend on any particular choice of regulator, but the coefficients of divergences do. We give explicit formulae for these divergences valid for any choice of regulating hypersurface; these should be relevant to recent studies of quantum corrections to entanglement entropies. The anomaly is expressed as a conformally invariant integral of a local Q-curvature that generalizes the Branson Q-curvature by including data of the embedding. In each dimension this canonically defines a higher dimensional generalization of the Willmore energy/rigid string action. We show that the variation of these energy functionals is exactly the obstruction to solving a singular Yamabe type problem with boundary data along the...
Pogson, Elise M; Delaney, Geoff P; Ahern, Verity; Boxer, Miriam M; Chan, Christine; David, Steven; Dimigen, Marion; Harvey, Jennifer A; Koh, Eng-Siew; Lim, Karen; Papadatos, George; Yap, Mei Ling; Batumalai, Vikneswary; Lazarus, Elizabeth; Dundas, Kylie; Shafiq, Jesmin; Liney, Gary; Moran, Catherine; Metcalfe, Peter; Holloway, Lois
2016-11-15
To determine whether T2-weighted MRI improves seroma cavity (SC) and whole breast (WB) interobserver conformity for radiation therapy purposes, compared with the gold standard of CT, both in the prone and supine positions. Eleven observers (2 radiologists and 9 radiation oncologists) delineated SC and WB clinical target volumes (CTVs) on T2-weighted MRI and CT supine and prone scans (4 scans per patient) for 33 patient datasets. Individual observer's volumes were compared using the Dice similarity coefficient, volume overlap index, center of mass shift, and Hausdorff distances. An average cavity visualization score was also determined. Imaging modality did not affect interobserver variation for WB CTVs. Prone WB CTVs were larger in volume and more conformal than supine CTVs (on both MRI and CT). Seroma cavity volumes were larger on CT than on MRI. Seroma cavity volumes proved to be comparable in interobserver conformity in both modalities (volume overlap index of 0.57 (95% Confidence Interval (CI) 0.54-0.60) for CT supine and 0.52 (95% CI 0.48-0.56) for MRI supine, 0.56 (95% CI 0.53-0.59) for CT prone and 0.55 (95% CI 0.51-0.59) for MRI prone); however, after registering modalities together the intermodality variation (Dice similarity coefficient of 0.41 (95% CI 0.36-0.46) for supine and 0.38 (0.34-0.42) for prone) was larger than the interobserver variability for SC, despite the location typically remaining constant. Magnetic resonance imaging interobserver variation was comparable to CT for the WB CTV and SC delineation, in both prone and supine positions. Although the cavity visualization score and interobserver concordance was not significantly higher for MRI than for CT, the SCs were smaller on MRI, potentially owing to clearer SC definition, especially on T2-weighted MR images. Copyright © 2016. Published by Elsevier Inc.
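The Dice similarity coefficient used above to compare observers' delineations is 2|A∩B| / (|A| + |B|) for two binary masks. A minimal sketch on a toy 2-D grid (the masks are illustrative, not patient data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|); 1 = identical delineations, 0 = disjoint."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# two observers' seroma-cavity contours on a toy 10x10 grid
obs1 = np.zeros((10, 10), bool); obs1[2:7, 2:7] = True   # 25 voxels
obs2 = np.zeros((10, 10), bool); obs2[3:8, 3:8] = True   # 25 voxels, shifted
d = dice(obs1, obs2)   # overlap 4x4 = 16 voxels -> DSC = 32/50 = 0.64
```

Because the denominator carries both volumes, the same center-of-mass shift penalizes small structures (like a seroma cavity) far more than large ones (like the whole breast), which is one reason WB and SC conformity behave differently.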
Bryan, Craig J; Bryan, AnnaBelle O; Hinkson, Kent; Bichrest, Michael; Ahern, D Aaron
2014-01-01
The current study examined relationships among self-reported depression severity, posttraumatic stress disorder (PTSD) symptom severity, and grade point average (GPA) among student servicemembers and veterans. We asked 422 student servicemembers and veterans (72% male, 86% Caucasian, mean age = 36.29 yr) to complete an anonymous online survey that assessed self-reported GPA, depression severity, PTSD severity, and frequency of academic problems (late assignments, low grades, failed exams, and skipped classes). Female respondents reported a slightly higher GPA than males (3.56 vs 3.41, respectively, p = 0.01). Depression symptoms (beta weight = -0.174, p = 0.03), male sex (beta weight = 0.160, p = 0.01), and younger age (beta weight = 0.155, p = 0.01) were associated with lower GPA but not PTSD symptoms (beta weight = -0.040, p = 0.62), although the interaction of depression and PTSD symptoms showed a nonsignificant inverse relationship with GPA (beta weight = -0.378, p = 0.08). More severe depression was associated with turning in assignments late (beta weight = 0.171, p = 0.03), failed exams (beta weight = 0.188, p = 0.02), and skipped classes (beta weight = 0.254, p = 0.01). The relationship of depression with self-reported GPA was mediated by frequency of failed exams. Results suggest that student servicemembers and veterans with greater emotional distress also report worse academic performance.
Jacques, Paul F; Wang, Huifen
2014-05-01
A large body of observational studies and randomized controlled trials (RCTs) has examined the role of dairy products in weight loss and maintenance of healthy weight. Yogurt is a dairy product that is generally very similar to milk, but it also has some unique properties that may enhance its possible role in weight maintenance. This review summarizes the human RCT and prospective observational evidence on the relation of yogurt consumption to the management and maintenance of body weight and composition. The RCT evidence is limited to 2 small, short-term, energy-restricted trials. They both showed greater weight losses with yogurt interventions, but the difference between the yogurt intervention and the control diet was only significant in one of these trials. There are 5 prospective observational studies that have examined the association between yogurt and weight gain. The results of these studies are equivocal. Two of these studies reported that individuals with higher yogurt consumption gained less weight over time. One of these same studies also considered changes in waist circumference (WC) and showed that higher yogurt consumption was associated with smaller increases in WC. A third study was inconclusive because of low statistical power. A fourth study observed no association between changes in yogurt intake and weight gain, but the results suggested that those with the largest increases in yogurt intake during the study also had the highest increase in WC. The final study examined weight and WC change separately by sex and baseline weight status and showed benefits for both weight and WC changes for higher yogurt consumption in overweight men, but it also found that higher yogurt consumption in normal-weight women was associated with a greater increase in weight over follow-up. Potential underlying mechanisms for the action of yogurt on weight are briefly discussed.
Volume and Weight Tables for Plantation - Grown Sycamore
Roger P. Belanger
1973-01-01
American sycamore (Platanus occidentalis L.) is well suited for short-rotation management. It can be regenerated easily, has produced excellent early growth on good sites, and lends itself to mechanized harvesting. Steinbeck et al.' concluded that spacings of 4 by 4 feet or more and rotation ages from 4 to 10 years hold considerable promise from...
Potential of high-average-power solid state lasers
Energy Technology Data Exchange (ETDEWEB)
Emmett, J.L.; Krupke, W.F.; Sooy, W.R.
1984-09-25
We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.