Size estimates of noble gas clusters by Rayleigh scattering experiments
Institute of Scientific and Technical Information of China (English)
Pinpin Zhu (朱频频); Guoquan Ni (倪国权); Zhizhan Xu (徐至展)
2003-01-01
Noble gases (argon, krypton, and xenon) are puffed into vacuum through a nozzle to produce clusters for studying laser-cluster interactions. Good estimates of the average size of the argon, krypton and xenon clusters are made by carrying out a series of Rayleigh scattering experiments. In the experiments, we found that the scattered signal intensity varied greatly with the opening area of the pulsed valve. A new method is put forward to choose the appropriate scattered signal and measure the size of Kr clusters.
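The scaling behind such Rayleigh-based size estimates can be sketched in a few lines (a minimal illustration, not the authors' procedure; the calibration constant is hypothetical):

```python
# In the Rayleigh regime (cluster radius << laser wavelength) the atoms in a
# cluster scatter coherently: a cluster of N atoms scatters ~N^2 times as much
# light as a single atom, while the number of clusters in the beam scales as
# n_total / N. The detected signal therefore scales as S ~ k * n_total * N,
# so relative signal changes at fixed gas density can be read as relative
# changes in average cluster size.

def average_cluster_size(signal, total_atom_density, k_calib):
    """Estimate the average number of atoms per cluster.

    k_calib is a hypothetical instrument constant fixed with a reference target.
    """
    return signal / (total_atom_density * k_calib)

# Doubling the signal at fixed gas density implies doubled average cluster size.
n1 = average_cluster_size(100.0, 10.0, 1.0)
n2 = average_cluster_size(200.0, 10.0, 1.0)
```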
Directory of Open Access Journals (Sweden)
Khabaz Rahim
2015-01-01
Calibrations of neutron devices used in area monitoring are often performed with radionuclide neutron sources. Device readings increase due to neutrons scattered by the surroundings and the air. The influence of these scattering effects has been investigated in this paper by performing Monte Carlo simulations for ten different radionuclide neutron sources inside spherical concrete-walled rooms of several sizes (Rsp = 200 to 1500 cm). In order to obtain the parameters that describe the additional contribution from scattered neutrons, calculations using a polynomial fit model were evaluated. The obtained results show that the contribution of scattering is roughly independent of the geometric shape of the calibration room. The parameter describing the room-return scattering has been fitted in terms of the spherical room radius, so that the scattering value for each radionuclide neutron source can be estimated reasonably accurately in any geometry of the calibration room.
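The fitting step described above can be mimicked in miniature (a sketch with synthetic numbers, not the paper's Monte Carlo data; the 1/R² room-return law and the air-scatter floor are assumptions made purely to show the fitting mechanics):

```python
import numpy as np

# Fit a simple model relating the scattered ("room-return") contribution to
# the spherical room radius Rsp. The data below are synthetic, generated from
# an assumed 1/R^2 room-return term plus a constant air-scatter floor.
radii = np.array([200.0, 400.0, 600.0, 800.0, 1000.0, 1200.0, 1500.0])  # cm
scatter_fraction = 4.0e4 / radii**2 + 0.02                              # synthetic

coeffs = np.polyfit(1.0 / radii**2, scatter_fraction, deg=1)  # linear in 1/R^2
room_return = np.poly1d(coeffs)

# Interpolate the scattered contribution for an intermediate room size.
est_500 = room_return(1.0 / 500.0**2)
```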
International Nuclear Information System (INIS)
Clementi, Luis A.; Vega, Jorge R.; Gugliotta, Luis M.; Quirantes, Arturo
2012-01-01
A numerical method is proposed for the characterization of core–shell spherical particles from static light scattering (SLS) measurements. The method is able to estimate the core size distribution (CSD) and the particle size distribution (PSD), through the following two-step procedure: (i) the estimation of the bivariate core–particle size distribution (C–PSD), by solving a linear ill-conditioned inverse problem through a generalized Tikhonov regularization strategy, and (ii) the calculation of the CSD and the PSD from the estimated C–PSD. First, the method was evaluated on the basis of several simulated examples, with polystyrene–poly(methyl methacrylate) core–shell particles of different CSDs and PSDs. Then, two samples of hematite–Yttrium basic carbonate core–shell particles were successfully characterized. In all analyzed examples, acceptable estimates of the PSD and the average diameter of the CSD were obtained. Based on the single-scattering Mie theory, the proposed method is an effective tool for characterizing core–shell colloidal particles larger than their Rayleigh limits without requiring any a-priori assumption on the shapes of the size distributions. Under such conditions, the PSDs can always be adequately estimated, while acceptable CSD estimates are obtained when the core/shell particles exhibit either a high optical contrast, or a moderate optical contrast but with a high ‘average core diameter’/‘average particle diameter’ ratio. -- Highlights: ► Particles with core–shell morphology are characterized by static light scattering. ► Core size distribution and particle size distribution are successfully estimated. ► Simulated and experimental examples are used to validate the numerical method. ► The positive effect of a large core/shell optical contrast is investigated. ► No a-priori assumption on the shapes of the size distributions is required.
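Step (i) above rests on standard Tikhonov machinery. A minimal sketch of that core (assuming the simplest identity regularizer and a crude nonnegativity clip, not the paper's generalized regularization strategy):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A f - b||^2 + alpha ||f||^2 via the normal equations,
    then clip negative entries, since a size distribution cannot be negative."""
    n = A.shape[1]
    f = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
    return np.clip(f, 0.0, None)

# Toy check: a well-conditioned system with tiny alpha recovers the truth.
A = np.array([[1.0, 0.2], [0.1, 1.0], [0.3, 0.4]])
f_true = np.array([2.0, 1.0])
f_est = tikhonov_solve(A, A @ f_true, 1e-10)
```

For the genuinely ill-conditioned kernels arising from Mie theory, the choice of alpha (e.g. by L-curve or generalized cross-validation) is where the real work lies.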
Guo, Qiang; Galushko, Volodymyr G.; Zalizovski, Andriy V.; Kashcheyev, Sergiy B.; Zheng, Yu
2018-05-01
A modification of the Doppler Interferometry Technique is suggested to enable estimating the angles of arrival of comparatively broadband HF signals scattered by random irregularities of the ionospheric plasma, using small-size weakly directional antennas. The technique is based on measurements of the cross-spectra phases of the probe radiation recorded in at least three spatially separated points. The developed algorithm has been used to investigate the angular and frequency-time characteristics of HF signals propagating at frequencies above the maximum usable frequency (MUF) for the direct radio path Moscow-Kharkiv. The received signal spectra show the presence of three families of spatial components attributed, respectively, to scattering by plasma irregularities near the middle point of the radio path, to ground backscatter signals, and to scattering of the sounding signals by the intense plasma turbulence associated with auroral activations. It has been shown that the regions responsible for the formation of the third family of components are located well inside the auroral oval. The drift velocity and direction of the auroral ionospheric plasma have been determined. The obtained estimates are consistent with the classical conception of ionospheric plasma convection at high latitudes and do not contradict the results of investigations of auroral ionosphere dynamics using the SuperDARN network.
Robust Optical Richness Estimation with Reduced Scatter
Energy Technology Data Exchange (ETDEWEB)
Rykoff, E.S.; /LBL, Berkeley; Koester, B.P.; /Chicago U. /Chicago U., KICP; Rozo, E.; /Chicago U. /Chicago U., KICP; Annis, J.; /Fermilab; Evrard, A.E.; /Michigan U. /Michigan U., MCTP; Hansen, S.M.; /Lick Observ.; Hao, J.; /Fermilab; Johnston, D.E.; /Fermilab; McKay, T.A.; /Michigan U. /Michigan U., MCTP; Wechsler, R.H.; /KIPAC, Menlo Park /SLAC
2012-06-07
Reducing the scatter between cluster mass and optical richness is a key goal for cluster cosmology from photometric catalogs. We consider various modifications to the red-sequence matched filter richness estimator of Rozo et al. (2009b), and evaluate their impact on the scatter in X-ray luminosity at fixed richness. Most significantly, we find that deeper luminosity cuts can reduce the recovered scatter, finding that σ_{ln L_X|λ} = 0.63 ± 0.02 for clusters with M_500c ≳ 1.6 × 10^14 h_70^-1 M_⊙. The corresponding scatter in mass at fixed richness is σ_{ln M|λ} ≈ 0.2–0.3 depending on the richness, comparable to that for total X-ray luminosity. We find that including blue galaxies in the richness estimate increases the scatter, as does weighting galaxies by their optical luminosity. We further demonstrate that our richness estimator is very robust. Specifically, the filter employed when estimating richness can be calibrated directly from the data, without requiring a-priori calibrations of the red-sequence. We also demonstrate that the recovered richness is robust to up to 50% uncertainties in the galaxy background, as well as to the choice of photometric filter employed, so long as the filters span the 4000 Å break of red-sequence galaxies. Consequently, our richness estimator can be used to compare richness estimates of different clusters, even if they do not share the same photometric data. Appendix A includes 'easy-bake' instructions for implementing our optimal richness estimator, and we are releasing an implementation of the code that works with SDSS data, as well as an augmented maxBCG catalog with the λ richness measured for each cluster.
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-06
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to recover some information about the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work, which can be expressed with the available boundary data.
Quantitative determination of grain sizes by means of scattered ultrasound
International Nuclear Information System (INIS)
Goebbels, K.; Hoeller, P.
1976-01-01
The scattering of ultrasound makes possible the quantitative determination of grain sizes in metallic materials. Examples of measurements on steels with grain sizes between ASTM 1 and ASTM 12 are given.
Estimating software development project size, using probabilistic ...
African Journals Online (AJOL)
Estimating software development project size, using probabilistic techniques. ... of managing the size of software development projects by Purchasers (Clients) and Vendors (Development ...
Eliminating high-order scattering effects in optical microbubble sizing.
Qiu, Huihe
2003-04-01
Measurements of bubble size and velocity in multiphase flows are important in much research and many industrial applications. It has been found that high-order refractions have great impact on microbubble sizing by use of phase-Doppler anemometry (PDA). The problem has been investigated, and a model of phase-size correlation, which also takes high-order refractions into consideration, is introduced to improve the accuracy of bubble sizing. Hence the model relaxes the assumption of a single-scattering mechanism in a conventional PDA system. The results of simulation based on this new model are compared with those based on a single-scattering-mechanism approach or a first-order approach. An optimization method for accurately sizing air bubbles in water has been suggested.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions of 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds per projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter; with further acceleration and a method to account for multiple scatter, it may be useful for practical scatter correction schemes.
Estimating Search Engine Index Size Variability
DEFF Research Database (Denmark)
Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice
2016-01-01
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find ...
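The extrapolation idea can be shown in miniature (a sketch; the probe words' hit counts and corpus document frequencies below are invented for illustration): if a word occurs in df documents out of corpus_size in the static reference corpus, the same rate is assumed to hold in the engine's index, so index_size ≈ engine_hits · corpus_size / df; the median over many probe words damps per-word quirks.

```python
def estimate_index_size(engine_hits, corpus_df, corpus_size):
    """Median of per-word index-size extrapolations.

    engine_hits[i]: documents the engine reports for probe word i.
    corpus_df[i]:   documents containing word i in the reference corpus.
    """
    ests = sorted(h * corpus_size / df
                  for h, df in zip(engine_hits, corpus_df) if df > 0)
    return ests[len(ests) // 2]

size = estimate_index_size(
    engine_hits=[5_000_000, 120_000, 660_000],
    corpus_df=[50_000, 1_100, 6_400],
    corpus_size=1_000_000,
)
```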
Cannaday, Ashley E.; Draham, Robert; Berger, Andrew J.
2016-04-01
The goal of this project is to estimate non-nuclear organelle size distributions in single cells by measuring angular scattering patterns and fitting them with Mie theory. Simulations have indicated that the large relative size distribution of organelles (mean:width≈2) leads to unstable Mie fits unless scattering is collected at polar angles less than 20 degrees. Our optical system has therefore been modified to collect angles down to 10 degrees. Initial validations will be performed on polystyrene bead populations whose size distributions resemble those of cell organelles. Unlike with the narrow bead distributions that are often used for calibration, we expect to see an order-of-magnitude improvement in the stability of the size estimates as the minimum angle decreases from 20 to 10 degrees. Scattering patterns will then be acquired and analyzed from single cells (EMT6 mouse cancer cells), both fixed and live, at multiple time points. Fixed cells, with no changes in organelle sizes over time, will be measured to determine the fluctuation level in estimated size distribution due to measurement imperfections alone. Subsequent measurements on live cells will determine whether there is a higher level of fluctuation that could be attributed to dynamic changes in organelle size. Studies on unperturbed cells are precursors to ones in which the effects of exogenous agents are monitored over time.
Using Diffraction Tomography to Estimate Marine Animal Size
Jaffe, J. S.; Roberts, P.
In this article we consider the development of acoustic methods with the potential to size marine animals. The proposed technique uses scattered sound to invert for both animal size and shape, using the Distorted Wave Born Approximation (DWBA) to model the sound scattered from these organisms. The DWBA also provides a valuable context for formulating data analysis techniques to invert for parameters of the animal. Although 3-dimensional observations can be obtained from a complete set of views, the difficulty of collecting full 3-dimensional scatter makes it useful to simplify the inversion by approximating the animal by a few parameters. Here, the animals are modeled as 3-dimensional ellipsoids. This reduces the complexity of the problem to determining the 3 semi-axes for the x, y and z dimensions from just a few radial spokes through the 3-dimensional Fourier transform. To test the idea, simulated scatter data are taken from a 3-dimensional model of a marine animal and the resultant data are inverted to estimate animal shape.
Characterization of micron-size hydrogen clusters using Mie scattering.
Jinno, S; Tanaka, H; Matsui, R; Kanasaki, M; Sakaki, H; Kando, M; Kondo, K; Sugiyama, A; Uesaka, M; Kishimoto, Y; Fukuda, Y
2017-08-07
Hydrogen clusters with diameters in the few-micrometer range, composed of 10^8–10^10 hydrogen molecules, have been produced for the first time in an expansion of supercooled, high-pressure hydrogen gas into a vacuum through a conical nozzle connected to a cryogenic pulsed solenoid valve. The size distribution of the clusters has been evaluated by measuring the angular distribution of laser light scattered from the clusters. The data were analyzed based on the Mie scattering theory combined with the Tikhonov regularization method including the instrumental functions, the validity of which was assessed by performing a calibration study using a reference target consisting of standard micro-particles with two different sizes. The size distribution of the clusters was found to be discrete, with peaks at 0.33 ± 0.03, 0.65 ± 0.05, 0.81 ± 0.06, 1.40 ± 0.06 and 2.00 ± 0.13 µm in diameter. The highly reproducible and impurity-free nature of the micron-size hydrogen clusters makes them a promising target for laser-driven multi-MeV proton sources with currently available high-power lasers.
Estimating Sample Size for Usability Testing
Directory of Open Access Journals (Sweden)
Alex Cazañas
2017-02-01
One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the "magic number 5". The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
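The model underlying the "magic number 5" is easy to state and probe (a sketch; p = 0.31 is the commonly quoted average per-user detection probability, and real studies find p varies widely, which is exactly why the rule can fail):

```python
import math

def proportion_found(p, n):
    """Expected fraction of problems uncovered by n users when each user
    independently detects a given problem with probability p."""
    return 1.0 - (1.0 - p) ** n

def users_needed(p, target):
    """Smallest n with proportion_found(p, n) >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

five_user_coverage = proportion_found(0.31, 5)   # ~0.84 with the classic p
hard_case = users_needed(0.10, 0.80)             # a low p inflates the sample
```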
Estimation of scattered photons using a neural network in SPECT
International Nuclear Information System (INIS)
Hasegawa, Wataru; Ogawa, Koichi
1994-01-01
In single photon emission CT (SPECT), measured projection data involve scattered photons. This causes degradation of spatial resolution and contrast in reconstructed images. The purpose of this study is to estimate the scattered photons, and eliminate them from measured data. To estimate the scattered photons, we used an artificial neural network which consists of five input units, five hidden units, and two output units. The inputs of the network are the ratios of the counts acquired by five narrow energy windows and their sum. The outputs are the ratios of the count of scattered photons and that of primary photons to the total count. The neural network was trained with a back-propagation algorithm using count data obtained by a Monte Carlo simulation. The results of simulation showed improvement of contrast and spatial resolution in reconstructed images. (author)
Impaired hand size estimation in CRPS.
Peltz, Elena; Seifert, Frank; Lanz, Stefan; Müller, Rüdiger; Maihöfner, Christian
2011-10-01
A triad of clinical symptoms, i.e., autonomic, motor and sensory dysfunctions, characterizes complex regional pain syndromes (CRPS). Sensory dysfunction comprises sensory loss or spontaneous and stimulus-evoked pain. Furthermore, a disturbance in the body schema may occur. In the present study, patients with CRPS of the upper extremity and healthy controls estimated their hand sizes on the basis of expanded or compressed schematic drawings of hands. In patients with CRPS we found an impairment in accurate hand size estimation; patients estimated their own CRPS-affected hand to be larger than it actually was when measured objectively. Moreover, overestimation correlated significantly with disease duration, neglect score, and increase of two-point-discrimination thresholds (TPDT) compared to the unaffected hand and to control subjects' estimations. In line with previous functional imaging studies in CRPS patients demonstrating changes in central somatotopic maps, we suggest an involvement of the central nervous system in this disruption of the body schema. Potential cortical areas may be the primary somatosensory and posterior parietal cortices, which have been proposed to play a critical role in integrating visuospatial information. CRPS patients perceive their affected hand to be bigger than it is. The magnitude of this overestimation correlates with disease duration, decreased tactile thresholds, and neglect score. Suggesting a disrupted body schema as the source of this impairment, our findings corroborate the current assumption of CNS involvement in CRPS.
Zhou, Wen; Wang, Guifen; Li, Cai; Xu, Zhantang; Cao, Wenxi; Shen, Fang
2017-10-20
Phytoplankton cell size is an important property that affects diverse ecological and biogeochemical processes, and analysis of the absorption and scattering spectra of phytoplankton can provide important information about phytoplankton size. In this study, an inversion method for extracting quantitative phytoplankton cell size data from these spectra was developed. This inversion method requires two inputs: chlorophyll a specific absorption and scattering spectra of phytoplankton. The average equivalent-volume spherical diameter (ESD_v) was calculated as the single-size approximation for the log-normal particle size distribution (PSD) of the algal suspension. The performance of this method for retrieving cell size was assessed using datasets from cultures of 12 phytoplankton species. The estimations of a(λ) and b(λ) for the phytoplankton population using ESD_v had mean error values of 5.8%–6.9% and 7.0%–10.6%, respectively, compared to the a(λ) and b(λ) for the phytoplankton populations using the log-normal PSD. The estimated values of C_i(ESD_v) were in good agreement with the measurements, with r² = 0.88 and relative root mean square error (NRMSE) = 25.3%, and relatively good performance was also found for the retrieval of ESD_v, with r² = 0.78 and NRMSE = 23.9%.
Estimation of myocardial infarct size by vectorcardiography
International Nuclear Information System (INIS)
Takimiya, Akihiko
1987-01-01
Correlations between vectorcardiogram (VCG) indices and infarct size (% defect) obtained from myocardial emission computed tomography with thallium-201 were studied in 45 patients with old infero-posterior myocardial infarction. The patients were divided into two groups: one consisting of eight patients who showed abnormal superior deviation of the QRS loop in a counterclockwise rotation beyond 30 msec in the frontal plane of the VCG (referred to hereafter as the CCW group), and a non-CCW group consisting of 37 patients. The results obtained were as follows. (1) In the non-CCW group, there were significant negative correlations between the % defect and both the elevation and the Y-axial component of each instantaneous vector of the QRS loop at 30 msec, 35 msec, 40 msec and 45 msec, and between the % defect and the Y-axial component of the 50 msec instantaneous vector. The correlation for both the elevation and the Y-axial component was closest at 40 msec, with the elevation of the 40 msec instantaneous vector correlating most closely with the % defect. (2) In the non-CCW group, there was also a significant correlation between the elevation of the QRS area vector and the % defect. (3) In the CCW group, the infarct size could be estimated from the elevation of the 30 msec instantaneous vector. An association with left anterior fascicular block was also indicated in the CCW group. (4) In infero-posterior myocardial infarction, the infarct size can thus be estimated using these VCG indices. (author)
Genome size estimation: a new methodology
Álvarez-Borrego, Josué; Gallardo-Escárate, Crisitian; Kober, Vitaly; López-Bonilla, Oscar
2007-03-01
Recently, within cytogenetic analysis, the evolutionary relations implied in the nuclear DNA content of plants and animals have received great attention. The first detailed measurements of nuclear DNA content were made in the early 1940s, several years before Watson and Crick proposed the molecular structure of DNA. In the following years Hewson Swift developed the concept of "C-value" in reference to the haploid DNA content of plants. Later, Mirsky and Ris carried out the first systematic study of genome size in animals, including representatives of the five superclasses of vertebrates as well as some invertebrates. From these preliminary results it became evident that DNA content varies enormously between species and that this variation bears no relation to the intuitive notion of organismal complexity. This observation was reaffirmed in the following years as studies of genome size accumulated, and the characteristic came to be called the "C-value paradox". A few years later, with the discovery of non-coding DNA, the paradox was resolved; nevertheless, numerous questions remain open to this day, and such studies are now referred to as the "C-value enigma". In this study, we report a new method for genome size estimation by quantification of fluorescence fading. We measured the fluorescence intensity every 1600 milliseconds in DAPI-stained nuclei. The estimate of the area under the curve (integral fading) during the fading period was related to genome size.
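The "integral fading" quantity reduces to an area under the sampled intensity curve (a sketch assuming samples at the stated 1600 ms interval; the calibration from area to C-value is instrument-specific and not shown, and the intensity values are invented):

```python
def integral_fading(intensities, dt_s=1.6):
    """Area under a fading curve of fluorescence intensities sampled every
    dt_s seconds, computed with the trapezoidal rule."""
    area = 0.0
    for a, b in zip(intensities, intensities[1:]):
        area += 0.5 * (a + b) * dt_s
    return area

# A brighter (larger-genome) nucleus fades from a higher starting intensity,
# giving a proportionally larger area over the same fading duration.
area_small = integral_fading([10.0, 8.0, 6.0, 4.0])
area_large = integral_fading([20.0, 16.0, 12.0, 8.0])
```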
Estimation of scattering from a moist rough surface with spheroidal ...
Indian Academy of Sciences (India)
Administrator
less than 5.5% of the electromagnetic wavelength. We emphasize that the surface deviation is responsible for scattering at a given electromagnetic wavelength. 2. Theoretical consideration (basic theory). We consider a horizontally rough surface with a slight percentage of moisture (2–4.5%) with spheroidal dust particles.
An estimate on the purely imaginary poles of scattering matrix
International Nuclear Information System (INIS)
Bozhkov, Y.D.
1988-12-01
In this work we obtain two estimates (upper and lower) on the number of purely imaginary poles of the scattering matrix for the wave equation in the exterior of a compact smooth obstacle in R^n, n ≥ 3 odd. The method of Lax and Phillips is used. (author). 5 refs
Piatek, J. L.; Hapke, B. W.; Nelson, R. M.; Hale, A. S.; Smythe, W. D.
2003-01-01
The nature of the scattering of light is thought to be well understood when the medium is made up of independent scatterers that are much larger than the wavelength of that light. This is not the case when the size of the scattering objects is similar to or smaller than the wavelength, or when the scatterers are not independent. In an attempt to examine the applicability of independent-particle scattering models to planetary regoliths, a dataset of experimental results was compared with theoretical predictions.
Røising, Henrik Schou; Simon, Steven H.
2018-03-01
Topological insulator surfaces in proximity to superconductors have been proposed as a way to produce Majorana fermions in condensed matter physics. One of the simplest proposed experiments with such a system is Majorana interferometry. Here we consider two possibly conflicting constraints on the size of such an interferometer. Coupling of a Majorana mode from the edge (the arms) of the interferometer to vortices in the center of the device sets a lower bound on the size of the device. On the other hand, scattering to the usually imperfectly insulating bulk sets an upper bound. From estimates of experimental parameters, we find that typical samples may have no size window in which the Majorana interferometer can operate, implying that a new generation of more highly insulating samples must be explored.
Yu, Haitao; Sun, Hui; Shen, Jianqi; Tropea, Cameron
2018-03-01
The primary rainbow observed when light is scattered by a spherical drop has been exploited in the past to measure drop size and relative refractive index. However, if higher spatial resolution is required in denser drop ensembles/sprays, so as to avoid multiple drops simultaneously appearing in the measurement volume, a highly focused beam is desirable, inevitably with a Gaussian intensity profile. The present study examines the primary rainbow pattern resulting when a Gaussian beam is scattered by a spherical drop and estimates the attainable accuracy when extracting size and refractive index. The scattering is computed using generalized Lorenz-Mie theory (GLMT) and a Debye series decomposition of the Gaussian beam scattering. The results of these simulations show that the measurement accuracy depends on both the beam waist radius and the position of the drop in the beam waist.
Yurinskaya, Valentina; Aksenov, Nikolay; Moshkov, Alexey; Model, Michael; Goryachaya, Tatyana; Vereninov, Alexey
2017-10-01
A decrease in flow cytometric forward light scatter (FSC) is commonly interpreted as a sign of apoptotic cell volume decrease (AVD). However, the intensity of light scattering depends not only on the cell size but also on its other characteristics, such as hydration, which may affect the scattering in the opposite way. That makes estimation of AVD by FSC problematic. Here, we aimed to clarify the relationship between light scattering, cell hydration (assayed by buoyant density) and cell size by the Coulter technique. We used human lymphoid cells U937 exposed to staurosporine, etoposide or hypertonic stress as an apoptotic model. An initial increase in FSC was found to occur in apoptotic cells treated with staurosporine and hypertonic solutions; it is accompanied by cell dehydration and is absent in apoptosis caused by etoposide that is consistent with the lack of dehydration in this case. Thus, the effect of dehydration on the scattering signal outweighs the effect of reduction in cell size. The subsequent FSC decrease, which occurred in parallel to accumulation of annexin-positive cells, was similar in apoptosis caused by all three types of inducers. We conclude that an increase, but not a decrease in light scattering, indicates the initial cell volume decrease associated with apoptotic cell dehydration.
Second order statistics of bilinear forms of robust scatter estimators
Kammoun, Abla
2015-08-12
This paper lies in the lineage of recent works studying the asymptotic behaviour of robust-scatter estimators in the case where the number of observations and the dimension of the population covariance matrix grow at infinity with the same pace. In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of covariance matrix. We show that this result can be leveraged in order to improve the design of robust detection methods. As an example, we provide an improved generalized likelihood ratio based detector which combines robustness to impulsive observations and optimality across the shrinkage parameter, the optimality being considered for the false alarm regulation.
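A robust shrinkage estimator of the kind analyzed here can be sketched as a fixed-point iteration of the regularized-Tyler type (a sketch: the fixed iteration count and the specific weighting are simplifications of the estimator studied in this literature, and the data below are synthetic):

```python
import numpy as np

def robust_shrinkage_scatter(X, rho, n_iter=100):
    """X: (n, p) observations; rho in (0, 1]: shrinkage toward the identity.
    Each sample is weighted by 1 / (x^T C^{-1} x), which tempers impulsive
    (heavy-tailed) observations relative to the sample covariance matrix."""
    n, p = X.shape
    C = np.eye(p)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        w = 1.0 / np.einsum('ij,jk,ik->i', X, Cinv, X)   # 1/(x_i^T C^-1 x_i)
        C = (1.0 - rho) * (p / n) * (X * w[:, None]).T @ X + rho * np.eye(p)
    return C

rng = np.random.default_rng(0)
C_hat = robust_shrinkage_scatter(rng.standard_normal((400, 4)), rho=0.3)
```

Bilinear forms of interest then take the shape a^T C_hat b for fixed vectors a, b, which is what the paper's fluctuation analysis concerns.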
On population size estimators in the Poisson mixture model.
Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua
2013-09-01
Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
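Two of the compared estimators are simple enough to sketch from a frequency-of-frequencies table (a sketch; freq[k] is the number of individuals observed exactly k times, and the table below is invented):

```python
import math

def chao_lower_bound(freq):
    """Chao's estimator n + f1^2 / (2 f2): a lower bound for population size."""
    n = sum(freq.values())                 # distinct individuals observed
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        return float(n)
    return n + f1 * f1 / (2.0 * f2)

def zelterman(freq):
    """Zelterman's estimator n / (1 - exp(-lam)) with lam = 2 f2 / f1."""
    n = sum(freq.values())
    lam = 2.0 * freq.get(2, 0) / freq.get(1, 1)
    return n / (1.0 - math.exp(-lam))

freq = {1: 10, 2: 5, 3: 2}               # 17 distinct individuals observed
chao_hat = chao_lower_bound(freq)        # 17 + 100/10 = 27.0
zelt_hat = zelterman(freq)
```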
International Nuclear Information System (INIS)
Grinev, V.G.; Kudinova, O.I.; Novokshonova, L.A.; Kuznetsov, S.P.; Udovenko, A.I.; Shelagin, A.V.
2006-01-01
Very cold neutrons (VCN) with wavelength λ > 4.0 nm are a convenient tool for investigating supramolecular structures of different natures. Applying the Born approximation (BA) to the analysis of the wavelength dependence of the VCN scattering cross sections, it is possible to obtain information about the average sizes (R) and concentrations of scattering particles with R ∼ λ. However, with increasing scatterer size the conditions for BA applicability can be violated. In this work we investigated the applicability of the BA, eikonal and geometric-optical approximations to the analysis of VCN scattering on spherical particles with R ≥ λ.
Photometric estimation of defect size in radiation direction
International Nuclear Information System (INIS)
Zuev, V.M.
1993-01-01
Factors affecting the accuracy of photometric estimation of defect size in the radiation transmission direction are analyzed. Experimentally obtained dependences of the contrast of a defect image on its size in the radiation transmission direction are presented. Practical recommendations for improving the accuracy of photometric estimation of defect size in the radiation transmission direction are developed.
Estimation of particle size distribution of nanoparticles from electrical ...
Indian Academy of Sciences (India)
2018-02-02
Feb 2, 2018 ... An indirect method of estimation of size distribution of nanoparticles in a nanocomposite is ... The present approach exploits DC electrical current–voltage ... the sizes of nanoparticles (NPs) by electrical characterization.
Better Size Estimation for Sparse Matrix Products
DEFF Research Database (Denmark)
Amossen, Rasmus Resen; Campagna, Andrea; Pagh, Rasmus
2010-01-01
We consider the problem of doing fast and reliable estimation of the number of non-zero entries in a sparse Boolean matrix product. Let n denote the total number of non-zero entries in the input matrices. We show how to compute a 1 ± ε approximation (with small probability of error) in expected t...
International Nuclear Information System (INIS)
Ruehrnschopf and, Ernst-Peter; Klingenbeck, Klaus
2011-01-01
The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper in which a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness, they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models: analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detectors, truncated data, and dual-source CT.
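The beam-scatter-kernel superposition approach mentioned in the record above can be illustrated with a minimal 2D sketch. The assumptions here are deliberately idealized (a spatially invariant Gaussian kernel and a scatter amplitude simply proportional to the primary signal); real kernels are object-dependent and deformed, exactly the issues the review discusses:

```python
import numpy as np

def kernel_superposition_scatter(primary, sigma=8.0, amplitude=0.05):
    """Illustrative (not clinical) beam-scatter-kernel superposition:
    each detector pixel spawns scatter proportional to its primary signal,
    spread by a broad Gaussian kernel; the total scatter estimate is the
    circular convolution of that amplitude with the kernel, done via FFT."""
    ny, nx = primary.shape
    y = np.fft.fftfreq(ny) * ny
    x = np.fft.fftfreq(nx) * nx
    Y, X = np.meshgrid(y, x, indexing="ij")
    kernel = np.exp(-(X ** 2 + Y ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                  # normalize to unit integral
    amp = amplitude * primary               # simple forward-scatter model
    return np.real(np.fft.ifft2(np.fft.fft2(amp) * np.fft.fft2(kernel)))
```

Because the kernel integrates to one, the estimated scatter carries `amplitude` times the total primary signal, redistributed smoothly over the detector.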
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests on data from a design with one factor at two levels, covering the sample size estimation formulas and their realization, both directly from the formulas and via the POWER procedure of SAS software, for quantitative and qualitative data. In addition, this article presents examples for analysis, which will play a guiding role for researchers implementing the repetition principle during the research design phase.
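For the simplest case covered by such articles, a two-sided difference test between two group means, the normal-approximation sample size formula can be sketched as follows (a generic textbook formula, not necessarily the article's exact procedure; the POWER procedure of SAS gives the exact t-based answer):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of means,
    normal approximation:
        n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2,
    where power = 1 - beta, delta is the difference to detect and sigma the
    common standard deviation."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2.0 * (sigma / delta) ** 2 * (z_a + z_b) ** 2)
```

For instance, detecting a difference of 5 with a common SD of 10 at α = 0.05 requires 63 subjects per group for 80% power and 85 per group for 90% power.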
Source term estimation for small sized HTRs
International Nuclear Information System (INIS)
Moormann, R.
1992-08-01
Accidents which have to be considered are core heat-up, reactivity transients, water or air ingress, and primary circuit depressurization. The main focus of this paper is on water/air ingress and depressurization, which requires consideration of fission product plateout under normal operating conditions; for the latter it is clearly shown that absorption (penetration) mechanisms are much less important than was sometimes assumed in the past. Source term estimation procedures for core heat-up events are briefly reviewed; reactivity transients are apparently covered by them. Besides a general literature survey, including identification of areas with insufficient knowledge, this paper contains some estimates of the thermomechanical behaviour of fission products in water and air ingress accidents. Typical source term examples are also presented. In an appendix, evaluations of the AVR experiments VAMPYR-I and -II with respect to plateout and fission product filter efficiency are outlined and used for a validation step of the new plateout code SPATRA. (orig.)
Analysis of Ion Composition Estimation Accuracy for Incoherent Scatter Radars
Martínez Ledesma, M.; Diaz, M. A.
2017-12-01
The Incoherent Scatter Radar (ISR) is one of the most powerful sounding methods developed to study the ionosphere. This radar system determines the plasma parameters by sending powerful electromagnetic pulses to the ionosphere and analyzing the received backscatter. This analysis provides information about parameters such as electron and ion temperatures, electron densities, ion composition, and ion drift velocities. Nevertheless, in some cases the ISR analysis has ambiguities in the determination of the plasma characteristics. Of particular relevance is the ion composition and temperature ambiguity between the F1 and the lower F2 layers. In this case very similar signals are obtained with different mixtures of molecular ions (NO2+ and O2+) and atomic oxygen ions (O+), and consequently it is not possible to completely discriminate between them. The most common solution to this problem is the use of empirical or theoretical models of the ionosphere in the fitting of ambiguous data. More recent works make use of parameters estimated from the plasma line band of the radar to reduce the number of parameters to determine. In this work we determine the estimation error of the ion composition ambiguity when plasma line electron density measurements are used. The sensitivity of the ion composition estimation has also been calculated as a function of the accuracy of the ionospheric model, showing that correct estimation depends strongly on the capacity of the model to approximate the real values. Monte Carlo simulations of data fitting at different signal-to-noise ratios (SNR) have been performed to obtain valid- and invalid-estimation probability curves. This analysis provides a method to determine the probability of erroneous estimation for different signal fluctuations. It can also be used as an empirical method to compare the efficiency of different algorithms and methods when solving the ion composition ambiguity.
Velten, Andreas
2017-05-01
Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales when imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption lengths and the phase function. We undertake a study of scattering, and of methods for imaging through scattering, across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time-of-flight information, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid laboratory research. We can also transfer knowledge and methodology between different fields.
An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems
International Nuclear Information System (INIS)
Fazli, Roohallah; Nakhkash, Mansor
2012-01-01
This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. The method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations; in the next stage, it employs an analytical formula that works as a spatial filter to determine which target locations correspond to actual ones. The ability of the method is examined for both the Born and multiple-scattering cases, and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions. (paper)
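The first stage described above, MUSIC localization, can be illustrated in a reduced one-dimensional setting: direction finding with a uniform linear array rather than the paper's 2D microwave imaging. The array geometry, source directions and noise level below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 0.5                                   # 8-element ULA, half-wavelength spacing
true_deg = np.array([-20.0, 30.0])              # two hypothetical source directions
snapshots = 200

def steering(theta):
    """Array response vectors for angles theta (radians), shape (M, len(theta))."""
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(np.deg2rad(true_deg))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + N                                   # simulated array data

R = X @ X.conj().T / snapshots                  # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
En = eigvecs[:, : M - 2]                        # noise subspace (2 sources assumed known)

grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2  # pseudospectrum
peaks = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
est_deg = np.rad2deg(grid[top2])                # peak locations = estimated directions
```

Steering vectors of true source directions are orthogonal to the noise subspace, so the pseudospectrum peaks there; the paper's second stage then filters such candidate peaks down to the actual targets.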
International Nuclear Information System (INIS)
Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.
2006-01-01
X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling
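The collimator-shadow idea above admits a minimal sketch. The geometry here is an assumption for illustration (the shadowed regions are taken to be the first and last few rows of the projection, and detector offsets and gains are ignored):

```python
import numpy as np

def shadow_scatter_correct(proj, edge=8):
    """Estimate scatter in one projection from rows behind the collimator:
    average the top and bottom `edge` rows (assumed to contain scatter only),
    interpolate linearly between them down each column, and subtract the
    result, clipping negative values to zero."""
    top = proj[:edge].mean(axis=0)              # scatter level along the top edge
    bottom = proj[-edge:].mean(axis=0)          # scatter level along the bottom edge
    w = np.linspace(0.0, 1.0, proj.shape[0])[:, None]
    scatter = (1.0 - w) * top[None, :] + w * bottom[None, :]
    return np.clip(proj - scatter, 0.0, None), scatter
```

As in the record, the correction needs no prior information: everything is read off the projection itself, which is what makes the approach robust to geometry, patient size and motion.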
Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate
Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan
2017-08-01
Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. ‘scatter-tails’. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the ‘scatter-tails’. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical ‘halo’ artifacts that are often observed in the vicinity of high focal uptake regions.
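The idea of estimating a scatter scale as an additional ML unknown can be sketched in a toy setting. Treating the scaled scatter s·k as one extra basis function in the forward model yields the familiar multiplicative MLEM update for s alongside the image update. This sketch uses a single global scale rather than the plane-dependent scales of the paper, and a synthetic random system matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_lor, n_vox = 400, 10                 # toy sinogram and image sizes
P = rng.random((n_lor, n_vox))         # toy system matrix
x_true = rng.random(n_vox) * 5.0       # ground-truth activity
k = rng.random(n_lor) + 0.5            # unit scatter estimate (shape only)
s_true = 2.5                           # true scatter scale
y = rng.poisson(P @ x_true + s_true * k).astype(float)   # Poisson data

x = np.ones(n_vox)                     # initial image
s = 1.0                                # initial scatter scale
sens = P.sum(axis=0)                   # sensitivity image P^T 1
for _ in range(500):
    ybar = P @ x + s * k               # expected counts under current estimate
    ratio = y / ybar
    x *= (P.T @ ratio) / sens          # standard MLEM image update
    s *= (k @ ratio) / k.sum()         # same MLEM update applied to the extra
                                       # "scatter" basis function k
```

Unlike tail fitting, the scale update uses counts along every LOR; at convergence the fitted mean reproduces the Poisson data to within statistical scatter.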
Muon energy estimate through multiple scattering with the MACRO detector
Energy Technology Data Exchange (ETDEWEB)
Ambrosio, M.; Antolini, R.; Auriemma, G.; Bakari, D.; Baldini, A.; Barbarino, G.C.; Barish, B.C.; Battistoni, G.; Becherini, Y.; Bellotti, R.; Bemporad, C.; Bernardini, P.; Bilokon, H.; Bloise, C.; Bower, C.; Brigida, M.; Bussino, S.; Cafagna, F.; Calicchio, M.; Campana, D.; Candela, A.; Carboni, M.; Caruso, R.; Cassese, F.; Cecchini, S.; Cei, F.; Chiarella, V.; Choudhary, B.C.; Coutu, S.; Cozzi, M.; De Cataldo, G.; De Deo, M.; Dekhissi, H.; De Marzo, C.; De Mitri, I.; Derkaoui, J.; De Vincenzi, M.; Di Credico, A.; Dincecco, M.; Erriquez, O.; Favuzzi, C.; Forti, C.; Fusco, P.; Giacomelli, G.; Giannini, G.; Giglietto, N.; Giorgini, M.; Grassi, M.; Gray, L.; Grillo, A.; Guarino, F.; Gustavino, C.; Habig, A.; Hanson, K.; Heinz, R.; Iarocci, E.; Katsavounidis, E.; Katsavounidis, I.; Kearns, E.; Kim, H.; Kyriazopoulou, S.; Lamanna, E.; Lane, C.; Levin, D.S.; Lindozzi, M.; Lipari, P.; Longley, N.P.; Longo, M.J.; Loparco, F.; Maaroufi, F.; Mancarella, G.; Mandrioli, G.; Margiotta, A.; Marini, A.; Martello, D.; Marzari-Chiesa, A.; Mazziotta, M.N.; Michael, D.G.; Monacelli, P.; Montaruli, T.; Monteno, M.; Mufson, S.; Musser, J.; Nicolo, D.; Nolty, R.; Orth, C.; Osteria, G.; Palamara, O.; Patera, V.; Patrizii, L.; Pazzi, R.; Peck, C.W.; Perrone, L.; Petrera, S.; Pistilli, P.; Popa, V.; Raino, A.; Reynoldson, J.; Ronga, F.; Rrhioua, A.; Satriano, C.; Scapparone, E. E-mail: eugenio.scapparone@bo.infn.it; Scholberg, K.; Sciubba, A.; Serra, P.; Sioli, M. E-mail: maximiliano.sioli@bo.infn.it; Sirri, G.; Sitta, M.; Spinelli, P.; Spinetti, M.; Spurio, M.; Steinberg, R.; Stone, J.L.; Sulak, L.R.; Surdo, A.; Tarle, G.; Tatananni, E.; Togo, V.; Vakili, M.; Walter, C.W.; Webb, R
2002-10-21
Muon energy measurement represents an important issue for any experiment addressing neutrino-induced up-going muon studies. Since the neutrino oscillation probability depends on the neutrino energy, a measurement of the muon energy adds an important piece of information concerning the neutrino system. We show in this paper how the MACRO limited streamer tube system can be operated in drift mode by using the TDCs included in the QTPs, an electronics designed for magnetic monopole search. An improvement of the space resolution is obtained, through an analysis of the multiple scattering of muon tracks as they pass through our detector. This information can be used further to obtain an estimate of the energy of muons crossing the detector. Here we present the results of two dedicated tests, performed at the CERN PS-T9 and SPS-X7 beam lines, to provide a full check of the electronics and to explore the feasibility of such a multiple scattering analysis. We show that by using a neural network approach, we are able to reconstruct the muon energy for E_μ < 40 GeV. The test beam data provide an absolute energy calibration, which allows us to apply this method to MACRO data.
Development of an ejecta particle size measurement diagnostic based on Mie scattering
Energy Technology Data Exchange (ETDEWEB)
Schauer, Martin Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Buttler, William Tillman [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Frayer, Daniel K. [National Security Tech, Inc., Los Alamos, NM (United States); Grover, Michael [National Security Technologies, Santa Barbara, CA (United States). Special Technologies Lab.; Monfared, Shabnam Kalighi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stevens, Gerald D. [National Security Technologies, Santa Barbara, CA (United States). Special Technologies Lab.; Stone, Benjamin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Turley, William Dale [National Security Technologies, Santa Barbara, CA (United States). Special Technologies Lab.
2017-09-27
The goal of this work is to determine the feasibility of extracting the size of particles ejected from shocked metal surfaces (ejecta) from the angular distribution of light scattered by a cloud of such particles. The basis of the technique is the Mie theory of scattering, and implicit in this approach are the assumptions that the scattering particles are spherical and that single-scattering conditions prevail. The meaning of this latter assumption, as far as experimental conditions are concerned, will become clear later. The solution to Maxwell's equations for spherical particles illuminated by a plane electromagnetic wave was derived by Gustav Mie more than 100 years ago, and several modern treatises discuss this solution in great detail. The solution is a complicated series expansion of the scattered electric field, as well as the field within the particle, from which the total scattering and absorption cross sections as well as the angular distribution of scattered intensity can be calculated numerically. The detailed nature of the scattering is determined by the complex index of refraction of the particle material as well as the particle size parameter x, which is the product of the wavenumber of the incident light and the particle radius, i.e. x = 2πr/λ. Figure 1 shows the angular distribution of scattered light for different particle size parameters and two orthogonal incident light polarizations as calculated using the Mie solution. The scattering pattern is strongly dependent on the particle size parameter, becoming more forward-directed and less polarization-dependent as the particle size parameter increases. This trend forms the basis for the diagnostic design.
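The Mie series described above can be evaluated numerically with the standard recurrences. The following is a generic Bohren-Huffman-style sketch (not the diagnostic's actual code); m is the relative refractive index, x the size parameter, and mu = cos θ:

```python
import math

def mie(m, x, mu):
    """Mie amplitudes S1, S2 at cos(theta) = mu and scattering efficiency Qsca
    for relative refractive index m and size parameter x = 2*pi*r/lambda,
    via the standard Bohren & Huffman recurrences."""
    nmax = int(round(x + 4.0 * x ** (1.0 / 3.0) + 2.0))
    mx = m * x
    # Logarithmic derivative D_n(mx), computed by downward recurrence.
    D = [0j] * (nmax + 16)
    for n in range(nmax + 15, 0, -1):
        D[n - 1] = n / mx - 1.0 / (D[n] + n / mx)
    psi_m2, psi_m1 = math.cos(x), math.sin(x)    # Riccati-Bessel psi_{-1}, psi_0
    chi_m2, chi_m1 = -math.sin(x), math.cos(x)   # Riccati-Bessel chi_{-1}, chi_0
    xi_m1 = complex(psi_m1, -chi_m1)             # xi_0 = psi_0 - i*chi_0
    pi_m1, pi_cur = 0.0, 1.0                     # angular functions pi_0, pi_1
    S1 = S2 = 0j
    qsum = 0.0
    for n in range(1, nmax + 1):
        psi = (2 * n - 1) / x * psi_m1 - psi_m2
        chi = (2 * n - 1) / x * chi_m1 - chi_m2
        xi = complex(psi, -chi)
        a = ((D[n] / m + n / x) * psi - psi_m1) / ((D[n] / m + n / x) * xi - xi_m1)
        b = ((m * D[n] + n / x) * psi - psi_m1) / ((m * D[n] + n / x) * xi - xi_m1)
        tau = n * mu * pi_cur - (n + 1) * pi_m1
        f = (2 * n + 1) / (n * (n + 1))
        S1 += f * (a * pi_cur + b * tau)
        S2 += f * (a * tau + b * pi_cur)
        qsum += (2 * n + 1) * (abs(a) ** 2 + abs(b) ** 2)
        psi_m2, psi_m1, chi_m2, chi_m1, xi_m1 = psi_m1, psi, chi_m1, chi, xi
        pi_m1, pi_cur = pi_cur, ((2 * n + 1) * mu * pi_cur - (n + 1) * pi_m1) / n
    return S1, S2, 2.0 / x ** 2 * qsum
```

For small x the result approaches the Rayleigh limit, while for large x the forward lobe |S1(0°)| dominates |S1(180°)|, the trend the diagnostic exploits.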
Multiphase flow parameter estimation based on laser scattering
Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.
2015-07-01
The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such a flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rates of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied in safety applications due to its quick response time.
International Nuclear Information System (INIS)
Alger, T.W.
1979-01-01
A new method for determining the particle-size-distribution function of a polydispersion of spherical particles is presented. The inversion technique for the particle-size-distribution function is based upon matching the measured intensity profile of angularly scattered light with a summation of the intensity contributions of a series of appropriately spaced, narrow-band size-distribution functions. A numerical optimization technique is used to determine the strengths of the individual bands that yield the best agreement with the measured scattered-light-intensity profile. Because Mie theory is used, the method is applicable to spherical particles of all sizes. Several numerical examples demonstrate the application of this inversion method.
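The inversion idea above, matching the measured profile with a sum of narrow-band contributions and optimizing the band strengths, can be sketched with a toy linear model. Gaussian basis profiles stand in for tabulated Mie kernels, and plain projected gradient descent stands in for the article's optimizer:

```python
import numpy as np

rng = np.random.default_rng(2)
angles = np.linspace(0.0, 1.0, 120)        # normalized scattering-angle axis
centers = np.linspace(0.1, 0.9, 9)         # centers of the narrow size bands
# Toy angular profile of each band: Gaussians stand in for Mie-computed kernels.
B = np.exp(-((angles[:, None] - centers[None, :]) ** 2) / (2.0 * 0.05 ** 2))
w_true = np.array([0.0, 0.5, 2.0, 1.0, 0.0, 0.0, 0.8, 0.0, 0.2])
I_meas = B @ w_true + 0.005 * rng.standard_normal(angles.size)  # noisy profile

w = np.ones_like(w_true)                   # initial band strengths
step = 1.0 / np.linalg.norm(B.T @ B, 2)    # step at 1/L for a stable descent
for _ in range(5000):
    w -= step * (B.T @ (B @ w - I_meas))   # gradient of 0.5*||B w - I_meas||^2
    w = np.maximum(w, 0.0)                 # band strengths must be non-negative
```

The non-negativity projection encodes the physical constraint that a size-distribution band cannot contribute negative intensity; with well-separated bands the true strengths are recovered closely.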
How to Estimate and Interpret Various Effect Sizes
Vacha-Haase, Tammi; Thompson, Bruce
2004-01-01
The present article presents a tutorial on how to estimate and interpret various effect sizes. The 5th edition of the Publication Manual of the American Psychological Association (2001) described the failure to report effect sizes as a "defect" (p. 5), and 23 journals have published author guidelines requiring effect size reporting. Although…
Guided wave crack detection and size estimation in stiffened structures
Bhuiyan, Md Yeasin; Faisal Haider, Mohammad; Poddar, Banibrata; Giurgiutiu, Victor
2018-03-01
Structural health monitoring (SHM) and nondestructive evaluation (NDE) deal with the nondestructive inspection of defects, corrosion and leaks in engineering structures by using ultrasonic guided waves. In the past, simplistic structures were often considered for analyzing the guided wave interaction with defects. In this study, we focused on a more realistic and relatively complicated structure for detecting a defect by using a non-contact sensing approach. A plate with a stiffener was considered for analyzing the guided wave interactions. Piezoelectric wafer active transducers were used to produce excitation in the structure. The excitation generated multimodal guided waves (aka Lamb waves) that propagate in the plate with the stiffener. The presence of the stiffener in the plate generated scattered waves. The direct wave and the additional scattered waves from the stiffener were experimentally recorded and studied. These waves were considered as the pristine case in this research. A fine horizontal semi-circular crack was then manufactured in the same stiffener by electric discharge machining. The presence of the crack in the stiffener produces additional scattered waves as well as trapped waves. These scattered waves and trapped wave modes from the cracked stiffener were experimentally measured by using a scanning laser Doppler vibrometer (SLDV). These waves were analyzed and compared with those from the pristine case. The analyses suggested that both the size and the shape of the horizontal crack may be predicted from the pattern of the scattered waves. Different features (reflection, transmission, and mode-conversion) of the scattered wave signals were analyzed. We found that the direct transmission feature for the incident A0 wave mode and the mode-conversion feature for the incident S0 mode are most suitable for detecting the crack in the stiffener. The reflection feature may give a better idea of the size of the crack.
Effect size estimates: current use, calculations, and interpretation.
Fritz, Catherine O; Morris, Peter E; Richler, Jennifer J
2012-02-01
The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η2 was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis.
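The two most-reported estimates in the survey above, Cohen's d for t tests and partial η² for ANOVA, are simple to compute. A small stdlib sketch; partial η² is recovered here from a reported F statistic and its degrees of freedom:

```python
import math
from statistics import mean, variance

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s_pooled = math.sqrt(((n1 - 1) * variance(group1) +
                          (n2 - 1) * variance(group2)) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / s_pooled

def partial_eta_squared(F, df_effect, df_error):
    """Partial eta squared recovered from a reported F statistic:
    eta_p^2 = F * df_effect / (F * df_effect + df_error)."""
    return (F * df_effect) / (F * df_effect + df_error)
```

The second function is handy precisely in the situation the article documents: an F value is reported but its effect size is not.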
Aptowicz, K. B.; Pan, Y.; Martin, S.; Fernandez, E.; Chang, R.; Pinnick, R. G.
2013-12-01
We report on an experimental approach that provides insight into how particle size and shape affect the scattering phase function of atmospheric aerosol particles. Central to our approach is the design of an apparatus that measures the forward and backward scattering hemispheres (scattering patterns) of individual atmospheric aerosol particles in the coarse-mode range. The size and shape of each particle are discerned from the corresponding scattering pattern. In particular, autocorrelation analysis is used to differentiate between spherical and non-spherical particles, the calculated asphericity factor is used to characterize the morphology of non-spherical particles, and the integrated irradiance is used for particle sizing. We found that the fraction of spherical particles decays exponentially with particle size, decreasing from 11% for particles on the order of 1 micrometer to less than 1% for particles over 5 micrometers. The average phase functions of subpopulations of particles, grouped by size and morphology, are determined by averaging their corresponding scattering patterns. The phase functions of spherical and non-spherical atmospheric particles are shown to diverge with increasing size. In addition, the phase function of non-spherical particles is found to vary little as a function of the asphericity factor.
Estimation of sample size and testing power (part 5).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-02-01
Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, their realization based on the formulas and on the POWER procedure of SAS software, and elaborated them with examples, which will benefit researchers in implementing the repetition principle.
Estimating spatio-temporal dynamics of size-structured populations
DEFF Research Database (Denmark)
Kristensen, Kasper; Thygesen, Uffe Høgsbro; Andersen, Ken Haste
2014-01-01
with simple stock dynamics, to estimate simultaneously how size distributions and spatial distributions develop in time. We demonstrate the method for a cod population sampled by trawl surveys. Particular attention is paid to correlation between size classes within each trawl haul due to clustering...... of individuals with similar size. The model estimates growth, mortality and reproduction, after which any aspect of size-structure, spatio-temporal population dynamics, as well as the sampling process can be probed. This is illustrated by two applications: 1) tracking the spatial movements of a single cohort...
Beam size effects in the radiative Bhabha scattering
International Nuclear Information System (INIS)
Szczekowski, M.
1990-01-01
In some electromagnetic processes the measured cross section can be substantially smaller than that calculated in standard quantum electrodynamics. The process of single bremsstrahlung, e⁺e⁻ → e⁺e⁻γ, is an example of such an effect. If the size of the effect for large-angle γ radiation is similar to its magnitude at low angles, then standard calculations of the radiative Bhabha background to, e.g., the reaction used in counting the number of neutrino generations, e⁺e⁻ → νν̄γ, at LEP energies can be overestimated by 10-20%. 5 refs., 5 figs. (author)
Food photographs in portion size estimation among adolescent Mozambican girls.
Korkalo, Liisa; Erkkola, Maijaliisa; Fidalgo, Lourdes; Nevalainen, Jaakko; Mutanen, Marja
2013-09-01
To assess the validity of food photographs in portion size estimation among adolescent girls in Mozambique. The study was carried out in preparation for the larger ZANE study, which used the 24 h dietary recall method. Life-sized photographs of three portion sizes of two staple foods and three sauces were produced. Participants ate weighed portions of one staple food and one sauce. After the meal, they were asked to estimate the amount of food with the aid of the food photographs. Zambezia Province, Mozambique. Ninety-nine girls aged 13–18 years. The mean differences between estimated and actual portion sizes relative to the actual portion size ranged from −19% to 8% for different foods. The respective mean difference for all foods combined was −5% (95% CI −12, 2%). Larger portions of the staple foods, especially, were often underestimated. For the staple foods, between 62% and 64% of the participants were classified into the same thirds of the distribution of estimated and actual food consumption; for the sauces, the percentages ranged from 38% to 63%. Bland–Altman plots showed wide limits of agreement. Using life-sized food photographs among adolescent Mozambican girls resulted in rather large variation in the accuracy of individuals' estimates. The ability to rank individuals according to their consumption was, however, satisfactory for most foods. There seems to be a need to further develop and test food photographs used in different populations in Sub-Saharan Africa to improve the accuracy of portion size estimates.
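The relative-difference and limits-of-agreement statistics reported above can be computed as in this minimal sketch; `bland_altman_relative` is an illustrative helper run on toy data, not the study's code.

```python
from statistics import mean, stdev

def bland_altman_relative(estimated, actual):
    """Mean relative difference (%) between estimated and actual portions,
    plus the 95% Bland-Altman limits of agreement (mean +/- 1.96 SD)."""
    rel = [(e - a) / a * 100.0 for e, a in zip(estimated, actual)]
    m, s = mean(rel), stdev(rel)
    return m, (m - 1.96 * s, m + 1.96 * s)

# toy data: two under- and two over-estimated 100 g portions
m, (lo, hi) = bland_altman_relative([90, 110, 95, 105], [100, 100, 100, 100])
```

A mean near zero with wide limits of agreement corresponds exactly to the paper's finding: little group-level bias but large individual variation.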
Small angle neutron scattering measurements of magnetic cluster sizes in magnetic recording disks
Toney, M
2003-01-01
We describe Small Angle Neutron Scattering measurements of the magnetic cluster size distributions for several longitudinal magnetic recording media. We find that the average magnetic cluster size is slightly larger than the average physical grain size, that there is a broad distribution of cluster sizes, and that the cluster size is inversely correlated with the media signal-to-noise ratio. These results show that intergranular magnetic coupling in these media is small, and they provide empirical data for the cluster-size distribution that can be incorporated into models of magnetic recording.
Development of multiple scattering lidar to retrieve cloud extinction and size information
International Nuclear Information System (INIS)
Kim, Dukhyeon; Cheong, Hai Du; Kim, Young Gi; Park, Sun Ho
2008-01-01
Traditional Mie scattering cloud lidars have limitations arising from multiple scattering, which induces depolarization in spherical particles and enhances the apparent extinction coefficient. As a result, the phase of water cannot be determined with a depolarization lidar, nor can the extinction coefficient be measured with a single-FOV (field of view) Mie cloud lidar system. In this study, we have developed a multiple-field-of-view Mie cloud lidar system that, through correction of multiple scattering effects, yields extensive information about cloud droplets: effective size, number density, extinction coefficient, and phase of water. The system comprises 32 different pinholes with diameters ranging from 100 μm to 8 mm, located at the focal plane of a parabolic mirror; the minimum FOV is thus 67 μrad and the maximum 5.3 mrad (Figure 1 shows the schematic diagram and a picture of the pinholes). Figure 2 shows a Monte Carlo simulation of multiple scattering photons versus cloud depth, assuming a wavelength-normalized aerosol size x = 100 and a cloud extinction coefficient of 0.01 m⁻¹. By measuring the FOV-dependent signals and the aerosol extinction coefficient, we can extract the effective droplet size through the following equations, where θ_d is the aerosol effective size, and z_j, f, and Θ(z) are the height, an aerosol-density-dependent function, and the angular size of the lidar signal at height z. Finally, f(z) depends on the light mean free path and the number of scattering events.
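For a pinhole of diameter d at the focal plane of a mirror of focal length f, the field of view is approximately d/f. A focal length of about 1.5 m — an assumption inferred from the quoted 67 μrad and 5.3 mrad figures, not stated in the abstract — reproduces the stated FOV range:

```python
def pinhole_fov(diameter_m, focal_length_m=1.5):
    """Angular field of view of a receiver pinhole at the focal plane of the
    collecting mirror: FOV ~ d / f (small-angle approximation).
    The 1.5 m focal length is an assumed value inferred from the quoted FOVs."""
    return diameter_m / focal_length_m

fov_min = pinhole_fov(100e-6)  # smallest pinhole, 100 um
fov_max = pinhole_fov(8e-3)    # largest pinhole, 8 mm
```

With these numbers the smallest pinhole gives roughly 67 μrad and the largest roughly 5.3 mrad, matching the abstract.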
Sizing of single evaporating droplet with Near-Forward Elastic Scattering Spectroscopy
Woźniak, M.; Jakubczyk, D.; Derkachov, G.; Archer, J.
2017-11-01
We have developed an optical setup and related numerical models to study the evolution of single evaporating micro-droplets by analysis of their spectral properties. Our approach combines the advantages of electrodynamic trapping with broadband spectral analysis under supercontinuum laser illumination. The elastically scattered light within the spectral range of 500-900 nm is observed by a spectrometer placed at near-forward scattering angles between 4.3° and 16.2° and compared with a numerically generated lookup table of broadband Mie scattering. Our solution has been successfully applied to infer the size evolution of evaporating droplets of pure liquids (diethylene and ethylene glycol) and suspensions of nanoparticles (silica and gold nanoparticles in diethylene glycol), with an accuracy of up to ±25 nm. The results have been compared with previously developed sizing techniques: (i) the Mie Scattering Lookup Table Method, based on analysis of Mie scattering images, and (ii) droplet weighing. Our approach makes it possible to handle levitating objects over a much larger size range (radius from 0.5 μm to 30 μm) than optical tweezers allow (typically radius below 8 μm), and to analyse them over a much wider spectral range than with commonly used LED sources.
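The lookup-table comparison step can be sketched generically: normalize out overall intensity, then pick the tabulated spectrum whose shape best matches the measurement. The toy spectra and function name below are illustrative placeholders, not Mie calculations or the authors' code.

```python
from math import sqrt

def best_match_radius(measured, table, radii):
    """Return the radius whose tabulated spectrum best matches a measurement.
    table[i] is the modelled spectrum for radii[i]; spectral *shapes* are
    compared after normalising out the overall intensity."""
    def unit(v):
        n = sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    m = unit(measured)
    def shape_error(row):
        return sum((a - b) ** 2 for a, b in zip(unit(row), m))
    return min(zip(radii, table), key=lambda rt: shape_error(rt[1]))[0]

# toy lookup table: three "spectra" for radii 1, 2, 3 (arbitrary units)
table = [[1.0, 2.0, 3.0], [2.0, 2.0, 1.0], [1.0, 1.0, 2.0]]
r = best_match_radius([4.0, 4.0, 2.0], table, [1.0, 2.0, 3.0])
```

Because the measured spectrum here is just twice the second table row, the intensity-normalised match returns the second radius.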
Optimization-based scatter estimation using primary modulation for computed tomography
Energy Technology Data Exchange (ETDEWEB)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao@sjtu.edu.cn [School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Song, Ying [Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu 610041 (China)
2016-08-15
Purpose: Scatter reduces image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method obtains the primary and scatter simultaneously in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In primary modulation, the primary is modulated while the scatter remains smooth, by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of the penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy of scatter estimation and correction. This method is promising for scatter correction of various x-ray imaging modalities, such as x-ray radiography, cone beam CT, and fourth-generation CT.
Directory of Open Access Journals (Sweden)
Lee Chia-Wei
2010-12-01
Full Text Available Abstract Background Understanding the endocytosis process of gold nanoparticles (AuNPs) is important for drug delivery and photodynamic therapy applications. Endocytosis in living cells is usually studied by fluorescence microscopy, but fluorescent labeling suffers from photobleaching, and quantitative estimation of cellular uptake is not easy. In this paper, the size-dependent endocytosis of AuNPs was investigated by using plasmonic scattering images, without any labeling. Results The scattering images of AuNPs and the vesicles were mapped by using optical sectioning microscopy with dark-field illumination. AuNPs scatter strongly at 550-600 nm wavelengths due to localized surface plasmon resonances. Using an enhanced contrast between yellow and blue CCD images, AuNPs can be well distinguished from cellular organelles. Tracking of AuNPs coated with aptamers for the surface mucin glycoprotein shows that AuNPs attached to the extracellular matrix and moved towards the center of the cell. Most 75-nm AuNPs moved to the top of the cells, while many 45-nm AuNPs entered cells through endocytosis and accumulated in endocytic vesicles. The amount of cellular uptake decreased with increasing particle size. Conclusions We quantitatively studied the endocytosis of AuNPs of different sizes in various cancer cells. The plasmonic scattering images confirm the size-dependent endocytosis of AuNPs. The 45-nm AuNPs are better suited for drug delivery due to their higher uptake rate. On the other hand, large AuNPs are immobilized on the cell membrane and can be used to reconstruct the cell morphology.
Particle size distribution models of small angle neutron scattering patterns of ferrofluids
International Nuclear Information System (INIS)
Sistin Asri Ani; Darminto; Edy Giri Rachman Putra
2009-01-01
The Fe3O4 ferrofluid samples were synthesized by a co-precipitation method. Investigation of ferrofluid microstructure is known to be one of the most important problems, because the presence of aggregates and their internal structure greatly influence the properties of ferrofluids. The size and the size dispersion of the particles in the ferrofluids were determined assuming a log-normal distribution of particle radius. The scattering patterns measured by small-angle neutron scattering were fitted with the theoretical scattering functions of two limiting models: a log-normal sphere distribution and a fractal aggregate. Two types of particle are detected: presumably primary particles of 30 Å in radius and secondary fractal aggregates of 200 Å, with polydispersity of 0.47 up to 0.53. (author)
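A log-normal number distribution of particle radius, as assumed above, can be written down and sanity-checked as follows; the median radius and polydispersity values used here are illustrative, not the fitted ones.

```python
from math import exp, log, pi, sqrt

def lognormal_pdf(r, r_median, sigma):
    """Log-normal number distribution of particle radius r,
    with median radius r_median and polydispersity parameter sigma."""
    return (exp(-(log(r / r_median)) ** 2 / (2.0 * sigma ** 2))
            / (r * sigma * sqrt(2.0 * pi)))

# sanity check: the density integrates to ~1
# (median 30 angstrom, polydispersity 0.5 -- illustrative values)
dr = 0.01
area = sum(lognormal_pdf(i * dr, 30.0, 0.5) for i in range(1, 30001)) * dr
```

The fitted models in the paper would use such a density inside the sphere form-factor average; here only the distribution itself is sketched.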
Population estimates of extended family structure and size.
Garceau, Anne; Wideroff, Louise; McNeel, Timothy; Dunn, Marsha; Graubard, Barry I
2008-01-01
Population-based estimates of biological family size can be useful for planning genetic studies, assessing how distributions of relatives affect disease associations with family history and estimating prevalence of potential family support. Mean family size per person is estimated from a population-based telephone survey (n = 1,019). After multivariate adjustment for demographic variables, older and non-White respondents reported greater mean numbers of total, first- and second-degree relatives. Females reported more total and first-degree relatives, while less educated respondents reported more second-degree relatives. Demographic differences in family size have implications for genetic research. Therefore, periodic collection of family structure data in representative populations would be useful. Copyright 2008 S. Karger AG, Basel.
Estimating the average grain size of metals - approved standard 1969
International Nuclear Information System (INIS)
Anon.
1975-01-01
These methods cover procedures for estimating, and rules for expressing, the average grain size of all metals consisting entirely, or principally, of a single phase. The methods may also be used for any structures having appearances similar to those of the metallic structures shown in the comparison charts. The three basic procedures for grain size estimation discussed are the comparison procedure, the intercept (or Heyn) procedure, and the planimetric (or Jeffries) procedure. For specimens consisting of equiaxed grains, the method of comparing the specimen with a standard chart is most convenient and is sufficiently accurate for most commercial purposes. For high degrees of accuracy in estimating grain size, the intercept or planimetric procedures may be used.
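For the planimetric (Jeffries) procedure, the ASTM grain size number G is defined through n = 2^(G−1), where n is the number of grains per square inch at 100× magnification; a minimal sketch of that relation (the standard's counting rules for boundary grains are not reproduced here):

```python
from math import log2

def astm_grain_size_number(n_per_sq_inch_100x):
    """ASTM grain size number G from the planimetric (Jeffries) count:
    n = 2**(G - 1), with n the number of grains per square inch
    observed at 100x magnification."""
    return 1.0 + log2(n_per_sq_inch_100x)

G = astm_grain_size_number(64)  # 64 grains per square inch at 100x
```

A doubling of the grain count raises G by exactly one, which is why a one-unit change in grain size number corresponds to a factor of about √2 in mean grain diameter.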
International Nuclear Information System (INIS)
Chirikov, S N
2016-01-01
Measurements of the size distributions of particles in aqueous suspensions of ZnO, CuO, TiO2, and BaTiO3 by laser polarimetry and dynamic light scattering are considered. These measurements are compared with the results obtained by electron microscopy. It is shown that the laser polarimetry method gives more accurate results for size parameter values greater than 1-2. (paper)
Modified random hinge transport mechanics and multiple scattering step-size selection in EGS5
International Nuclear Information System (INIS)
Wilderman, S.J.; Bielajew, A.F.
2005-01-01
The new transport mechanics in EGS5 allows for significantly longer electron transport step sizes, and hence shorter computation times, than required for identical problems in EGS4. But as with all Monte Carlo electron transport algorithms, certain classes of problems exhibit step-size dependencies even when operating within recommended ranges, sometimes making selection of step sizes a daunting task for novice users. Further contributing to this problem, because multiple scattering and continuous energy loss are decoupled in the dual random hinge transport mechanics of EGS5, there are two independent step sizes in EGS5, one for multiple scattering and one for continuous energy loss, each of which influences speed and accuracy in a different manner. Furthermore, whereas EGS4 used a single value of fractional energy loss (ESTEPE) to determine step sizes at all energies, EGS5 permits the fractional energy loss values used to determine both the multiple scattering and continuous energy loss step sizes to vary with energy, to increase performance by decreasing the effort expended simulating lower energy particles. As a result, the user must specify four fractional energy loss values when optimizing computations for speed. Thus, in order to simplify step-size selection and to mitigate step-size dependencies, a method has been devised to automatically optimize step-size selection based on a single material-dependent input related to the size of the problem tally region. In this paper we discuss the new transport mechanics in EGS5 and describe the automatic step-size optimization algorithm. (author)
Estimation of portion size in children's dietary assessment: lessons learnt.
Foster, E; Adamson, A J; Anderson, A S; Barton, K L; Wrieden, W L
2009-02-01
Assessing the dietary intake of young children is challenging. In any 1 day, children may have several carers responsible for providing them with their dietary requirements, and once children reach school age, traditional methods such as weighing all items consumed become impractical. As an alternative to weighed records, food portion size assessment tools are available to assist subjects in estimating the amounts of foods consumed. Existing food photographs designed for use with adults and based on adult portion sizes have been found to be inappropriate for use with children. This article presents a review and summary of a body of work carried out to improve the estimation of portion sizes consumed by children. Feasibility work was undertaken to determine the accuracy and precision of three portion size assessment tools; food photographs, food models and a computer-based Interactive Portion Size Assessment System (IPSAS). These tools were based on portion sizes served to children during the National Diet and Nutrition Survey. As children often do not consume all of the food served to them, smaller portions were included in each tool for estimation of leftovers. The tools covered 22 foods, which children commonly consume. Children were served known amounts of each food and leftovers were recorded. They were then asked to estimate both the amount of food that they were served and the amount of any food leftover. Children were found to estimate food portion size with an accuracy approaching that of adults using both the food photographs and IPSAS. Further development is underway to increase the number of food photographs and to develop IPSAS to cover a much wider range of foods and to validate the use of these tools in a 'real life' setting.
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-01-01
ABSTRACT Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul to rest in peace. Bolton’s ratios help in estimating overbite, overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships and identification of occlusal misfit produced by tooth size discrepancies. Aim: To determine any difference in tooth size discrepancy in a...
Traceable size determination of PMMA nanoparticles based on Small Angle X-ray Scattering (SAXS)
Energy Technology Data Exchange (ETDEWEB)
Gleber, G; Cibik, L; Mueller, P; Krumrey, M [Physikalisch-Technische Bundesanstalt (PTB), Abbestrasse 2-12, 10587 Berlin (Germany); Haas, S; Hoell, A, E-mail: gudrun.gleber@ptb.d [Helmholtz-Zentrum-Berlin fuer Materialien und Energie (HZB), Albert-Einstein-Strasse 15, 12489 Berlin (Germany)
2010-10-01
The size and size distribution of PMMA nanoparticles has been investigated with SAXS (small angle X-ray scattering) using monochromatized synchrotron radiation. The uncertainty has contributions from the wavelength or photon energy of the radiation, the scattering angle and the fit procedure for the obtained scattering curves. The wavelength can be traced back to the lattice constant of silicon, and the scattering angle is traceable via geometric measurements of the detector pixel size and the distance between the sample and the detector. SAXS measurements and data evaluations have been performed at different distances and photon energies for two PMMA nanoparticle suspensions with low polydispersity and nominal diameters of 108 nm and 192 nm, respectively, as well as for a mixture of both. The relative variation of the diameters obtained for different experimental conditions was below ±0.3%. The determined number-weighted mean diameters of (109.0 ± 0.7) nm and (188.0 ± 1.3) nm, respectively, are close to the nominal values.
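The traceability chain described above can be sketched numerically: the scattering angle follows from the pixel size and the sample-detector distance, q = (4π/λ)·sin(θ/2), and the first minimum of the sphere form factor (at qR ≈ 4.493) links the q-scale to the particle diameter. The function names and example geometry below are illustrative assumptions, not the PTB setup.

```python
from math import atan, sin, pi

def q_from_geometry(n_pixels, pixel_size_m, distance_m, wavelength_m):
    """Momentum transfer q = (4*pi/lambda)*sin(theta/2), with the scattering
    angle obtained from traceable lengths: theta = atan(n*p / L)."""
    theta = atan(n_pixels * pixel_size_m / distance_m)
    return 4.0 * pi / wavelength_m * sin(theta / 2.0)

def sphere_diameter_from_first_minimum(q_min):
    """The homogeneous-sphere form factor has its first minimum at q*R ~ 4.493."""
    return 2.0 * 4.493 / q_min

# illustrative geometry: 172 um pixels, 4 m distance, 0.124 nm wavelength
q_at_pixel_100 = q_from_geometry(100, 172e-6, 4.0, 1.24e-10)
```

For a 108 nm sphere the first form-factor minimum sits at q = 4.493/R, so inverting that position recovers the diameter, which is the sense in which the q-scale calibration is traceable to length measurements.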
Estimated spatial requirements of the medium- to large-sized ...
African Journals Online (AJOL)
Conservation planning in the Cape Floristic Region (CFR) of South Africa, a recognised world plant diversity hotspot, required information on the estimated spatial requirements of selected medium- to large-sized mammals within each of 102 Broad Habitat Units (BHUs) delineated according to key biophysical parameters.
Estimating population size of Saddle-billed Storks Ephippiorhynchus ...
African Journals Online (AJOL)
The aim of this study was to estimate the population size, with associated confidence limits, using a modified mark–recapture field method. The vehicle survey, conducted shortly after rainfall in the area, did not produce results with known precision under these conditions. A repeat of this census in spring, after the peak ...
Estimating the size of the homeless population in Budapest, Hungary
David, B; Snijders, TAB
In this study we try to estimate the size of the homeless population in Budapest by using two non-standard sampling methods: snowball sampling and the capture-recapture method. Using two methods and three different data sets we are able to compare the methods as well as the results, and we also
Estimation of particle size distribution of nanoparticles from electrical ...
Indian Academy of Sciences (India)
... blockade (CB) phenomena of electrical conduction through a tiny nanoparticle. Considering the ZnO nanocomposites to be spherical, the Coulomb-blockade model of a quantum dot is applied here. The size distribution of the particles is estimated from that model and compared with the results obtained from AFM and XRD analyses.
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Effective single scattering albedo estimation using regional climate model
CSIR Research Space (South Africa)
Tesfaye, M
2011-09-01
Full Text Available In this study, by modifying the optical parameterization of the Regional Climate Model (RegCM), the authors have computed and compared the Effective Single-Scattering Albedo (ESSA), which is representative of the VIS spectral region. The arid, semi...
Webometrics: Some Critical Issues of WWW Size Estimation Methods
Directory of Open Access Journals (Sweden)
Srinivasan Mohana Arunachalam
2018-04-01
Full Text Available The number of webpages on the Internet has increased tremendously over the last two decades; however, only a part of it is indexed by various search engines. This small portion is the indexable web of the Internet and can usually be reached from a search engine. Search engines play a big role in making the World Wide Web accessible to the end user, and how much of the World Wide Web is accessible depends on the size of the search engine's index. Researchers have proposed several ways to estimate this size of the indexable web using search engines, with and without privileged access to the search engine's database. Our report provides a summary of methods used in the last two decades to estimate the size of the World Wide Web, as well as describing how this knowledge can be used in other aspects/tasks concerning the World Wide Web.
Wang, R T; van de Hulst, H C
1995-05-20
A new algorithm for cylindrical Bessel functions that is similar to the one for spherical Bessel functions allows us to compute scattering functions for infinitely long cylinders covering sizes ka = 2πa/λ up to 8000 through the use of only an eight-digit single-precision machine computation. The scattering function and complex extinction coefficient of a finite cylinder that is seen near perpendicular incidence are derived from those of an infinitely long cylinder by the use of Huygens's principle. The result, which contains no arbitrary normalization factor, agrees quite well with analog microwave measurements of both extinction and scattering for such cylinders, even for an aspect ratio p = l/(2a) as low as 2. Rainbows produced by cylinders are similar to those for spherical drops but are brighter and have a lower contrast.
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
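In the simplest, exactly determined case, the linear inversion amounts to solving s = K n for the abundances. A two-frequency, two-size-class sketch follows, with a made-up kernel rather than a real scattering model; the nonlinear method described above additionally treats the kernel parameters themselves as unknowns.

```python
def invert_two_class(K, s):
    """Solve the 2x2 linear system s = K n for abundances (n1, n2).
    K[i][j] is the modelled backscatter of one animal of size class j
    at frequency i; s[i] is the measured volume backscatter at frequency i."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    n1 = (s[0] * K[1][1] - K[0][1] * s[1]) / det
    n2 = (K[0][0] * s[1] - s[0] * K[1][0]) / det
    return n1, n2

# forward-model a known population (10 and 5 animals), then invert it back
K = [[1.0, 2.0], [3.0, 1.0]]            # illustrative kernel values
s = [1.0 * 10 + 2.0 * 5, 3.0 * 10 + 1.0 * 5]
n1, n2 = invert_two_class(K, s)
```

When the kernel is nearly singular (rows almost proportional), `det` approaches zero and small errors in s blow up in n, which is exactly the sensitivity the paper mitigates by examining singular values.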
A POSSIBLE DIVOT IN THE SIZE DISTRIBUTION OF THE KUIPER BELT'S SCATTERING OBJECTS
Energy Technology Data Exchange (ETDEWEB)
Shankman, C.; Gladman, B. J. [Department of Physics and Astronomy, University of British Columbia, 6224 Agriculture Road, Vancouver, BC V6T 1Z1 (Canada); Kaib, N. [Department of Physics and Astronomy, Queens University (Canada); Kavelaars, J. J. [National Research Council of Canada, Victoria, BC V9E 2E7 (Canada); Petit, J. M. [Institut UTINAM, CNRS-Universite de Franche-Comte, Besancon (France)
2013-02-10
Via joint analysis of a calibrated telescopic survey, which found scattering Kuiper Belt objects, and models of their expected orbital distribution, we explore the scattering-object (SO) size distribution. Although for D > 100 km the number of objects quickly rises as diameters decrease, we find a relative lack of smaller objects, ruling out a single power law at greater than 99% confidence. After studying traditional ''knees'' in the size distribution, we explore other formulations and find that, surprisingly, our analysis is consistent with a very sudden decrease (a divot) in the number distribution as diameters decrease below 100 km, which then rises again as a power law. Motivated by other dynamically hot populations and the Centaurs, we argue for a divot size distribution in which the number of smaller objects rises again, as expected via collisional equilibrium. Extrapolation yields enough kilometer-scale SOs to supply the nearby Jupiter-family comets. Our interpretation is that this divot feature is a preserved relic of the size distribution made by planetesimal formation, now ''frozen in'' to portions of the Kuiper Belt sharing a ''hot'' orbital inclination distribution, explaining several puzzles in Kuiper Belt science. Additionally, we show that to match today's SO inclination distribution, the supply source that was scattered outward must already have been vertically heated to of order 10°.
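A divot-shaped differential size distribution can be sketched as a piecewise power law with a sudden drop at the break diameter. The slopes and the contrast factor below are illustrative placeholders, not the paper's fitted values.

```python
def divot_number_density(D_km, q_large=8.0, q_small=2.5,
                         D_break=100.0, contrast=6.0):
    """Differential number density dN/dD (arbitrary units): a steep power
    law above D_break, a sudden drop by a factor 'contrast' at D_break,
    then a shallower collisional-equilibrium slope below it.
    All slope and contrast values here are illustrative assumptions."""
    if D_km >= D_break:
        return (D_km / D_break) ** (-q_large)
    return (D_km / D_break) ** (-q_small) / contrast
```

The discontinuity at D_break is what distinguishes a divot from a knee: a knee changes only the slope, while a divot also drops the number density by a finite factor before the shallower power law resumes.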
Raylman, R. R.; Majewski, S.; Wojcik, R.; Weisenberger, A. G.; Kross, B.; Popov, V.
2001-06-01
Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom. Finally, the effect of object size on image counts and a correction for this effect were explored. The imager used in this study consisted of two PEM detector heads mounted 20 cm apart on a Lorad biopsy apparatus. The results demonstrated that a majority of the accidental coincidence events (~80%) detected by this system were produced by radiotracer uptake in the adipose and muscle tissue of the torso. The presence of accidental coincidence events was shown to reduce lesion detectability. Much of this effect was eliminated by correcting the images with estimates of accidental-coincidence contamination acquired with delayed-coincidence circuitry built into the PEM system. The Compton scatter fraction for this system was ~14%. Utilization of a new scatter correction algorithm reduced the scatter fraction to ~1.5%. Finally, the reduction of count recovery due to object size was measured and a correction to the data applied. Application of correction techniques
Greeley, A.; Kurtz, N. T.; Neumann, T.; Cook, W. B.; Markus, T.
2016-12-01
Photon-counting laser altimeters such as MABEL (Multiple Altimeter Beam Experimental Lidar), a single-photon-counting simulator for ATLAS (Advanced Topographic Laser Altimeter System), use individual photons at visible wavelengths to measure range to target surfaces. ATLAS, the sole instrument on NASA's upcoming ICESat-2 mission, will provide scientists a view of Earth's ice sheets, glaciers, and sea ice with unprecedented detail. Precise calibration of these instruments is needed to understand rapidly changing parameters such as sea ice freeboard, and to measure optical properties of surfaces like snow-covered ice sheets using subsurface-scattered photons. Photons that travel through snow, ice, or water before scattering back to an altimeter receiving system travel farther than photons taking the shortest path between the observatory and the target of interest. These delayed photons produce a negative elevation bias relative to photons scattered directly off these surfaces. We use laboratory measurements of snow surfaces with a flight-tested laser altimeter (MABEL), and Monte Carlo simulations of backscattered photons from snow, to estimate elevation biases from subsurface-scattered photons. We also use these techniques to demonstrate the ability to retrieve snow surface properties like snow grain size.
An experimental study of asphaltene particle sizes in n-heptane-toluene mixtures by light scattering
Directory of Open Access Journals (Sweden)
Rajagopal K.
2004-01-01
Full Text Available The particle size of asphaltene flocculates has been the subject of many recent studies because of its importance in the control of deposition in petroleum production and processing. We measured the size of asphaltene flocculates in toluene and toluene–n-heptane mixtures using the light-scattering technique. The asphaltenes had been extracted from Brazilian oil from the Campos Basin, according to British Standards Method IP-143/82. The asphaltene concentration in solution ranged between 10⁻⁶ g/ml and 10⁻⁷ g/ml. Sizes were measured over a period of about 10000 minutes at a constant temperature of 20°C. We found that the average size of the particles remained constant with time and increased with the amount of n-heptane. The correlation obtained for size with concentration will be useful in asphaltene precipitation models.
International Nuclear Information System (INIS)
Haldipur, P.; Margetan, F. J.; Thompson, R. B.
2006-01-01
Single-crystal elastic stiffness constants are important input parameters for many calculations in materials science. There are well-established methods to measure these constants using single-crystal specimens, but such specimens are not always readily available. The ultrasonic properties of metal polycrystals, such as velocity, attenuation, and backscattered grain noise characteristics, depend in part on the single-crystal elastic constants. In this work we consider the estimation of elastic constants from ultrasonic (UT) measurements and grain-sizing data. We confine ourselves to a class of particularly simple polycrystalline microstructures, found in some jet-engine nickel alloys, which are single-phase, cubic, equiaxed, and untextured. In past work we described a method to estimate the single-crystal elastic constants from measured ultrasonic velocity and attenuation data accompanied by metallographic analysis of grain size. However, that methodology assumes that all attenuation is due to grain scattering, and thus is not valid if appreciable absorption is present. In this work we describe an alternative approach which uses backscattered grain noise data in place of attenuation data. Efforts to validate the method using a pure copper specimen are discussed, and new results for two jet-engine nickel alloys are presented.
An algorithm for 3D target scatterer feature estimation from sparse SAR apertures
Jackson, Julie Ann; Moses, Randolph L.
2009-05-01
We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.
Cramer, Robert Grewelle.
1982-01-01
Approved for public release; distribution unlimited. A dual-beam apparatus was developed which simultaneously measured particle size (D32) at the entrance and exit of an exhaust nozzle of a small solid-propellant rocket motor. The diameters were determined using measurements of diffractively scattered laser power spectra. The apparatus was calibrated using spherical glass beads and aluminum oxide powder. Measurements were successfully made at both locations. Because of...
International Nuclear Information System (INIS)
Mickael, M.; Gardner, R.P.; Verghese, K.
1988-01-01
An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdfs; namely, isotropic scattering in the lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdfs to enable analytical evaluation of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using both the new approach and the previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude.
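The simplest of the four cases above, isotropic scattering in the lab system, can be checked with a short Monte Carlo sketch (illustrative only, not the paper's analytical transformation method): the probability of scattering into a cone of half-angle θmax is the subtended solid-angle fraction (1 − cos θmax)/2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
theta_max = np.deg2rad(30.0)       # half-angle subtended by the detector

# Isotropic scattering in the lab frame: cos(theta) uniform on [-1, 1]
cos_theta = rng.uniform(-1.0, 1.0, n)

# Monte Carlo estimate of the probability of hitting the detector cone
p_mc = np.mean(cos_theta > np.cos(theta_max))

# Analytic solid-angle fraction: Omega / (4 pi) = (1 - cos(theta_max)) / 2
p_exact = 0.5 * (1.0 - np.cos(theta_max))
print(p_mc, p_exact)
```

The analytic value is what an integral of the isotropic pdf over the detector's limiting angles yields; the Monte Carlo sample should agree to a few parts in a thousand at this sample size.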
Estimation of sample size and testing power (Part 3).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2011-12-01
This article introduces the definition and sample size estimation of three special tests (namely, the non-inferiority test, equivalence test and superiority test) for qualitative data in a design of one factor with two levels having a binary response variable. A non-inferiority test refers to a research design whose objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. An equivalence test refers to a research design whose objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. A superiority test refers to a research design whose objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. Using specific examples, this article introduces the sample size estimation formulas for the three special tests and their realization in SAS in detail.
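As a hedged illustration of the kind of formula involved (the article's own SAS examples are not reproduced here), a standard normal-approximation per-group sample size for a non-inferiority comparison of two proportions can be sketched as:

```python
import math
from scipy.stats import norm

def n_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.80):
    """Per-group sample size for a non-inferiority test of two proportions
    (normal approximation). `margin` > 0 is the largest clinically
    acceptable deficit of the experimental treatment vs. the control."""
    z_a = norm.ppf(1 - alpha)      # one-sided alpha
    z_b = norm.ppf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    delta = p_t - p_c + margin     # distance from the null boundary
    return math.ceil((z_a + z_b) ** 2 * var / delta ** 2)

# Both arms at 85% response, 10% non-inferiority margin
print(n_noninferiority(0.85, 0.85, 0.10))
```

The inputs (85% response, 10% margin, one-sided α = 0.025, 80% power) are illustrative values, not taken from the article.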
Estimation of sample size and testing power (part 6).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-03-01
The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).
Estimation of LOCA break size using cascaded Fuzzy neural networks
Energy Technology Data Exchange (ETDEWEB)
Choi, Geon Pil; Yoo, Kwae Hwan; Back, Ju Hyun; Na, Man Gyun [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)
2017-04-15
Operators of nuclear power plants may not be equipped with sufficient information during a loss-of-coolant accident (LOCA), which can be fatal, or they may not have sufficient time to analyze the information they do have, even if this information is adequate. It is not easy to predict the progression of LOCAs in nuclear power plants. Therefore, accurate information on the LOCA break position and size should be provided to efficiently manage the accident. In this paper, the LOCA break size is predicted using a cascaded fuzzy neural network (CFNN) model. The input data of the CFNN model are the time-integrated values of each measurement signal for an initial short time interval after a reactor scram. The training of the CFNN model is accomplished by a hybrid method combining a genetic algorithm and a least squares method. As a result, the LOCA break size is estimated accurately by the proposed CFNN model.
Tay, Benjamin Chia-Meng; Chow, Tzu-Hao; Ng, Beng-Koon; Loh, Thomas Kwok-Seng
2012-09-01
This study investigates the autocorrelation bandwidths of dual-window (DW) optical coherence tomography (OCT) k-space scattering profiles of different-sized microspheres and their correlation to scatterer size. A dual-bandwidth spectroscopic metric, defined as the ratio of the 10% to 90% autocorrelation bandwidths, is found to change monotonically with microsphere size and gives the best contrast enhancement for scatterer size differentiation in the resulting spectroscopic image. A simulation model supported the experimental results and revealed a tradeoff between the smallest detectable scatterer size and the maximum scatterer size in the linear range of the dual-window dual-bandwidth (DWDB) metric, which depends on the choice of the light source optical bandwidth. Spectroscopic OCT (SOCT) images of microspheres and tonsil tissue samples based on the proposed DWDB metric showed clear differentiation between different-sized scatterers as compared to those derived from conventional short-time Fourier transform metrics. The DWDB metric significantly improves the contrast in SOCT imaging and can aid the visualization and identification of dissimilar scatterer sizes in a sample. Potential applications include the early detection of cell nuclear changes in tissue carcinogenesis, the monitoring of healing tendons, and cell proliferation in tissue scaffolds.
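The 10%/90% autocorrelation-bandwidth ratio itself is straightforward to compute; the following sketch uses a synthetic Gaussian k-space profile (an assumption for illustration, not the paper's measured microsphere data):

```python
import numpy as np

def bandwidth(ac, level):
    """Full width (in samples) of a normalized autocorrelation at `level`."""
    above = np.flatnonzero(ac >= level)
    return above[-1] - above[0] + 1

# Toy k-space scattering profile: a Gaussian envelope stands in for the
# backscatter spectrum extracted by the dual-window processing.
k = np.linspace(-1.0, 1.0, 1001)
profile = np.exp(-(k / 0.3) ** 2)

ac = np.correlate(profile, profile, mode="full")
ac = ac / ac.max()

# Dual-bandwidth metric: 10% width over 90% width
dwdb = bandwidth(ac, 0.10) / bandwidth(ac, 0.90)
print(dwdb)
```

For any unimodal autocorrelation the ratio exceeds 1; in the paper it is this quantity, tracked across scatterer sizes, that varies monotonically with microsphere diameter.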
Reliability of fish size estimates obtained from multibeam imaging sonar
Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.
2013-01-01
Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-squares mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of
Zhan, Hanyu; Voelz, David G.
2016-12-01
The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.
Estimating minimum polycrystalline aggregate size for macroscopic material homogeneity
International Nuclear Information System (INIS)
Kovac, M.; Simonovski, I.; Cizelj, L.
2002-01-01
During severe accidents, the pressure boundary of the reactor coolant system can be subjected to extreme loadings, which might cause failure. Reliable estimation of the extreme deformations can be crucial to determining the consequences of severe accidents. An important drawback of classical continuum mechanics is its idealization of the inhomogeneous microstructure of materials. Classical continuum mechanics therefore cannot accurately predict the differences between the measured responses of specimens that are geometrically similar but different in size (the size effect). A numerical approach, which models elastic-plastic behavior at the mesoscopic level, is proposed to estimate the minimum size of a polycrystalline aggregate above which it can be considered macroscopically homogeneous. The main idea is to divide the continuum into a set of sub-continua. Analysis of a macroscopic element is divided into modeling the random grain structure (using Voronoi tessellation and random orientation of the crystal lattice) and calculation of the strain/stress field. The finite element method is used to obtain numerical solutions of the strain and stress fields. The analysis is limited to 2D models. (author)
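The first modeling step, generating a random 2D grain structure, can be sketched with a Voronoi tessellation of random seed points plus a random in-plane lattice orientation per grain (an illustrative sketch under assumed parameters; the paper's finite element stress/strain step is omitted):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)

# Random seed points in the unit square; each Voronoi cell stands in
# for one grain of the polycrystalline aggregate.
n_grains = 50
seeds = rng.uniform(0.0, 1.0, size=(n_grains, 2))
vor = Voronoi(seeds)

# Random in-plane crystal-lattice orientation per grain (degrees);
# cubic symmetry makes orientations equivalent modulo 90 degrees.
orientations = rng.uniform(0.0, 90.0, size=n_grains)

print(len(vor.point_region), orientations.min(), orientations.max())
```

Each cell's vertices (`vor.regions`, `vor.vertices`) would then be meshed for the elastic-plastic finite element calculation described in the abstract.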
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: The late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul; may it rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth size discrepancies. The aim was to determine any difference in tooth size discrepancy in the anterior as well as the overall ratio in different malocclusions, and to compare the results with Bolton's study. After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were also subjected to statistical analysis. The results show that the means and standard deviations of ideal occlusion cases are comparable with those of Bolton; however, when the means and standard deviations of the malocclusion groups are compared with those of Bolton, the standard deviations are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
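A Bolton ratio is simply the summed mandibular mesiodistal tooth widths expressed as a percentage of the summed maxillary widths (Bolton's ideal anterior ratio is about 77.2%). A minimal sketch with made-up tooth widths:

```python
def bolton_ratio(mandibular_widths, maxillary_widths):
    """Bolton ratio: summed mandibular mesiodistal widths as a
    percentage of the summed maxillary widths."""
    return 100.0 * sum(mandibular_widths) / sum(maxillary_widths)

# Illustrative (hypothetical) anterior mesiodistal widths in mm,
# six teeth per arch, canine to canine.
mand = [5.0, 5.0, 5.5, 6.0, 6.5, 6.5]
maxi = [6.5, 6.5, 7.0, 8.0, 8.5, 8.5]
print(round(bolton_ratio(mand, maxi), 1))
```

The overall ratio uses twelve teeth per arch instead of six; comparison of the computed ratio against Bolton's mean and standard deviation is what flags a tooth size discrepancy.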
On the estimation of ice thickness from scattering observations
Williams, T. D.; Squire, V. A.
2010-04-01
This paper is inspired by the proposition that it may be possible to extract descriptive physical parameters, in particular the ice thickness, of a sea-ice field from ocean wave information. The motivation is that mathematical theory describing wave propagation in such media has reached a point where the inherent heterogeneity, expressed as pressure ridge keels and sails, leads, thickness variations and changes of material property and draught, can be fully assimilated exactly or through approximations whose limitations are understood. On the basis that leads have the major wave-scattering effect for most sea ice [Williams, T.D., Squire, V.A., 2004. Oblique scattering of plane flexural-gravity waves by heterogeneities in sea ice. Proc. R. Soc. Lond. Ser. A 460 (2052), 3469-3497], a model two-dimensional sea-ice sheet composed of a large number of such features, randomly dispersed, is constructed. The wide-spacing approximation is used to predict how wave trains of different period will be affected, after first establishing that this produces results that are very close to the exact solution. Like Kohout and Meylan [Kohout, A.L., Meylan, M.H., 2008. An elastic plate model for wave attenuation and ice floe breaking in the marginal ice zone. J. Geophys. Res. 113, C09016, doi:10.1029/2007JC004434], we find that on average the magnitude of a wave transmitted by a field of leads decays exponentially with the number of leads. Then, by fitting a curve based on this assumption to the data, the thickness of the ice sheet is obtained. The attenuation coefficient can always be calculated numerically by ensemble averaging, but in some cases more rapidly computed approximations work extremely well. Moreover, it is found that the underlying thickness can be determined to good accuracy by the method as long as Archimedean draught is correctly provided for, suggesting that waves can indeed be effective as a remote sensing agent to measure ice thickness in areas where pressure ridges
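The fitting step described above, recovering an attenuation coefficient from the exponential decay of transmitted wave magnitude with the number of leads, can be sketched on synthetic data (the decay rate and noise level here are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: transmitted magnitude |T| decays exponentially with
# the number of leads N, |T| = exp(-alpha * N), plus multiplicative noise
# standing in for the scatter across random lead realizations.
alpha_true = 0.05
N = np.arange(1, 41)
T = np.exp(-alpha_true * N) * rng.lognormal(0.0, 0.02, N.size)

# Least-squares fit of log|T| = -alpha * N (a line through the origin)
alpha_fit = -np.sum(N * np.log(T)) / np.sum(N * N)
print(alpha_fit)
```

In the paper the fitted attenuation coefficient, combined with the correct Archimedean draught, is what maps back to an ice thickness estimate.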
Stimulated Brillouin scattering of laser in semiconductor plasma embedded with nano-sized grains
Energy Technology Data Exchange (ETDEWEB)
Sharma, Giriraj, E-mail: grsharma@gmail.com [SRJ Government Girls’ College, Neemuch (M P) (India); Dad, R. C. [Government P G College, Mandsaur (M P) (India); Ghosh, S. [School of Studies in Physics, Vikram University, Ujjain, (M P) (India)
2015-07-31
A high power laser propagating through semiconductor plasma undergoes stimulated Brillouin scattering (SBS) from electrostrictively generated acoustic perturbations. We consider nano-sized grains (NSGs) embedded in the semiconductor plasma by means of ion implantation. The NSGs are bombarded by the surrounding plasma particles and collect electrons. Considering a negative charge on the NSGs, we present an analytical study of the effects of NSGs on the threshold field for the onset of SBS and on the Brillouin gain of the generated scattered mode. It is found that as the charge on the NSGs builds up, the Brillouin gain is significantly raised and the threshold pump field for the onset of the SBS process is lowered.
Local scattering property scales flow speed estimation in laser speckle contrast imaging
International Nuclear Information System (INIS)
Miao, Peng; Chao, Zhen; Feng, Shihan; Ji, Yuanyuan; Yu, Hang; Thakor, Nitish V; Li, Nan
2015-01-01
Laser speckle contrast imaging (LSCI) has been widely used for in vivo blood flow imaging. However, the effect of the local scattering property (scattering coefficient µs) on blood flow speed estimation has not been well investigated. In this study, this effect was quantified and incorporated into the relation between the speckle autocorrelation time τc and the flow speed v, based on simulated flow experiments. For in vivo blood flow imaging, an improved estimation strategy was developed to eliminate the estimation bias due to the inhomogeneous distribution of the scattering property. Compared to traditional LSCI, the new estimation method significantly suppressed imaging noise and improved the imaging contrast of vasculature. Furthermore, the new method successfully captured the blood flow changes and vascular constriction patterns in the rat cerebral cortex from normothermia to mild and moderate hypothermia. (letter)
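The basic LSCI quantity, the local speckle contrast K = σ/⟨I⟩ computed over small windows, can be sketched as follows (a generic illustration; the paper's scattering-coefficient correction is not reproduced here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = std / mean over win x win windows."""
    w = sliding_window_view(img, (win, win))
    return w.std(axis=(-2, -1)) / w.mean(axis=(-2, -1))

rng = np.random.default_rng(3)

# Fully developed static speckle has exponentially distributed intensity,
# for which K is near 1; flow blurs the speckle over the exposure time
# and drives K toward 0 as speed increases.
static = rng.exponential(1.0, (64, 64))
K = speckle_contrast(static)
print(K.mean())
```

Flow speed estimation then proceeds through the autocorrelation time τc inferred from K; it is this K-to-v mapping that the paper corrects for the local scattering coefficient.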
Electron-phonon scattering in indium from r.f. size effect measurements
International Nuclear Information System (INIS)
Hoff, A.B.M.
1977-01-01
The anisotropy of the electron-phonon collision frequency on the second- and third-zone Fermi surfaces of indium has been determined from the temperature dependence of radiofrequency size effect (R.F.S.E.) line amplitudes. The orbitally averaged scattering rates turn out to vary with temperature T according to a T³ dependence over the entire Fermi surface, except for orbits on the hole surface close to the (100) and (001) symmetry planes. The anomalous temperature dependences found in the experiments could be attributed to the special circumstances under which the R.F.S.E. was observed. The influences of both the scattering effectiveness and the multiple turns of the electrons on the observed temperature dependence are discussed extensively. For a large number of extreme orbits on the second- and third-zone Fermi surfaces, the average scattering rates were measured. In order to obtain a functional expression for the local collision frequency over the entire Fermi surface, an inversion technique was used. As a result, it was found that the anisotropy of the collision frequency over the second-zone surface is quite high (1:20), whereas the anisotropy over the third-zone surface is rather small (<20%). Further, the variation of the scattering rate around the [111]-point on the hole surface could be confirmed by the results of limiting-point measurements. The experimental scattering rates at several points on the Fermi surface were compared with theoretical values obtained from a simple two-OPW model calculation. The calculated anisotropy agrees roughly with the experimental one, although locally the actual values can differ by a factor of 2 or more.
Second order statistics of bilinear forms of robust scatter estimators
Kammoun, Abla; Couillet, Romain; Pascal, Fré dé ric
2015-01-01
In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of the covariance matrix. We show that this result can be leveraged to improve the design of robust detection methods. As an example, we provide an improved
NDE errors and their propagation in sizing and growth estimates
International Nuclear Information System (INIS)
Horn, D.; Obrutsky, L.; Lakhan, R.
2009-01-01
The accuracy attributed to eddy current flaw sizing determines the amount of conservatism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
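The central point, that correlated errors partially cancel in a difference, follows from the variance of a difference of correlated quantities, Var(d) = σ₁² + σ₂² − 2ρσ₁σ₂. A minimal sketch with illustrative numbers (not the paper's field data):

```python
import math

def growth_uncertainty(s1, s2, rho):
    """Standard deviation of the difference of two flaw-depth measurements
    whose errors have standard deviations s1, s2 and correlation rho."""
    return math.sqrt(s1 ** 2 + s2 ** 2 - 2.0 * rho * s1 * s2)

# Correlated errors (e.g., shared probe or calibration effects) shrink
# the growth uncertainty relative to the uncorrelated assumption.
print(growth_uncertainty(0.10, 0.10, 0.0))   # uncorrelated
print(growth_uncertainty(0.10, 0.10, 0.8))   # strongly correlated
```

With ρ = 0.8 the growth uncertainty drops to less than half the uncorrelated value, which is the qualitative effect the case study quantifies per error source.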
Estimating Functions of Distributions Defined over Spaces of Unknown Size
Directory of Open Access Journals (Sweden)
David H. Wolpert
2013-10-01
Full Text Available We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior's concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple "Irrelevance of Unseen Variables" (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed in multiple ways, in terms of different event spaces, e.g., mutual information. With the hyperpriors (implicitly) used in earlier work, different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations, like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.
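For a fixed event space size m and concentration c, the posterior expected entropy under a Dirichlet posterior has a closed form in digamma functions: E[H] = ψ(A+1) − Σᵢ (αᵢ/A) ψ(αᵢ+1), with αᵢ = nᵢ + c and A = Σᵢ αᵢ. A sketch of this fixed-(c, m) building block (the paper's full hyperprior treatment averages over c and m and is not reproduced here):

```python
import numpy as np
from scipy.special import digamma

def expected_entropy(counts, c=1.0):
    """Posterior expected Shannon entropy (nats) of p ~ Dirichlet with
    parameters alpha_i = n_i + c (observed counts plus concentration c)."""
    alpha = np.asarray(counts, dtype=float) + c
    A = alpha.sum()
    return digamma(A + 1.0) - np.sum((alpha / A) * digamma(alpha + 1.0))

# Hypothetical event counts over an m = 4 event space
counts = [10, 3, 0, 1]
print(expected_entropy(counts))
```

The result is bounded by log m regardless of the counts; the hyperprior-averaged estimator in the paper mixes such terms over the posterior on (c, m).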
Bound on the estimation grid size for sparse reconstruction in direction of arrival estimation
Coutiño Minguez, M.A.; Pribic, R; Leus, G.J.T.
2016-01-01
A bound for sparse reconstruction involving both the signal-to-noise ratio (SNR) and the estimation grid size is presented. The bound is illustrated for the case of a uniform linear array (ULA). By reducing the number of possible sparse vectors present in the feasible set of a constrained ℓ1-norm
Pollock, Jacob F.; Ashton, Randolph S.; Rode, Nikhil A.; Schaffer, David V.; Healy, Kevin E.
2013-01-01
The degree of substitution and valency of bioconjugate reaction products are often poorly judged or require multiple time- and product-consuming chemical characterization methods. These aspects become critical when analyzing and optimizing the potency of costly polyvalent bioactive conjugates. In this study, size-exclusion chromatography with multi-angle laser light scattering was paired with refractive index detection and ultraviolet spectroscopy (SEC-MALS-RI-UV) to characterize the reaction efficiency, degree of substitution, and valency of the products of conjugation of either peptides or proteins to a biopolymer scaffold, i.e., hyaluronic acid (HyA). Molecular characterization was more complete than estimates from a protein quantification assay, and exploitation of this method led to more accurate deduction of the molecular structures of polymer bioconjugates. Information obtained using this technique can improve macromolecular engineering design principles and improve understanding of multivalent macromolecular interactions in biological systems. PMID:22794081
Energy Technology Data Exchange (ETDEWEB)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected and constant-scatter-corrected reconstructions, as well as a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a
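The core fitting idea, a least-squares fit of a frequency-limited sum of sines and cosines to noisy, sparsely sampled scatter estimates followed by interpolation, can be sketched in one dimension (illustrative signal and harmonic count; not the authors' 2D detector fit):

```python
import numpy as np

rng = np.random.default_rng(4)

def basis(x, n_harm=3):
    """Design matrix: constant term plus the first n_harm harmonics."""
    cols = [np.ones_like(x)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
    return np.column_stack(cols)

# Sparse, noisy samples of a smooth "scatter" profile (MC-like noise)
x = np.linspace(0.0, 1.0, 25)
truth = 1.0 + 0.5 * np.cos(2 * np.pi * x) + 0.2 * np.sin(4 * np.pi * x)
y = truth + rng.normal(0.0, 0.02, x.size)

# Least-squares fit of the frequency-limited series
coef, *_ = np.linalg.lstsq(basis(x), y, rcond=None)

# Interpolate the fitted series on a dense grid (all projection angles)
x_dense = np.linspace(0.0, 1.0, 1000)
fit = basis(x_dense) @ coef
print(fit.shape)
```

Because scatter distributions are spatially smooth, a low-order harmonic basis suppresses the MC noise while the dense evaluation supplies the scatter estimate at every pixel and angle, mirroring the role of S_F in the abstract.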
Energy Technology Data Exchange (ETDEWEB)
Olk, Phillip
2008-07-01
This thesis examines and exploits the optical properties of pairs of MNPs. Pairs of MNPs offer two further parameters not available with single MNPs, which both affect the local optical fields in their vicinity: the distance between them, and their relative orientation with respect to the polarisation of the excitation light. These properties are the subject of three chapters: One section examines the distance-dependent and orientation-sensitive scattering cross section (SCS) of two equally sized MNPs. Both near- and far-field interactions affect the spectral position and spectral width of the SCS. Far-field coupling affects the SCS even in such a way that a two-particle system may show both a blue- and redshifted SCS, depending only on the distance between the two MNPs. The maximum distance for this effect is the coherence length of the illumination source - a fact of importance for SCS-based experiments using laser sources. Another part of this thesis examines the near-field between two MNPs and the dependence of the locally enhanced field on the relative particle orientation with respect to the polarisation of the excitation light. To attain a figure of merit, the intensity of fluorescence light from dye molecules in the surrounding medium was measured at various directions of polarisation. The field enhancement was turned into fluorescence enhancement, even providing a means for sensing the presence of very small MNPs of 12 nm in diameter. In order to quantify the near-field experimentally, a different technique is devised in a third section of this thesis - scanning particle-enhanced Raman microscopy (SPRM). This device comprises a scanning probe carrying an MNP which in turn is coated with a molecule of known Raman signature. By manoeuvring this outfitted MNP into the vicinity of an illuminated second MNP and by measuring the Raman signal intensity, a spatial mapping of the field enhancement was possible. (orig.)
Rossi, Vincent M; Jacques, Steven L
2016-06-13
Goniometry and optical scatter imaging have been used for optical determination of particle size based upon optical scattering. Polystyrene microspheres in suspension serve as a standard for system validation purposes. The design and calibration of a digital Fourier holographic microscope (DFHM) are reported. Of crucial importance is the appropriate scaling of scattering angle space in the conjugate Fourier plane; this calibration process is described in detail. Spatial filtering of the acquired digital hologram to use photons scattered within a restricted angular range produces an image. A pair of images is produced: one using photons narrowly scattered within 8 - 15° (LNA), and one using photons broadly scattered within 8 - 39° (HNA). An image based on the ratio of these two images, OSIR = HNA/LNA, following Boustany et al. (2002), yields a 2D Optical Scatter Image (OSI) whose contrast is based on the angular dependence of photon scattering and is sensitive to the microsphere size, especially in the 0.5 - 1.0 µm range. Goniometric results are also given for polystyrene microspheres in suspension as additional proof of principle for particle sizing via the DFHM.
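The OSIR computation reduces to masking the Fourier plane twice and taking a ratio of the two band-passed intensity images. The sketch below assumes a simple linear pixel-to-angle mapping (`deg_per_px`), which in the real DFHM comes from the calibration the abstract describes; the field and scaling here are synthetic.

```python
import numpy as np

def angular_mask(shape, deg_per_px, lo_deg, hi_deg):
    """Annular Fourier-plane mask selecting scattering angles in [lo, hi] degrees."""
    ny, nx = shape
    y, x = np.indices(shape)
    r = np.hypot(y - ny // 2, x - nx // 2) * deg_per_px
    return (r >= lo_deg) & (r <= hi_deg)

def osi_ratio(field, deg_per_px):
    """OSIR = HNA/LNA intensity images from one complex field (Boustany-style)."""
    F = np.fft.fftshift(np.fft.fft2(field))
    def band_image(lo, hi):
        masked = F * angular_mask(F.shape, deg_per_px, lo, hi)
        return np.abs(np.fft.ifft2(np.fft.ifftshift(masked))) ** 2
    lna = band_image(8, 15)      # narrowly scattered photons
    hna = band_image(8, 39)      # broadly scattered photons
    return hna / (lna + 1e-12)   # guard against division by zero

rng = np.random.default_rng(1)
field = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
osir = osi_ratio(field, deg_per_px=1.5)
print(osir.shape)
```

Smaller scatterers push relatively more light into wide angles, so regions of small particles show up as higher OSIR values in the resulting map.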
Laser photogrammetry improves size and demographic estimates for whale sharks
Richardson, Anthony J.; Prebble, Clare E.M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Cliff, Geremy; Wintner, Sabine P.; Pierce, Simon J.
2015-01-01
Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks than visual estimates allow, enabling improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420–990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347–1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776
Benito Lopez, Pablo; Radhakrishnan, Hema; Nourrit, Vincent
2015-02-01
To determine whether an unmodified commercial wavefront aberrometer (irx3) can be used to estimate forward light scattering and how this assessment matches estimations obtained from the C-Quant straylight meter. University of Manchester, Manchester, United Kingdom. Prospective comparative study. Measurements obtained with a straylight meter and with Shack-Hartmann spot patterns using a previously reported metric were compared. The method was first validated in a model eye by spraying an aerosol over 4 contact lenses to generate various levels of scattering. Measurements with both methods were subsequently obtained in healthy eyes. The study comprised 33 healthy participants (mean age 38.9 years ± 13.1 [SD]). A good correlation was observed between the density of droplets over the contact lenses and the objective scatter value extracted from the hartmanngrams (r = 0.972). No significant correlation was found between the values from the straylight meter and the metric derived from the Shack-Hartmann method (r = 0.133, P = .460). The hartmanngrams provided a valid objective measurement of the light scatter in a model eye; the measurements in human eyes were not significantly correlated with those of the light scatter meter. The straylight meter assesses large-angle scattering, while the Shack-Hartmann method collates information from a narrow angle around the center of the point-spread function; this could be the reason for the difference in measurements. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Market Potential Estimation of a Small and Medium Size Reactor
International Nuclear Information System (INIS)
Oh, K. B.; Yang, M. H.; Lee, M. K.; Chung, W. S.; Kim, H. J.; Kim, S. S.; Lee, B. W.; Ryu, J. S.; Juhn, P. E.
2004-12-01
Technically, nuclear reactors, which produce energy in the form of heat, can supply energy products other than electricity, including district heat, process heat, potable water, etc. At present, non-electrical applications of nuclear energy are very limited, apart from military nuclear-powered ships and energy use in isolated areas. However, future global environmental pressures and energy resource scarcity could promote significant industrial-scale use of non-electrical applications. Considering this situation, this report analyzed the following: (1) Worldwide non-electrical applications of small and medium sized nuclear reactors - survey of the current technical applications, - survey of the global market potential for various applications; (2) Technical cooperation potential in several countries - identify necessary conditions for nuclear cooperation, - select candidate countries: Morocco, UAE, Indonesia, Chile and Vietnam, - survey of the energy and water situation, - survey of the legal and international regime infrastructure
Cornelissen, Katri; Bester, Andre; Cairns, Paul; Tovee, Martin; Cornelissen, Piers
2015-01-01
In this cross-sectional study, we investigated the influence of personal BMI on body size estimation in 42 women who have symptoms of anorexia (referred to henceforth as anorexia spectrum disorders, ANSD), and 100 healthy controls. Low BMI control participants over-estimate their size and high BMI controls under-estimate, a pattern which is predicted by a perceptual phenomenon called contraction bias. In addition, control participants' sensitivity to size change declines as their BMI increases...
Estimating the Size and Impact of the Ecological Restoration Economy.
Directory of Open Access Journals (Sweden)
Todd BenDor
Full Text Available Domestic public debate continues over the economic impacts of environmental regulations that require environmental restoration. This debate has occurred in the absence of broad-scale empirical research on economic output and employment resulting from environmental restoration, restoration-related conservation, and mitigation actions - the activities that are part of what we term the "restoration economy." In this article, we provide a high-level accounting of the size and scope of the restoration economy in terms of employment, value added, and overall economic output on a national scale. We conducted a national survey of businesses that participate in restoration work in order to estimate the total sales and number of jobs directly associated with the restoration economy, and to provide a profile of this nascent sector in terms of type of restoration work, industrial classification, workforce needs, and growth potential. We use survey results as inputs into a national input-output model (IMPLAN 3.1) in order to estimate the indirect and induced economic impacts of restoration activities. Based on this analysis we conclude that the domestic ecological restoration sector directly employs ~ 126,000 workers and generates ~ $9.5 billion in economic output (sales) annually. This activity supports an additional 95,000 jobs and $15 billion in economic output through indirect (business-to-business) linkages and increased household spending.
International Nuclear Information System (INIS)
Broome, J.
1965-11-01
The programme SCATTER is a KDF9 programme in the Egtran dialect of Fortran to generate normalized angular distributions for elastically scattered neutrons from data input as the coefficients of a Legendre polynomial series, or from differential cross-section data. Also, differential cross-section data may be analysed to produce Legendre polynomial coefficients. Output on cards punched in the format of the U.K.A.E.A. Nuclear Data Library is optional. (author)
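The forward task SCATTER performs — turning Legendre coefficients into a normalized angular distribution — is straightforward to sketch in a modern language. The coefficients below are hypothetical illustration values, not data from the Nuclear Data Library.

```python
import numpy as np
from numpy.polynomial import legendre

def normalized_angular_distribution(coeffs, mu):
    """Evaluate f(mu) = sum_l a_l P_l(mu), normalized to unit integral over mu."""
    f = legendre.legval(mu, coeffs)
    # trapezoidal rule over mu = cos(theta) in [-1, 1]
    norm = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(mu))
    return f / norm

mu = np.linspace(-1.0, 1.0, 201)        # mu = cos(theta)
# hypothetical coefficients for a mildly forward-peaked elastic distribution
f = normalized_angular_distribution([1.0, 0.5, 0.1], mu)
print(f.shape)
```

The inverse task (fitting Legendre coefficients to differential cross-section data) is a linear least-squares problem in the same basis.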
First genome size estimations for some eudicot families and genera
Directory of Open Access Journals (Sweden)
Garcia, S.
2010-12-01
Full Text Available Genome size diversity in angiosperms varies roughly 2400-fold, although approximately 45% of angiosperm families lack a single genome size estimation, and therefore, this range could be enlarged. To contribute completing family and genera representation, DNA C-Values are here provided for 19 species from 16 eudicot families, including first values for 6 families, 14 genera and 17 species. The sample of species studied is very diverse, including herbs, weeds, vines, shrubs and trees. Data are discussed regarding previous genome size estimates of closely related species or genera, if any, their chromosome number, growth form or invasive behaviour. The present research contributes approximately 1.5% new values for previously unreported angiosperm families, being the current coverage around 55% of angiosperm families, according to the Plant DNA C-Values Database.
Genome size diversity in angiosperms is very wide, the largest value being approximately 2400 times the smallest. However, around 45% of families lack even a single estimate, so the real range could be enlarged. To help complete the representation of angiosperm families and genera, this study contributes C-values for 19 species from 16 eudicot families, including the first values for 6 families, 14 genera and 17 species. The sample studied is very diverse and includes herbs, weeds, vines, shrubs and trees. The results are discussed in relation to previous genome size estimates for closely related species or genera, chromosome number, growth form, and the invasive behaviour of the species analysed. The present study contributes approximately 1.5% of new values for previously unstudied angiosperm families, for which coverage currently stands at 55%, according to the Plant DNA C-values database.
Estimating the size of the solution space of metabolic networks
Directory of Open Access Journals (Sweden)
Mulet Roberto
2008-05-01
Full Text Available Abstract Background Cellular metabolism is one of the most investigated systems of biological interactions. While the topological nature of individual reactions and pathways in the network is quite well understood, there is still a lack of comprehension regarding the global functional behavior of the system. In the last few years flux-balance analysis (FBA) has been the most successful and widely used technique for studying metabolism at the system level. This method strongly relies on the hypothesis that the organism maximizes an objective function. However, only under very specific biological conditions (e.g. maximization of biomass for E. coli in rich nutrient medium) does the cell seem to obey such an optimization law. A more refined analysis not assuming extremization remains an elusive task for large metabolic systems due to algorithmic limitations. Results In this work we propose a novel algorithmic strategy that provides an efficient characterization of the whole set of stable fluxes compatible with the metabolic constraints. Using a technique derived from the fields of statistical physics and information theory we designed a message-passing algorithm to estimate the size of the affine space containing all possible steady-state flux distributions of metabolic networks. The algorithm, based on the well known Bethe approximation, can be used to approximately compute the volume of a non full-dimensional convex polytope in high dimensions. We first compare the accuracy of the predictions with an exact algorithm on small random metabolic networks. We also verify that the predictions of the algorithm match closely those of Monte Carlo based methods in the case of the Red Blood Cell metabolic network. Then we test the effect of gene knock-outs on the size of the solution space in the case of E. coli central metabolism. Finally we analyze the statistical properties of the average fluxes of the reactions in the E. coli metabolic network. Conclusion We propose a
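The "size of the solution space" notion the abstract uses — the volume of the flux polytope defined by linear constraints — can be illustrated, in low dimensions only, with crude rejection sampling. This is a toy stand-in for the Monte Carlo baseline the authors compare against, not their message-passing algorithm, and the toy "network" below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def solution_space_fraction(A, b, n_samples=200_000):
    """Monte Carlo estimate of the fraction of the unit box [0,1]^n satisfying A v <= b.

    Rejection sampling like this collapses in high dimensions, which is
    exactly why Bethe-approximation message passing is needed at scale.
    """
    n = A.shape[1]
    v = rng.random((n_samples, n))           # uniform flux samples in the box
    ok = np.all(v @ A.T <= b, axis=1)        # satisfy all linear constraints
    return ok.mean()

# toy 'network': two fluxes constrained by v1 + v2 <= 1 (a triangle, area 0.5)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(round(solution_space_fraction(A, b), 2))
```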
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
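The basic multiplier estimator N = M / P and its sensitivity to the survey proportion can be sketched with a delta-method confidence interval. The numbers below (objects distributed, proportion, sample size, design effect) are purely hypothetical, and the variance formula is a standard approximation rather than the authors' exact procedure.

```python
import math

def multiplier_estimate(M, p_hat, n, design_effect=2.0):
    """Population size N = M / P with a delta-method 95% CI.

    M: count of unique objects distributed (assumed known exactly)
    p_hat: proportion in the RDS survey reporting receipt
    n: survey sample size; design_effect inflates the binomial variance.
    """
    N = M / p_hat
    var_p = design_effect * p_hat * (1 - p_hat) / n     # DE * p(1-p)/n
    se_N = (M / p_hat**2) * math.sqrt(var_p)            # delta method: dN/dp = -M/p^2
    return N, (N - 1.96 * se_N, N + 1.96 * se_N)

# hypothetical numbers: 500 objects distributed, 25% receipt among n = 400
N, ci = multiplier_estimate(500, 0.25, 400)
print(round(N), [round(c) for c in ci])
```

Because the standard error scales like M/p², the interval widens sharply as P shrinks — the quantitative form of the abstract's advice to distribute more objects or use longer reference periods.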
Estimation of scattering object characteristics for image reconstruction using a nonzero background.
Jin, Jing; Astheimer, Jeffrey; Waag, Robert
2010-06-01
Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.
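The cross-correlation step of the noncircular-object method — estimating arrival-time differences whose parabolas envelope the boundary — rests on a standard peak-of-cross-correlation delay estimate, sketched below with synthetic pulses (the waveforms and sampling interval are hypothetical, not the paper's measured scattering data).

```python
import numpy as np

def time_difference(sig_a, sig_b, dt):
    """Estimate the delay of sig_b relative to sig_a via peak cross-correlation."""
    xc = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(xc) - (len(sig_a) - 1)   # lag in samples
    return lag * dt

dt = 1e-3
t = np.arange(0, 1, dt)
pulse = np.exp(-((t - 0.3) / 0.02) ** 2)     # reference arrival
delayed = np.exp(-((t - 0.35) / 0.02) ** 2)  # arrival 0.05 s later
print(time_difference(pulse, delayed, dt))
```

Each such time difference, combined with the source/receiver geometry, fixes one parabola; the boundary estimate is the envelope of the family of parabolas over receiver positions.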
Linear estimates of structure functions from deep inelastic lepton-nucleon scattering data. Part 1
International Nuclear Information System (INIS)
Anikeev, V.B.; Zhigunov, V.P.
1991-01-01
This paper concerns the linear estimation of structure functions from muon(electron)-nucleon scattering. The expressions obtained for the structure function estimates provide correct analysis of the random error and the bias. The bias arises because of the finite number of experimental data and the finite resolution of the experiment. The approach suggested may become useful for data handling from experiments at HERA. 9 refs
Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation
Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou
2018-06-01
Auto-regressive (AR) spectral estimation technology is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than fast Fourier transform (FFT) based methods, provided the data length is not too short. It enables about 3 times improvement over FFT at a moderate spatial resolution.
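An AR spectral estimate locates a spectral peak from a short record far more sharply than a raw FFT periodogram. The sketch below uses the Yule-Walker (autocorrelation) method — the paper may well use a different AR estimator (e.g. Burg) — and a generic noisy tone standing in for a Brillouin line; the model order and signal are assumptions.

```python
import numpy as np

def yule_walker_psd(x, order, n_freq=512):
    """AR power spectral density via the Yule-Walker (autocorrelation) method."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])        # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:order + 1])     # driving-noise variance
    freqs = np.linspace(0, 0.5, n_freq)           # cycles/sample
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    psd = sigma2 / np.abs(1 - z @ a) ** 2
    return freqs, psd

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 0.1 * t) + 0.5 * rng.normal(size=n)  # noisy spectral line
freqs, psd = yule_walker_psd(x, order=8)
print(round(freqs[np.argmax(psd)], 2))
```

The Brillouin frequency shift would be read off as the peak location of the AR spectrum; the "not too short" caveat corresponds to the autocorrelation estimates becoming unreliable for small n.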
International Nuclear Information System (INIS)
Charalampous, Georgios; Hardalupas, Yannis
2011-01-01
The dependence of fluorescent and scattered light intensities from spherical droplets on droplet diameter was evaluated using Mie theory. The emphasis is on the evaluation of droplet sizing based on the ratio of laser-induced fluorescence and scattered light intensities (LIF/Mie technique). A parametric study is presented, which includes the effects of scattering angle, the real part of the refractive index and the dye concentration in the liquid (determining the imaginary part of the refractive index). The assumption underlying accurate sizing measurements, that the fluorescent and scattered light intensities are proportional to the volume and surface area of the droplets respectively, is not generally valid. More accurate sizing measurements can be performed with minimal dye concentration in the liquid and by collecting light at a scattering angle of 60 deg. rather than the commonly used angle of 90 deg. Unfavorable to sizing accuracy are oscillations of the scattered light intensity with droplet diameter, which are pronounced in the sidescatter direction (90 deg.) and for droplets with refractive indices around 1.4.
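Under the idealized assumption the abstract questions — fluorescence proportional to droplet volume (d³) and scattering proportional to surface area (d²) — the LIF/Mie ratio is linear in diameter, which is the whole appeal of the technique. The sketch below shows that ideal inversion; the calibration constant and signal values are hypothetical, and real Mie oscillations break the clean d³/d² behavior.

```python
def lif_mie_diameter(i_lif, i_mie, k_cal):
    """Invert I_LIF / I_Mie = k_cal * d for droplet diameter d (ideal d^3/d^2 model)."""
    return (i_lif / i_mie) / k_cal

# hypothetical signals from a droplet of diameter 30 um, in units where k_cal = 1
d = 30e-6
print(lif_mie_diameter(d**3, d**2, k_cal=1.0))
```

The parametric study in the abstract quantifies how far real droplets deviate from this linear model as a function of scattering angle and dye concentration.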
Bergstrom, Robert W.; Pilewskie, Peter; Schmid, Beat; Russell, Philip B.
2003-01-01
Using measurements of the spectral solar radiative flux and optical depth for 2 days (24 August and 6 September 2000) during the SAFARI 2000 intensive field experiment and a detailed radiative transfer model, we estimate the spectral single scattering albedo of the aerosol layer. The single scattering albedo is similar on the 2 days even though the optical depth for the aerosol layer was quite different. The aerosol single scattering albedo was between 0.85 and 0.90 at 350 nm, decreasing to 0.6 in the near infrared. The magnitude and decrease with wavelength of the single scattering albedo are consistent with the absorption properties of small black carbon particles. We estimate the uncertainty in the single scattering albedo due to the uncertainty in the measured fractional absorption and optical depths. The uncertainty in the single scattering albedo is significantly less on the high-optical-depth day (6 September) than on the low-optical-depth day (24 August). On the high-optical-depth day, the uncertainty in the single scattering albedo is 0.02 in the midvisible whereas on the low-optical-depth day the uncertainty is 0.08 in the midvisible. On both days, the uncertainty becomes larger in the near infrared. We compute the radiative effect of the aerosol by comparing calculations with and without the aerosol. The effect at the top of the atmosphere (TOA) is to cool the atmosphere by 13 W/sq m on 24 August and 17 W/sq m on 6 September. The effect on the downward flux at the surface is a reduction of 57 W/sq m on 24 August and 200 W/sq m on 6 September. The aerosol effect on the downward flux at the surface is in good agreement with the results reported from the Indian Ocean Experiment (INDOEX).
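The quantity retrieved here has a simple definition: the single scattering albedo is the scattering coefficient divided by the extinction (scattering plus absorption) coefficient. A one-line sketch with hypothetical midvisible coefficients:

```python
def single_scattering_albedo(scattering_coeff, absorption_coeff):
    """omega_0 = scattering / (scattering + absorption); 1.0 means purely scattering."""
    return scattering_coeff / (scattering_coeff + absorption_coeff)

# hypothetical midvisible coefficients (arbitrary consistent units) for a smoke layer
print(round(single_scattering_albedo(85.0, 15.0), 2))
```

Values near 0.85-0.90 at 350 nm falling toward 0.6 in the near infrared, as reported, indicate that absorption grows relative to scattering with wavelength, consistent with black carbon.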
Small-angle X-ray scattering (SAXS) for metrological size determination of nanoparticles
Energy Technology Data Exchange (ETDEWEB)
Gleber, Gudrun; Krumrey, Michael; Cibik, Levent; Marggraf, Stefanie; Mueller, Peter [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany); Hoell, Armin [Helmholtz-Zentrum Berlin, Albert-Einstein-Str. 15, 12489 Berlin (Germany)
2011-07-01
To measure the size of nanoparticles, different measurement methods are available, but their results are often not compatible. In the framework of a European metrology project we use Small-Angle X-ray Scattering (SAXS) to determine the size and size distribution of nanoparticles in aqueous solution, where the special challenge is the traceability of the results. The experiments were performed at the Four-Crystal Monochromator (FCM) beamline in the laboratory of Physikalisch-Technische Bundesanstalt (PTB) at BESSY II using the SAXS setup of the Helmholtz-Zentrum Berlin (HZB). We measured different particles made of PMMA and gold in a diameter range from 200 nm down to about 10 nm. The aspects of traceability can be classified in two parts: the first is the experimental part, with the uncertainties of distances, angles, and wavelength; the second is the analysis part, with the uncertainty of the choice of the model used for fitting the data. In this talk we present the degree of uncertainty achieved so far in this work.
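The model-fitting part of a SAXS size analysis rests on the sphere form factor, whose minima positions encode the particle radius (first minimum at qR ≈ 4.493 for monodisperse spheres). A minimal sketch with an assumed 10 nm radius; real analyses also fold in polydispersity and instrumental smearing.

```python
import numpy as np

def sphere_form_factor(q, radius):
    """Normalized sphere form factor P(q) = [3 (sin(qR) - qR cos(qR)) / (qR)^3]^2."""
    qr = np.asarray(q) * radius
    amp = np.ones_like(qr)
    nz = qr != 0
    amp[nz] = 3 * (np.sin(qr[nz]) - qr[nz] * np.cos(qr[nz])) / qr[nz] ** 3
    return amp ** 2

q = np.linspace(1e-3, 1.0, 2000)         # scattering vector, 1/nm
p = sphere_form_factor(q, radius=10.0)   # 10 nm radius sphere
# first minimum of P(q) sits near qR ~ 4.493, a standard sizing handle
i_min = np.argmin(p[:len(p) // 2])
print(q[i_min])
```

Fitting measured intensity with this model (plus a size distribution) is the "choice of model" whose uncertainty the abstract flags as part of the traceability budget.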
DEFF Research Database (Denmark)
Cannavacciuolo, L.; Sommer, C.; Pedersen, J.S.
2000-01-01
We present a systematic Monte Carlo study of the scattering function S(q) of semiflexible polyelectrolytes at infinite dilution, in solutions with different concentrations of added salt. In the spirit of a theoretical description of polyelectrolytes in terms of the equivalent parameters, namely persistence length and excluded volume interactions, we used a modified wormlike chain model, in which the monomers are represented by charged hard spheres placed at distance a. The electrostatic interactions are approximated by a Debye-Hückel potential. We show that the scattering function is quantitatively described within the framework outlined in the Odijk-Skolnick-Fixman theory, in which the behavior of charged polymers is described only in terms of increasing local rigidity and excluded volume effects. Moreover, the Monte Carlo data are found to be in very good agreement with experimental scattering measurements with equilibrium...
Aerosol Light Absorption and Scattering Assessments and the Impact of City Size on Air Pollution
Paredes-Miranda, Guadalupe
The general problem of urban pollution and its relation to the city population is examined in this dissertation. A simple model suggests that pollutant concentrations should scale approximately with the square root of city population. This model and its experimental evaluation presented here serve as important guidelines for urban planning and attainment of air quality standards including the limits that air pollution places on city population. The model was evaluated using measurements of air pollution. Optical properties of aerosol pollutants such as light absorption and scattering plus chemical species mass concentrations were measured with a photoacoustic spectrometer, a reciprocal nephelometer, and an aerosol mass spectrometer in Mexico City in the context of the multinational project "Megacity Initiative: Local And Global Research Observations (MILAGRO)" in March 2006. Aerosol light absorption and scattering measurements were also obtained for Reno and Las Vegas, NV USA in December 2008-March 2009 and January-February 2003, respectively. In all three cities, the morning scattering peak occurs a few hours later than the absorption peak due to the formation of secondary photochemically produced aerosols. In particular, for Mexico City we determined the fraction of photochemically generated secondary aerosols to be about 75% of total aerosol mass concentration at its peak near midday. The simple 2-d box model suggests that commonly emitted primary air pollutant (e.g., black carbon) mass concentrations scale approximately as the square root of the urban population. This argument extends to the absorption coefficient, as it is approximately proportional to the black carbon mass concentration. Since urban secondary pollutants form through photochemical reactions involving primary precursors, in linear approximation their mass concentration also should scale with the square root of population. Therefore, the scattering coefficient, a proxy for particulate matter
Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm
Directory of Open Access Journals (Sweden)
Jinsheng Yang
2012-01-01
Full Text Available Recently, scatter cluster models, which precisely evaluate the performance of wireless communication systems, have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmit signals are highly correlated. In this paper, we estimate the time of arrival (TOA, the direction of arrival (DOA, and the Doppler frequency for the scatter cluster model using a modified multiple signal classification (MUSIC algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in dense multipath environments than the SAGE algorithm. Furthermore, estimation performance improves with the number of receive-array elements and the sample length. Thus, the problem of channel parameter estimation for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
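The subspace idea behind MUSIC can be sketched for the DOA-only case on a uniform linear array: eigendecompose the sample covariance, take the noise subspace, and scan steering vectors for nulls. This is a generic sketch with uncorrelated sources and assumed array parameters — the paper's contribution (temporal filtering plus spatial smoothing for highly correlated rays, joint TOA/DOA/Doppler) is not reproduced here.

```python
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda=0.5, angles=None):
    """MUSIC pseudospectrum for a uniform linear array; X is (sensors, snapshots)."""
    if angles is None:
        angles = np.linspace(-90, 90, 361)
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues ascending
    En = vecs[:, :m - n_sources]               # noise subspace
    k = np.arange(m)
    p = []
    for th in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(th))  # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)

rng = np.random.default_rng(3)
m, n_snap = 8, 200
k = np.arange(m)
doas = [-20.0, 30.0]                           # true directions, degrees
A = np.exp(-2j * np.pi * 0.5 * np.outer(k, np.sin(np.deg2rad(doas))))
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
X = A @ S + 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
angles, p = music_spectrum(X, n_sources=2)
print(angles[np.argmax(p)])
```

For the coherent rays of a scatter cluster, the covariance becomes rank-deficient and this plain version fails — which is exactly where the spatial smoothing step in the paper comes in.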
AUTOMATIC ESTIMATION OF SIZE PARAMETERS USING VERIFIED COMPUTERIZED STEREOANALYSIS
Directory of Open Access Journals (Sweden)
Peter R Mouton
2011-05-01
Full Text Available State-of-the-art computerized stereology systems combine high-resolution video microscopy and hardware-software integration with stereological methods to assist users in quantifying multidimensional parameters of importance to biomedical research, including volume, surface area, length, number, their variation and spatial distribution. The requirement for constant interactions between a trained, non-expert user and the targeted features of interest currently limits the throughput efficiency of these systems. To address this issue we developed a novel approach for automatic stereological analysis of 2-D images, Verified Computerized Stereoanalysis (VCS). The VCS approach minimizes the need for user interactions with high-contrast [high signal-to-noise ratio (S:N)] biological objects of interest. Performance testing of the VCS approach confirmed dramatic increases in the efficiency of total object volume (size) estimation, without a loss of accuracy or precision compared to conventional computerized stereology. The broad application of high-efficiency VCS to high-contrast biological objects on tissue sections could reduce labor costs, enhance hypothesis testing, and accelerate the progress of biomedical research focused on improvements in health and the management of disease.
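The volume estimation such systems automate is typically a Cavalieri-style estimator: section spacing times the area associated with each counted point times the total point count. A minimal sketch with hypothetical binary segmentation masks standing in for the automatically detected high-contrast objects (the specific VCS pipeline is not reproduced):

```python
import numpy as np

def cavalieri_volume(section_masks, section_spacing, px_area):
    """Cavalieri estimator: V = t * a_p * (total points hitting the object)."""
    hits = sum(int(m.sum()) for m in section_masks)
    return section_spacing * px_area * hits

# three hypothetical segmentation masks from consecutive sections
masks = [np.zeros((20, 20), dtype=bool) for _ in range(3)]
masks[0][5:15, 5:15] = True   # 100 hit points
masks[1][4:16, 4:16] = True   # 144 hit points
masks[2][6:14, 6:14] = True   # 64 hit points
print(cavalieri_volume(masks, section_spacing=2.0, px_area=0.25))
```

Automating the hit counting (the `m.sum()` step) is precisely where a high S:N object lets software replace the manual point-clicking that limits throughput.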
Chen, Wei-kang; Fang, Hui
2016-03-01
The basic principle of polarization-differentiation elastic light scattering spectroscopy is that, under linearly polarized incident light, singly scattered light from superficial biological tissue and diffusively scattered light from deep tissue can be separated according to their polarization characteristics. The novelty of this paper is to apply this method to the detection of particle suspensions and to realize the simultaneous measurement of particle size and number density in their natural state. We design and build a coaxial cage optical system and measure the backscatter signal at a specified angle from a polystyrene microsphere suspension. By controlling the polarization direction of the incident light with a linear polarizer and adjusting the polarization direction of the collected light with another linear polarizer, we obtain the parallel-polarized and cross-polarized elastic light scattering spectra. The difference between the two is the differential polarized elastic light scattering spectrum, which includes only the single-scattering information of the particles. We compare this spectrum to the Mie scattering calculation and extract the particle size. We then analyze the cross-polarized elastic light scattering spectrum using the extracted particle size. The analysis is based on approximate expressions accounting for light diffusion, from which we are able to obtain the number density of the particle suspension. We compare our experimental outcomes with the manufacturer-provided values and further analyze the influence of the particle diameter standard deviation on the number density extraction, by which we finally verify the experimental method. Potential applications of the method include on-line particle quality monitoring for particle manufacture as well as fat and protein density detection in milk products.
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the acquired images suffer from low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to restore remote sensing images. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on an improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the image gradient and dark channel, and the original clear image is recovered by Wiener filtering using the fast Fourier transform. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail and improves the quality evaluation indexes.
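The Wiener-filtering recovery step can be illustrated in one dimension. This is a stdlib-only sketch, not the paper's APSF pipeline: a naive O(n^2) DFT stands in for the FFT, and the signal, blur kernel, and regularization constant are invented for illustration.

```python
import cmath

def dft(x, inverse=False):
    # Naive discrete Fourier transform (stdlib stand-in for an FFT).
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def wiener_deconvolve(blurred, kernel, k_reg=1e-6):
    """Frequency-domain Wiener filter: X = conj(H) * Y / (|H|^2 + K)."""
    Y, H = dft(blurred), dft(kernel)
    X = [h.conjugate() * y / (abs(h) ** 2 + k_reg) for y, h in zip(Y, H)]
    return [v.real for v in dft(X, inverse=True)]

def circular_convolve(x, h):
    # Circular convolution, matching the DFT's periodic boundary model.
    n = len(x)
    return [sum(x[j] * h[(i - j) % n] for j in range(n)) for i in range(n)]

signal = [0, 0, 1, 4, 2, 0, 0, 0]            # toy 1-D "clear image"
kernel = [0.6, 0.25, 0, 0, 0, 0, 0, 0.15]    # toy blur kernel (circular)
blurred = circular_convolve(signal, kernel)
restored = wiener_deconvolve(blurred, kernel)
print([round(v, 3) for v in restored])
```

In the noiseless case the filter inverts the blur almost exactly; the regularization constant K matters once noise is present, trading restoration sharpness against noise amplification.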
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Energy Technology Data Exchange (ETDEWEB)
Tadayyon, Hadi [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Sadeghi-Naini, Ali; Czarnota, Gregory, E-mail: Gregory.Czarnota@sunnybrook.ca [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Radiation Oncology, Faculty of Medicine, University of Toronto, Toronto, Ontario M5T 1P5 (Canada); Wirtzfeld, Lauren [Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Wright, Frances C. [Division of Surgical Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada)
2014-01-15
Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor
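The linear spectral fit behind the midband fit, slope, and 0-MHz intercept can be sketched as an ordinary least-squares line through the (calibrated) power spectrum in dB over the usable bandwidth. The frequencies and power values below are synthetic and the helper function is hypothetical, not the authors' implementation.

```python
def linear_fit(freqs_mhz, spectrum_db):
    # Ordinary least squares: spectrum_db ~ intercept + slope * freq.
    n = len(freqs_mhz)
    fx = sum(freqs_mhz) / n
    fy = sum(spectrum_db) / n
    sxx = sum((f - fx) ** 2 for f in freqs_mhz)
    sxy = sum((f - fx) * (y - fy) for f, y in zip(freqs_mhz, spectrum_db))
    slope = sxy / sxx                  # dB/MHz
    intercept = fy - slope * fx        # extrapolated 0-MHz intercept, dB
    return slope, intercept

freqs = [4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0]                  # hypothetical band, MHz
power = [-52.0, -50.5, -49.0, -47.5, -46.0, -44.5, -43.0]    # synthetic spectrum, dB
slope, intercept = linear_fit(freqs, power)
midband = intercept + slope * 5.5      # fitted value at the centre frequency
print(slope, intercept, midband)
```

The midband fit is simply the fitted line evaluated at the centre frequency, which is why the three parameters are reported together: any two determine the third.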
Energy Technology Data Exchange (ETDEWEB)
Haverland, R. L.; Post, D. F.; Cooper, L. R.; Shirley, E. D.
1985-07-01
Particle-size distribution and plant-available water are basic inputs to studies of range, forest and cultivated land. Since the conventional laboratory procedures for determining these parameters are time consuming, an improved method for making these measurements is desirable. Weiss and Frock (1976) reported results from an instrument employing the principle of laser light scattering to measure particle-size distribution. The instrument was reported to be of high precision and yielded reproducible results. The laser light-scattering instrument used in this study is the Microtrac Particle-size Analyzer Model 7991-0, manufactured by Leeds and Northrup. The particle-size analysis range of this model is from 1.9 to 176 μm, which does not correspond to the entire fine earth fraction (< 2 mm) usually characterized by soil scientists. It is, therefore, desirable to develop predictive equations to estimate the soil texture of the fine earth fraction. We believe data from this instrument could be used to predict other soil properties. This paper reports on using Microtrac data to estimate the plant-available water holding capacity and soil texture of Arizona soils. Two hundred and forty-seven Arizona soils were used in this study. Most of these soils (approximately 230) are thermic or hyperthermic and arid or semiarid soils of dominantly mixed mineralogy, as described on the Arizona General Soils Map (Jay et al., 1975). An array of soil horizons is included, with approximately one half of the samples coming from the A or Ap surface horizons. The other half of the samples are from the subsurface B or C horizons.
Directory of Open Access Journals (Sweden)
Alexandre Bambina
2018-01-01
Limitation of cloak-size reduction is investigated numerically by a finite-difference time-domain (FDTD) method. A metallic pole that imitates an antenna is cloaked against electromagnetic-wave propagation in the microwave range with an anisotropic, parameter-gradient medium. The cloaking structure is a metamaterial submerged in a plasma confined in a vacuum chamber made of glass. The smooth-permittivity plasma can be compressed in the radial direction, which enables a decrease in the size of the cloak. Numerical analysis comparing the scattered waves in various cases shows a strong reduction of the scattered wave when the radius of the cloak is larger than a quarter of one wavelength. This result indicates that the required size of the cloaking layer exceeds the object scale in the Rayleigh scattering regime.
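The core of any FDTD solver is the leapfrog update of staggered electric and magnetic fields. The minimal one-dimensional free-space sketch below illustrates that update at the "magic" time step (c*dt = dx); the paper's anisotropic-cloak simulation is a far more involved 2-D/3-D problem, and the grid size, source position, and pulse shape here are arbitrary choices.

```python
import math

def fdtd_1d(steps, size=200, src=50):
    """1-D Yee leapfrog in normalized free-space units at Courant number 1,
    with a soft Gaussian source; reflecting (PEC-like) grid edges."""
    ez = [0.0] * size
    hy = [0.0] * size
    for t in range(steps):
        for k in range(size - 1):                # H update (half time step)
            hy[k] += ez[k + 1] - ez[k]
        for k in range(1, size):                 # E update (half time step)
            ez[k] += hy[k] - hy[k - 1]
        ez[src] += math.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
    return ez

field = fdtd_1d(120)
print(max(abs(v) for v in field))
```

At Courant number 1 the 1-D scheme is dispersionless, so the injected pulse splits and travels exactly one cell per step; after 120 steps the right-going half sits near cell 140.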
Mclean, Elizabeth L; Forrester, Graham E
2018-04-01
We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species), scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more
DEFF Research Database (Denmark)
Kristensen, Philip Trøst; Lodahl, Peter; Mørk, Jesper
2010-01-01
We present an accurate, stable, and efficient solution to the Lippmann–Schwinger equation for electromagnetic scattering in two dimensions. The method is well suited for multiple scattering problems and may be applied to problems with scatterers of arbitrary shape or non-homogenous background mat...
Aerial estimation of the size of gull breeding colonies
Kadlec, J.A.; Drury, W.H.
1968-01-01
Counts on photographs and visual estimates of the numbers of territorial gulls are usually reliable indicators of the number of gull nests, but single visual estimates are not adequate to measure the number of nests in individual colonies. To properly interpret gull counts requires that several islands with known numbers of nests be photographed to establish the ratio of gulls to nests applicable for a given local census. Visual estimates are adequate to determine total breeding gull numbers by regions. Neither visual estimates nor photography will reliably detect annual changes of less than about 2.5 percent.
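The calibration procedure described above, establishing a gulls-per-nest ratio on reference islands with known nest counts and applying it to a photographic census, can be sketched as follows. The input format and all counts are hypothetical.

```python
def gulls_per_nest_ratio(reference_islands):
    """Pooled ratio of counted gulls to known nests on photographed
    reference islands; input is a list of (gulls counted, nests known)."""
    total_gulls = sum(g for g, n in reference_islands)
    total_nests = sum(n for g, n in reference_islands)
    return total_gulls / total_nests

def estimate_nests(gull_count, ratio):
    # Convert a photographic gull count to an estimated nest count.
    return gull_count / ratio

reference = [(260, 100), (510, 200), (130, 50)]   # hypothetical reference islands
r = gulls_per_nest_ratio(reference)
print(round(estimate_nests(1300, r)))
```

Pooling the reference islands gives a single local ratio, consistent with the abstract's point that the ratio must be established for each local census rather than assumed universally.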
Inferring Saving in Training Time From Effect Size Estimates
National Research Council Canada - National Science Library
Burright, Burke
2000-01-01
.... Students' time saving represents a major potential benefit of using them. This paper fills a methodology gap in estimating the students' timesaving benefit of asynchronous training technologies...
Evaluation of design flood estimates with respect to sample size
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimation give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended, where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary moments, linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
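A minimal worked example of a design flood estimate: fitting a 2-parameter Gumbel (EV1) distribution to annual maxima by ordinary moments and evaluating the T-year return level. The data are synthetic and this is only one of the distribution/estimation-method combinations the study compares.

```python
import math
import statistics

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments Gumbel fit: beta = s*sqrt(6)/pi,
    mu = mean - 0.5772*beta (0.5772 is the Euler-Mascheroni constant)."""
    mean = statistics.mean(annual_maxima)
    s = statistics.stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi
    mu = mean - 0.5772 * beta
    return mu, beta

def return_level(mu, beta, T):
    # Quantile with annual exceedance probability 1/T.
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Synthetic annual maximum discharges (m3/s), purely illustrative.
floods = [310, 285, 420, 355, 298, 510, 330, 275, 390, 445,
          360, 305, 480, 340, 295]
mu, beta = gumbel_fit_moments(floods)
q100 = return_level(mu, beta, 100)     # 100-year design flood
print(mu, beta, q100)
```

In a cross-validation test bench like the one described, the same fit would be repeated on subsamples and the spread of q100 across subsamples would quantify the stability criterion.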
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2014-01-01
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
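The inversion treats a measured magnetization curve as a superposition of Langevin terms, one per particle population. The forward model that such an inversion inverts can be sketched as below; this is not the MINORIM program, and the particle densities and moments are invented for illustration.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, T*m/A
KB = 1.380649e-23             # Boltzmann constant, J/K

def langevin(x):
    # L(x) = coth(x) - 1/x, with the series limit x/3 near zero to avoid 0/0.
    if abs(x) < 1e-4:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def magnetization(h_field, components, temperature=295.0):
    """Forward model: M(H) = sum_i n_i * m_i * L(mu0 * m_i * H / (kB * T)),
    with components = [(n_i in 1/m^3, m_i in A*m^2), ...] and H in A/m."""
    return sum(n * m * langevin(MU0 * m * h_field / (KB * temperature))
               for n, m in components)

# Hypothetical bimodal ferrofluid: small and large magnetite-like particles.
ferrofluid = [(1e20, 2e-19), (5e19, 8e-19)]
saturation = sum(n * m for n, m in ferrofluid)   # saturation magnetization, A/m
print(magnetization(1e4, ferrofluid), saturation)
```

The inversion described in the abstract solves the reverse problem: given M(H) at many fields, recover the set of (n_i, m_i) under a non-negativity constraint on the number densities.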
Particle size analysis in estimating the significance of airborne contamination
International Nuclear Information System (INIS)
1978-01-01
In this report, information on pertinent methods and techniques for analysing particle size distributions is compiled. The principles underlying the measurement methods are described, and the merits of different methods in relation to the information being sought and to their usefulness in the laboratory and in the field are explained. Descriptions of sampling methods, gravitational and inertial particle separation methods, electrostatic sizing devices, diffusion batteries, optical sizing techniques and autoradiography are included. Finally, the report considers sampling for respirable activity and problems related to instrument calibration
Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.
2018-01-01
Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls, which are subject to extrusion, avoidance, and other gear-associated biases. Alternatively, location and biomass of DSLs can be estimated with active acoustic techniques, though estimates are often made in aggregate without regard to size or taxon specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3, and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls, and average sizes of animals were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques and density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.
Foster, E; Matthews, J N S; Lloyd, J; Marshall, L; Mathers, J C; Nelson, M; Barton, K L; Wrieden, W L; Cornelissen, P; Harris, J; Adamson, A J
2008-01-01
A number of methods have been developed to assist subjects in providing an estimate of portion size, but their application in improving portion size estimation by children has not been investigated systematically. The aim was to develop portion size assessment tools for use with children and to assess the accuracy of children's estimates of portion size using the tools. The tools were food photographs, food models and an interactive portion size assessment system (IPSAS). Children (n = 201), aged 4-16 years, were supplied with known quantities of food to eat, in school. Food leftovers were weighed. Children estimated the amount of each food using each tool, 24 h after consuming the food. The age-specific portion sizes represented were based on portion sizes consumed by children in a national survey. Significant differences were found between the accuracy of estimates using the three tools. Children of all ages performed well using the IPSAS and food photographs. The accuracy and precision of estimates made using the food models were poor. For all tools, estimates of the amount of food served were more accurate than estimates of the amount consumed. Issues relating to the reporting of leftover foods, which affect estimates of the amounts of foods actually consumed, require further study. The IPSAS has shown potential for assessment of dietary intake with children. Before practical application in assessment of dietary intake of children the tool would need to be expanded to cover a wider range of foods and to be validated in a 'real-life' situation.
Thompson, J K; Dolce, J J
1989-05-01
Thirty-two asymptomatic college females were assessed on multiple aspects of body image. Subjects' estimation of the size of three body sites (waist, hips, thighs) was affected by instructional protocol. Emotional ratings, based on how they "felt" about their body, elicited ratings that were larger than actual and ideal size measures. Size ratings based on rational instructions were no different from actual sizes, but were larger than ideal ratings. There were no differences between actual and ideal sizes. The results are discussed with regard to methodological issues involved in body image research. In addition, a working hypothesis that differentiates affective/emotional from cognitive/rational aspects of body size estimation is offered to complement current theories of body image. Implications of the findings for the understanding of body image and its relationship to eating disorders are discussed.
Multiple leakage localization and leak size estimation in water networks
Abbasi, N.; Habibi, H.; Hurkens, C.A.J.; Klabbers, M.D.; Tijsseling, A.S.; Eijndhoven, van S.J.L.
2012-01-01
Water distribution networks experience considerable losses due to leakage, often at multiple locations simultaneously. Leakage detection and localization based on sensor placement and online pressure monitoring could be fast and economical. Using the difference between estimated and measured
Deeply Virtual Compton scattering at CERN. What is the size of the proton?
Energy Technology Data Exchange (ETDEWEB)
Joerg, Philipp
2017-04-27
Tremendous efforts have been made to understand the Englert-Brout-Higgs-Guralnik-Hagen-Kibble mechanism, which led to the successful discovery of the Higgs Boson and the clarification of the origin of the mass of fundamental particles. However, it is often forgotten that the vast majority of visible matter is given by baryons, which gain most of their mass dynamically within poorly known non-perturbative quantum chromodynamics processes. The best laboratory to study the underlying mechanisms of non-perturbative quantum chromodynamics is still the nucleon, and the central question of how the macroscopic properties of a nucleon, like its mass, spin and size, can be comprehensively decomposed into a microscopic description in terms of quarks, antiquarks and gluons remains open. A major part of the COMPASS-II program is dedicated to the investigation of Generalized Parton Distributions (GPDs), which aim for the most complete description of the partonic structure of the nucleon, comprising both spatial and kinematic distributions. By including transverse degrees of freedom, a three-dimensional picture of baryonic matter is created, which will revolutionise our understanding of what comprises 99 percent of the visible matter. GPDs are experimentally accessible via lepton-induced exclusive reactions, in particular Deeply Virtual Compton Scattering (DVCS) and Deeply Virtual Meson Production (DVMP). At COMPASS, these processes are investigated using a high-intensity muon beam of 160 GeV/c together with a 2.5 m-long liquid hydrogen target and an open-field two-stage spectrometer to detect and identify charged and neutral particles. In order to optimize the selection of exclusive reactions at these energies, the target is surrounded by a new barrel-shaped time-of-flight system, which detects the recoiling target particles. A pilot run dedicated to the measurement of Generalized Parton Distributions performed in 2012 allows for detailed performance studies
Estimation of Amount of Scattered Neutrons at Devices PFZ and GIT-12 by MCNP Simulations
Directory of Open Access Journals (Sweden)
Ondrej Šíla
2013-01-01
Our work is dedicated to the pinch effect occurring during a current discharge in deuterium plasma, and our results are connected with two devices: the plasma focus PFZ, situated at the Faculty of Electrical Engineering, CTU, Prague, and the Z-pinch GIT-12, situated at the Institute of High Current Electronics, Tomsk. Neutrons are produced in the fusion reactions that proceed in the plasma during the discharge, and we use these neutrons as an instrument for plasma diagnostics. Although neutrons have the advantage of not interacting with the electric and magnetic fields inside the device, they are inevitably scattered by the materials placed between their source and the probe, and the information they carry about the plasma is distorted. To estimate the rate of neutron scattering we use the MCNP code.
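MCNP itself is a full transport code, but the idea of Monte Carlo estimation of scattered neutrons can be conveyed with a toy 1-D slab model: exponential free paths, a per-collision absorption probability, and isotropic rescattering. All numbers below (slab thickness, cross-section, absorption probability) are invented for illustration.

```python
import math
import random

def slab_scatter_fraction(thickness_cm, sigma_total_cm, p_absorb, n=50000, seed=1):
    """Toy Monte Carlo: neutrons enter a 1-D slab normally; free paths are
    exponential with macroscopic cross-section sigma_total_cm (1/cm); each
    collision absorbs with probability p_absorb, otherwise rescatters with
    an isotropic direction cosine. Returns the fraction of source neutrons
    that leave the slab after at least one scattering event."""
    rng = random.Random(seed)
    scattered_out = 0
    for _ in range(n):
        x, mu, collided = 0.0, 1.0, False
        while True:
            x += mu * (-math.log(rng.random()) / sigma_total_cm)
            if x < 0.0 or x > thickness_cm:      # escaped (either face)
                scattered_out += collided
                break
            if rng.random() < p_absorb:          # absorbed in the slab
                break
            collided = True
            mu = rng.uniform(-1.0, 1.0)          # isotropic rescatter
        # neutrons that escape without colliding are not counted as scattered
    return scattered_out / n

print(slab_scatter_fraction(2.0, 0.5, 0.1))
```

Thicker intervening material scatters a larger fraction of the neutrons, which is exactly the distortion of plasma information the abstract sets out to quantify.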
Energy Technology Data Exchange (ETDEWEB)
Guetlein, Achim; Ciemniak, Christian; Feilitzsch, Franz von; Lanfranchi, Jean-Come; Oberauer, Lothar; Potzel, Walter; Roth, Sabine; Schoenert, Stefan; Sivers, Moritz von; Strauss, Raimund; Wawoczny, Stefan; Willers, Michael; Zoeller, Andreas [Technische Universitaet Muenchen, Physik-Department, E15 (Germany)
2012-07-01
Coherent neutrino-nucleus scattering (CNNS) is a neutral-current process of the weak interaction and is thus flavor independent. A low-energy neutrino scatters off a target nucleus. For low transferred momenta, the wavelength of the exchanged Z{sup 0} boson is comparable to the diameter of the target nucleus; the neutrino therefore interacts with all nucleons coherently, and the cross section for CNNS is enhanced. To observe CNNS for the first time, we are developing cryogenic detectors with a target mass of about 10 g each and an energy threshold of less than 0.5 keV. The current status of this development is presented, as well as the estimated background for an experiment in the vicinity of a nuclear power reactor as a strong neutrino source.
Size Estimation of Non-Cooperative Data Collections
Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice
2012-01-01
With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is the knowledge of a data source size that enables the algorithms to make accurate de-
Radiographic Estimation of the Location and Size of kidneys in ...
African Journals Online (AJOL)
Keywords: Radiography, Location, Kidney size, Local dogs. The kidneys of dogs and cats are located retroperitoneally (Bjorling, 1993). Visualization of the kidneys on radiographs is possible due to the contrast provided by the perirenal fat (Grandage, 1975). However, this perirenal fat rarely covers the ventral surface of the ...
Smith, Zachary J; Chu, Kaiqin; Wachsmann-Hogiu, Sebastian
2012-01-01
We report on the construction of a Fourier plane imaging system attached to a cell phone. By illuminating particle suspensions with a collimated beam from an inexpensive diode laser, angularly resolved scattering patterns are imaged by the phone's camera. Analyzing these patterns with Mie theory results in predictions of size distributions of the particles in suspension. Despite using consumer grade electronics, we extracted size distributions of sphere suspensions with better than 20 nm accuracy in determining the mean size. We also show results from milk, yeast, and blood cells. Performing these measurements on a portable device presents opportunities for field-testing of food quality, process monitoring, and medical diagnosis.
Ab initio estimates of the size of the observable universe
International Nuclear Information System (INIS)
Page, Don N.
2011-01-01
When one combines multiverse predictions by Bousso, Hall, and Nomura for the observed age and size of the universe in terms of the proton and electron charge and masses with anthropic predictions of Carter, Carr, and Rees for these masses in terms of the charge, one gets that the age of the universe should be roughly the inverse 64th power, and the cosmological constant should be around the 128th power, of the proton charge. Combining these with a further renormalization group argument gives a single approximate equation for the proton charge, with no continuous adjustable or observed parameters, and with a solution that is within 8% of the observed value. Using this solution gives large logarithms for the age and size of the universe and for the cosmological constant that agree with the observed values within 17%
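The quoted agreement of the large logarithms can be checked with a few lines of arithmetic. The inputs below (fine-structure constant, age of the universe, Planck time) are standard observed values assumed here, not taken from the paper, and "proton charge" is interpreted as the square root of the fine-structure constant in natural units.

```python
import math

alpha = 1.0 / 137.036            # fine-structure constant = e^2 in natural units
e_charge = math.sqrt(alpha)      # proton charge in natural units (assumption)
age_s = 13.8e9 * 3.156e7         # age of the universe, seconds
t_planck = 5.391e-44             # Planck time, seconds

log_age = math.log(age_s / t_planck)       # large logarithm of the age in Planck units
log_pred = 64 * math.log(1.0 / e_charge)   # prediction: age ~ e_charge^(-64)

print(round(log_age, 1), round(log_pred, 1),
      round(abs(log_age - log_pred) / log_pred, 3))
```

With these inputs the two logarithms differ by roughly 11 percent, consistent with the "within 17%" agreement stated in the abstract.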
Directory of Open Access Journals (Sweden)
Annegret Grimm
Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to
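The simplest member of the estimator family discussed above is the two-sample Lincoln-Petersen estimator; the sketch below uses Chapman's bias-corrected form. The survey counts are hypothetical, and the study's modified multiple-sample versions and heterogeneity models are considerably more elaborate.

```python
def chapman_estimate(marked_first, caught_second, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed
    population size from a two-sample capture-recapture study:
    N = (M+1)(C+1)/(R+1) - 1."""
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1

# Hypothetical gecko survey: 50 marked, 60 caught on a second occasion,
# of which 15 carried marks.
n_hat = chapman_estimate(50, 60, 15)
print(round(n_hat))
```

The estimator assumes a closed population and equal catchability; the individual heterogeneity emphasized in the abstract violates the latter assumption, which is why the heterogeneity-aware models ranked best.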
Audiovisual Interval Size Estimation Is Associated with Early Musical Training.
Directory of Open Access Journals (Sweden)
Mary Kathryn Abel
Full Text Available Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
Audiovisual Interval Size Estimation Is Associated with Early Musical Training.
Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
DEFF Research Database (Denmark)
Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders
, were investigated, and our estimates were compared to the ICES advice. Only size-specific catch data were used, in order to emulate data-limited situations. The simulation analysis reveals that the status of the stock, i.e. F/Fmsy, is estimated more accurately than the fishing mortality F itself. ... Specific knowledge of the natural mortality improves the estimation more than having information about all other life history parameters. Our approach gives, at least qualitatively, an estimated stock status which is similar to the results of an age-based assessment. Since our approach only uses size ...
Goertz, David E.; Frijlink, Martijn E.; de Jong, N.; van der Steen, A.F.W.
2006-01-01
An experimental lipid-encapsulated contrast agent composed substantially of micrometer to submicrometer diameter bubbles was evaluated for its capacity to produce nonlinear scattering in response to high transmit frequencies. Agent characterization experiments were conducted at transmit frequencies
Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok
2016-01-01
Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of using information on scattered artifacts for estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomographic imaging (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest (bone-equivalent solid material containing 50 mg HA per cm³, and water) to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those from osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.
Estimation of the size of the female sex worker population in Rwanda using three different methods.
Mutagoma, Mwumvaneza; Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin
2015-10-01
HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture-recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture-recapture method was 3205 (95% confidence interval: 2998-3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916-2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture-recapture, enumeration, and multiplier methods. The capture-recapture and enumeration methods provided similar estimates of the female sex worker population size in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. © The Author(s) 2015.
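The multiplier method used for the Kigali estimate is simple arithmetic: divide a benchmark count (people known to a service or register) by the proportion of surveyed population members linked to that benchmark. The sketch below uses hypothetical numbers, not the Rwandan data:

```python
def multiplier_estimate(benchmark_count, proportion_reporting):
    """Multiplier method for hidden-population size estimation.

    If benchmark_count people from the target population appear in an
    independent data source (e.g. a clinic register), and a survey finds
    that a fraction proportion_reporting of the population is covered by
    that source, the population size is count / fraction."""
    if not 0.0 < proportion_reporting <= 1.0:
        raise ValueError("proportion must be in (0, 1]")
    return benchmark_count / proportion_reporting

# Hypothetical: 450 women appear in a clinic register, and 20% of
# surveyed sex workers report attending that clinic.
print(multiplier_estimate(450, 0.20))  # 2250.0
```

The method's validity rests on the benchmark being accurate and the survey sample being representative, which is why the study cross-checks it against capture-recapture and enumeration.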
International Nuclear Information System (INIS)
Han, Young-Soo; Jang, Jin-Sung; Mao, Xiaodong
2015-01-01
Ferritic ODS (oxide-dispersion-strengthened) alloy is known as a primary candidate material for the cladding tubes of a sodium fast reactor (SFR) in the Generation IV research program. In ODS alloys, the major contribution to the enhanced high-temperature mechanical properties comes from the existence of nano-sized oxide precipitates, which act as obstacles to the movement of dislocations. In addition, for extremely high-temperature applications (>950 °C) in future nuclear systems, Ni-base ODS alloys are considered as candidate materials. Therefore, the characterization of nano-sized microstructures is important for determining the mechanical properties of the material. The small angle neutron scattering (SANS) technique non-destructively probes structures in materials at the nanometer length scale (1-1000 nm) and has been a very powerful tool in a variety of scientific/engineering research areas. In this study, nano-sized microstructures were quantitatively analyzed by small angle neutron scattering. Quantitative microstructural information on nano-sized oxides in ODS alloys was obtained from SANS data. The effects of the thermo-mechanical treatment on the size and volume fraction of nano-sized oxides were analyzed. For the 12Cr ODS alloy, the experimental A-ratio is two times larger than the theoretical A-ratio, and this result is considered to be due to the imperfections included in YTaO₄. For the Ni-base ODS alloy, the volume fraction of the mid-sized particles (~30 nm) increases rapidly as the hot extrusion temperature decreases.
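Size analysis of dilute nano-precipitates by SANS typically rests on the standard form factor of a homogeneous sphere. A minimal sketch of that textbook relation (not the authors' fitting code; the contrast and volume fraction below are placeholder values):

```python
import math

def sphere_form_factor(q, R):
    """Normalized scattering amplitude of a homogeneous sphere of radius R
    at momentum transfer q; tends to 1 as q -> 0 (standard SAS result)."""
    x = q * R
    if x < 1e-6:
        return 1.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def intensity(q, R, contrast=1.0, volume_fraction=0.01):
    """I(q) proportional to phi * V * contrast^2 * F(qR)^2 for a dilute
    monodisperse sphere population (arbitrary units here)."""
    V = 4.0 / 3.0 * math.pi * R**3
    return volume_fraction * V * contrast**2 * sphere_form_factor(q, R)**2

# A 30 nm (R = 15 nm) oxide particle probed at q = 0.1 nm^-1:
print(round(sphere_form_factor(0.1, 15.0), 4))  # 0.7923
```

Fitting measured I(q) with such a model over a size distribution is what yields the size and volume-fraction numbers quoted in the abstract.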
Body size estimation of self and others in females varying in BMI.
Thaler, Anne; Geuss, Michael N; Mölbert, Simone C; Giel, Katrin E; Streuber, Stephan; Romero, Javier; Black, Michael J; Mohler, Betty J
2018-01-01
Previous literature suggests that a disturbed ability to accurately identify own body size may contribute to overweight. Here, we investigated the influence of personal body size, indexed by body mass index (BMI), on body size estimation in a non-clinical population of females varying in BMI. We attempted to disentangle general biases in body size estimates and attitudinal influences by manipulating whether participants believed the body stimuli (personalized avatars with realistic weight variations) represented their own body or that of another person. Our results show that the accuracy of own body size estimation is predicted by personal BMI, such that participants with lower BMI underestimated their body size and participants with higher BMI overestimated their body size. Further, participants with higher BMI were less likely to notice the same percentage of weight gain than participants with lower BMI. Importantly, these results were only apparent when participants were judging a virtual body that was their own identity (Experiment 1), but not when they estimated the size of a body with another identity and the same underlying body shape (Experiment 2a). The different influences of BMI on accuracy of body size estimation and sensitivity to weight change for self and other identity suggest that effects of BMI on visual body size estimation are self-specific and not generalizable to other bodies.
Body size estimation of self and others in females varying in BMI.
Directory of Open Access Journals (Sweden)
Anne Thaler
Full Text Available Previous literature suggests that a disturbed ability to accurately identify own body size may contribute to overweight. Here, we investigated the influence of personal body size, indexed by body mass index (BMI), on body size estimation in a non-clinical population of females varying in BMI. We attempted to disentangle general biases in body size estimates and attitudinal influences by manipulating whether participants believed the body stimuli (personalized avatars with realistic weight variations) represented their own body or that of another person. Our results show that the accuracy of own body size estimation is predicted by personal BMI, such that participants with lower BMI underestimated their body size and participants with higher BMI overestimated their body size. Further, participants with higher BMI were less likely to notice the same percentage of weight gain than participants with lower BMI. Importantly, these results were only apparent when participants were judging a virtual body that was their own identity (Experiment 1), but not when they estimated the size of a body with another identity and the same underlying body shape (Experiment 2a). The different influences of BMI on accuracy of body size estimation and sensitivity to weight change for self and other identity suggest that effects of BMI on visual body size estimation are self-specific and not generalizable to other bodies.
International Nuclear Information System (INIS)
Mullen, R.; Thompson, J.M.; Moussa, O.; Vinnicombe, S.; Evans, A.
2014-01-01
Aim: To assess whether the size of peritumoural stiffness (PTS) on shear-wave elastography (SWE) for small primary breast cancers (≤15 mm) was associated with size discrepancies between grey-scale ultrasound (GSUS) and final histological size and whether the addition of PTS size to GSUS size might result in more accurate tumour size estimation when compared to final histological size. Materials and methods: A retrospective analysis of 86 consecutive patients between August 2011 and February 2013 who underwent breast-conserving surgery for tumours of size ≤15 mm at ultrasound was carried out. The size of PTS was compared to mean GSUS size, mean histological size, and the extent of size discrepancy between GSUS and histology. PTS size and GSUS size were combined and compared to the final histological size. Results: PTS of >3 mm was associated with a larger mean final histological size (16 versus 11.3 mm, p < 0.001). PTS size of >3 mm was associated with a higher frequency of underestimation of final histological size by GSUS of >5 mm (63% versus 18%, p < 0.001). The combination of PTS and GSUS size led to accurate estimation of the final histological size (p = 0.03). The size of PTS was not associated with margin involvement (p = 0.27). Conclusion: PTS extending beyond 3 mm from the grey-scale abnormality is significantly associated with underestimation of tumour size of >5 mm for small invasive breast cancers. Taking into account the size of PTS also led to accurate estimation of the final histological size. Further studies are required to assess the relationship between the extent of SWE stiffness and margin status. - Highlights: • Peritumoural stiffness of greater than 3 mm was associated with larger tumour size. • Underestimation of tumour size by ultrasound was associated with peritumoural stiffness size. • Combining peritumoural stiffness size with ultrasound size produced accurate tumour size estimation.
International Nuclear Information System (INIS)
Hoo, Christopher M.; Starostin, Natasha; West, Paul; Mecartney, Martha L.
2008-01-01
This paper compares the accuracy of conventional dynamic light scattering (DLS) and atomic force microscopy (AFM) for characterizing size distributions of polystyrene nanoparticles in the size range of 20-100 nm. Average DLS values for monodisperse particles were slightly higher than the nominal values, whereas AFM values were slightly lower than nominal values. Bimodal distributions were easily identified with AFM, but DLS results were skewed toward larger particles. AFM characterization of nanoparticles using automated analysis software provides an accurate and rapid analysis for nanoparticle characterization and has advantages over DLS for non-monodisperse solutions.
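DLS reports a hydrodynamic diameter obtained from the measured diffusion coefficient through the Stokes-Einstein relation. A small sketch of that conversion, assuming water at 20 °C (the instrument conditions in the paper may differ):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(D, T=293.15, viscosity=1.0016e-3):
    """Stokes-Einstein relation used by DLS: d = kT / (3*pi*eta*D),
    with D the translational diffusion coefficient in m^2/s."""
    return K_B * T / (3.0 * math.pi * viscosity * D)

def diffusion_coefficient(d, T=293.15, viscosity=1.0016e-3):
    """Inverse relation: D for a sphere of hydrodynamic diameter d."""
    return K_B * T / (3.0 * math.pi * viscosity * d)

# Round trip for a nominal 100 nm polystyrene sphere:
D = diffusion_coefficient(100e-9)
print(hydrodynamic_diameter(D))  # back to ~1e-07 m
```

Because scattered intensity grows steeply with particle size, the intensity-weighted DLS average is pulled toward large particles, which is consistent with the skew toward larger sizes reported above for bimodal samples.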
DEFF Research Database (Denmark)
Clausen, Kevin Kuhlmann; Fælled, Casper Cæsar; Clausen, Preben
2013-01-01
The present study investigates the use of a mark–resight procedure to estimate total population size in a local goose population. Using colour-ring sightings of the increasingly scattered population of Light-bellied Brent Geese Branta bernicla hrota from their Danish staging areas, we estimate a total population size of 7845 birds (95% CI: 7252–8438). This is in good agreement with numbers obtained from total counts, emphasizing that this population, although steadily increasing, is still small compared with historic numbers.
Why liquid displacement methods are sometimes wrong in estimating the pore-size distribution
Gijsbertsen-Abrahamse, A.J.; Boom, R.M.; Padt, van der A.
2004-01-01
The liquid displacement method is a commonly used method to determine the pore size distribution of micro- and ultrafiltration membranes. One of the assumptions for the calculation of the pore sizes is that the pores are parallel and thus are not interconnected. To show that the estimated pore size
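The calculation behind liquid displacement porometry is the Young-Laplace equation, which converts a breakthrough pressure into an equivalent cylindrical pore radius (exactly the parallel-pore assumption the paper questions). A sketch with illustrative values:

```python
import math

def pore_radius(delta_p, surface_tension=0.072, contact_angle_deg=0.0):
    """Young-Laplace relation used in liquid displacement porometry:
    r = 2 * gamma * cos(theta) / dP, valid for a straight cylindrical
    pore. delta_p in Pa, surface_tension in N/m, result in meters."""
    theta = math.radians(contact_angle_deg)
    return 2.0 * surface_tension * math.cos(theta) / delta_p

# With water (gamma ~ 0.072 N/m) and full wetting, a breakthrough
# pressure of 1.44e5 Pa corresponds to a pore radius of ~1 micrometer.
print(pore_radius(1.44e5))
```

When pores are interconnected, flow can bypass small pores through larger neighbours, so the pressure-to-radius mapping above no longer reflects the true pore-size distribution, which is the paper's point.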
Impact of Base Functional Component Types on Software Functional Size based Effort Estimation
Gencel, Cigdem; Buglione, Luigi
2008-01-01
Software effort estimation is still a significant challenge for software management. Although Functional Size Measurement (FSM) methods have been standardized and have become widely used by software organizations, the relationship between functional size and development effort still needs further investigation. Most studies focus on project cost drivers and consider total software functional size as the primary input to estimation models. In this study, we investigate whether u...
Association between inaccurate estimation of body size and obesity in schoolchildren
Directory of Open Access Journals (Sweden)
Larissa da Cunha Feio Costa
2015-12-01
Full Text Available Objectives: To investigate the prevalence of inaccurate estimation of own body size among Brazilian schoolchildren of both sexes aged 7-10 years, and to test whether overweight/obesity; excess body fat and central obesity are associated with inaccuracy. Methods: Accuracy of body size estimation was assessed using the Figure Rating Scale for Brazilian Children. Multinomial logistic regression was used to analyze associations. Results: The overall prevalence of inaccurate body size estimation was 76%, with 34% of the children underestimating their body size and 42% overestimating their body size. Obesity measured by body mass index was associated with underestimation of body size in both sexes, while central obesity was only associated with overestimation of body size among girls. Conclusions: The results of this study suggest there is a high prevalence of inaccurate body size estimation and that inaccurate estimation is associated with obesity. Accurate estimation of own body size is important among obese schoolchildren because it may be the first step towards adopting healthy lifestyle behaviors.
A consideration of Raman scattering in the estimation of the background in low energy TXRF
International Nuclear Information System (INIS)
Doi, M.; Shoji, T.; Yamada, T.; Wilson, R.
2000-01-01
Accurate estimation of the background in a TXRF spectrum is necessary for trace analysis. The tailing of large peaks in the spectrum is the main source of the background. Sum and escape peaks caused by an SSD detector are also part of the background. Estimation and subtraction of these peaks from the spectrum have been successful with sophisticated software. Raman scattering is another possible phenomenon that can give rise to a background peak in the spectrum. This paper explores this Raman phenomenon. We used the W-Mα line for the low energy TXRF experiments. The W-Mα line is effective for exciting aluminum, magnesium and sodium atoms. The energy of the W-Mα line, 1.78 keV, is above and near the absorption edges of these elements and yet below the absorption edge of silicon, 1.84 keV. To obtain a monochromatic W-Mα line, we used a monochromator consisting of a total reflection mirror of silicon and a crystal of RAP(001). The reflectivity of this monochromator is smaller than that of a monochromator consisting of synthesized multilayers, but the energy resolution is superior. We measured the spectra from a blank silicon wafer and a silicon wafer covered with a titanium layer. A peak caused by the elastic scattering of the incident W-Mα line is the main peak that appeared at 1.78 keV in each spectrum. There is another peak at 1.65 keV in the spectrum from the blank wafer. The ratio of the intensity of this peak to that of the main peak increases with the glancing angle. The peak at 1.65 keV does not appear in the spectrum taken from a silicon wafer covered with a titanium layer. There are no characteristic x-rays which have this same energy. Also, Compton scattering cannot account for a peak at that energy. We calculated the energies of x-rays diffracted by the silicon crystal, assuming that x-rays having a continuous spectrum are included in the incident x-rays. However, there are no diffracted x-rays which have an energy in this range. The binding energy of
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
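The compound Poisson idea can be illustrated in a scalar toy version: a Poisson-distributed number of scattering events, each adding an independent small deflection. The paper works with processes on compact Lie groups; the 2D sketch below (illustrative parameters) only shows the variance bookkeeping of such a process:

```python
import math
import random

def transmitted_angle(mean_events, kick_std, rng):
    """Toy compound Poisson model of forward multiple scattering:
    the number of scattering events is Poisson(mean_events) and each
    event adds an independent Gaussian deflection of width kick_std."""
    # Sample the Poisson count via Knuth's algorithm
    # (the stdlib random module has no poisson sampler).
    L, k, p = math.exp(-mean_events), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            break
        k += 1
    return sum(rng.gauss(0.0, kick_std) for _ in range(k))

rng = random.Random(0)
angles = [transmitted_angle(5.0, 0.1, rng) for _ in range(2000)]
var = sum(a * a for a in angles) / len(angles)
print(round(var, 3))  # close to mean_events * kick_std**2 = 0.05
```

The identity var = lambda * E[X^2] is what lets the angular spread of transmitted intensity carry information about the scattering rate, the quantity the nonparametric estimator targets.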
Energy Technology Data Exchange (ETDEWEB)
Viskari, T.
2012-07-01
Atmospheric aerosol particles have several important effects on the environment and human society. The exact impact of aerosol particles is largely determined by their particle size distributions. However, no single instrument is able to measure the whole range of the particle size distribution. Estimating a particle size distribution from multiple simultaneous measurements remains a challenge in aerosol physical research. Current methods to combine different measurements require assumptions concerning the overlapping measurement ranges and have difficulties in accounting for measurement uncertainties. In this thesis, Extended Kalman Filter (EKF) is presented as a promising method to estimate particle number size distributions from multiple simultaneous measurements. The particle number size distribution estimated by EKF includes information from prior particle number size distributions as propagated by a dynamical model and is based on the reliabilities of the applied information sources. Known physical processes and dynamically evolving error covariances constrain the estimate both over time and particle size. The method was tested with measurements from Differential Mobility Particle Sizer (DMPS), Aerodynamic Particle Sizer (APS) and nephelometer. The particle number concentration was chosen as the state of interest. The initial EKF implementation presented here includes simplifications, yet the results are positive and the estimate successfully incorporated information from the chosen instruments. For particle sizes smaller than 4 micrometers, the estimate fits the available measurements and smooths the particle number size distribution over both time and particle diameter. The estimate has difficulties with particles larger than 4 micrometers due to issues with both measurements and the dynamical model in that particle size range. The EKF implementation appears to reduce the impact of measurement noise on the estimate, but has a delayed reaction to sudden
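The heart of the EKF approach, weighting a model forecast against instrument readings according to their variances, reduces in the scalar case to the familiar Kalman update. A toy sketch with made-up concentrations, not the thesis implementation:

```python
def kalman_update(prior_mean, prior_var, measurement, meas_var):
    """Scalar Kalman update: blend a model forecast with one measurement,
    weighting each by its variance (small variance = more trusted)."""
    gain = prior_var / (prior_var + meas_var)
    mean = prior_mean + gain * (measurement - prior_mean)
    var = (1.0 - gain) * prior_var
    return mean, var

# Forecast number concentration for one size bin, then assimilate
# readings from two instruments (e.g. DMPS then APS); units illustrative.
state, var = 1000.0, 400.0                             # model forecast
state, var = kalman_update(state, var, 1100.0, 100.0)  # precise reading
state, var = kalman_update(state, var, 900.0, 900.0)   # noisy reading
print(round(state, 1), round(var, 1))  # 1065.3 73.5
```

The precise instrument pulls the estimate strongly, the noisy one only slightly, and the posterior variance shrinks with every update; that is the "reliabilities of the applied information sources" behaviour described above, extended by the EKF across many size bins with a dynamical model in between.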
Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc
2015-07-07
The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient
Energy Technology Data Exchange (ETDEWEB)
Kameya, Yuki, E-mail: ykameya@anl.gov; Lee, Kyeong O. [Argonne National Laboratory, Center for Transportation Research (United States)
2013-10-15
Regulations on particulate emissions from internal combustion engines tend to become more stringent; accordingly, the importance of particulate filters in the after-treatment system has been increasing. In this work, the applicability of ultra-small-angle X-ray scattering (USAXS) to diesel soot cake and gasoline soot was investigated. Gasoline-direct-injection engine soot was collected at different fuel injection timings. The unified fits method was applied to analyze the resultant scattering curves. The validity of the analysis was supported by comparing with carbon black and taking sample images using a transmission electron microscope, which revealed that the primary particle size ranged from 20 to 55 nm. In addition, the effects of particle-packing conditions on the USAXS measurement were demonstrated by using samples suspended in acetone. Then, the investigation was extended to characterization of diesel soot cake deposited on a diesel particulate filter (DPF). Diesel soot was trapped on a small piece of DPF at different deposition conditions, which were specified using the Peclet number. The dependence of the scattering curve on soot-deposition conditions was demonstrated. To support the interpretation of the USAXS results, soot cake samples were observed using a scanning electron microscope and the influence of particle-packing conditions on the scattering curve was discussed.
International Nuclear Information System (INIS)
Li Heng; Mohan, Radhe; Zhu, X Ronald
2008-01-01
The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
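The correction scheme described, estimating scatter as a convolution of the primary signal with a pencil-beam kernel and subtracting it, can be sketched as a fixed-point iteration. The Gaussian kernel below is only a stand-in for the measured depth-dependent point-spread functions; amplitudes and widths are illustrative:

```python
import math

def gaussian_kernel(width, sigma, amplitude):
    """Hypothetical 1D pencil-beam scatter kernel (the paper derives the
    real kernels from edge-spread measurements; a Gaussian stands in)."""
    return [amplitude * math.exp(-0.5 * (i / sigma) ** 2)
            for i in range(-width, width + 1)]

def correct_projection(projection, kernel, iterations=3):
    """Iteratively remove the scatter estimate. Scatter is modeled as
    primary (*) kernel, so refine: primary = measured - primary (*) kernel."""
    half = len(kernel) // 2
    primary = list(projection)
    for _ in range(iterations):
        scatter = [
            sum(primary[j] * kernel[half + (i - j)]
                for j in range(max(0, i - half), min(len(primary), i + half + 1)))
            for i in range(len(primary))
        ]
        primary = [max(0.0, p - s) for p, s in zip(projection, scatter)]
    return primary

measured = [100.0] * 10
corrected = correct_projection(measured, gaussian_kernel(3, 1.5, 0.02))
print(corrected[5] < measured[5])  # scatter removed -> lower primary signal
```

Because the kernel's integrated amplitude is well below one, the iteration converges quickly; the real algorithm additionally selects the kernel by water-equivalent thickness before reconstruction.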
Directory of Open Access Journals (Sweden)
Rui Zhang
2014-12-01
Full Text Available This paper presents a hierarchical approach to network construction and time series estimation in persistent scatterer interferometry (PSI) for deformation analysis using time series of high-resolution satellite SAR images. To balance computational efficiency against solution accuracy, a divide-and-conquer algorithm (i.e., two levels of PS networking and solution) is proposed for extracting deformation rates of a study area. The algorithm was tested using 40 high-resolution TerraSAR-X images collected between 2009 and 2010 over Tianjin, China, for subsidence analysis, and validated against ground-based leveling measurements. The experimental results indicate that the hierarchical approach remarkably reduces computing time and memory requirements, and that the subsidence measurements derived from the hierarchical solution are in good agreement with the leveling data.
An estimation of the structure function xF3 in neutrino-proton scattering
International Nuclear Information System (INIS)
Aoki, Kenzaburo; Arimoto, Shinsuke; Hoshino, Shigetoshi; Itoh, Nobuhisa; Konno, Toshiharu.
1981-01-01
The structure function xF₃(x, Q²) in deep-inelastic neutrino-proton scattering was estimated without differentiating with respect to Q² in the evolution function. First, the moment of the non-singlet structure function xF₃(x, Q²) is defined. Then, the kernel function f(z, Q²) is presented. Finally, the expression for the structure function xF₃ is given. The values of the structure function for various Q² are shown in five figures. A peak is seen in each figure, and the highest peak is at about Q² = 14 GeV². The analysis suggests a very small value of xF₃ in the small-Q² region. The kernel function f(x/y, Q²) may be interpreted, in quantum chromodynamics, as the probability of finding a quark with momentum fraction x arising from one with momentum fraction y. (Kato, T.)
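The moment analysis sketched in this abstract follows a standard leading-order QCD pattern. In the textbook form (the paper's exact normalization conventions may differ), the non-singlet moments, their evolution, and the kernel reconstruction read:

```latex
% Non-singlet moments of xF_3 (normalization conventions vary):
M_n(Q^2) = \int_0^1 x^{\,n-2}\,\bigl[x F_3(x, Q^2)\bigr]\,\mathrm{d}x
% Leading-order evolution, with d_n the anomalous-dimension exponent:
M_n(Q^2) = M_n(Q_0^2)\,
  \left[\frac{\alpha_s(Q^2)}{\alpha_s(Q_0^2)}\right]^{d_n}
% Reconstruction as a convolution with the kernel f(x/y, Q^2),
% interpreted as the probability of finding a quark with momentum
% fraction x inside one with fraction y:
F_3(x, Q^2) = \int_x^1 \frac{\mathrm{d}y}{y}\,
  f\!\left(\frac{x}{y},\, Q^2\right) F_3(y, Q_0^2)
```

Working with the moments directly is what allows the evolution to be carried out without differentiating with respect to Q².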
International Nuclear Information System (INIS)
Ghosh, N.; Buddhiwant, P.; Uppal, A.; Majumder, S.K.; Patel, H.S.; Gupta, P.K.
2006-01-01
We present a fast and accurate approach for the simultaneous determination of both the mean diameter and the refractive index of a collection of red blood cells (RBCs). The approach uses the peak frequency of the power spectrum and the corresponding phase angle, obtained by Fourier transforming the measured angular distribution of scattered light, to determine these parameters. Results on the measurement of two important clinical parameters, the mean cell volume and mean cell hemoglobin concentration of a collection of RBCs, are presented.
Revisit the spin-FET: Multiple reflection, inelastic scattering, and lateral size effects
Xu, Luting; Li, Xin-Qi; Sun, Qing-feng
2014-01-01
We revisit the spin-injected field-effect transistor (spin-FET) by simulating a lattice model based on a recursive lattice Green's function approach. In the one-dimensional case and coherent regime, the simulated results reveal noticeable differences from the celebrated Datta-Das model, which thus motivate an improved treatment and lead to an analytic and generalized result. The simulation also allows us to address inelastic scattering (using Büttiker's fictitious reservoir approach) and lateral...
Mean size estimation yields left-side bias: Role of attention on perceptual averaging.
Li, Kuei-An; Yeh, Su-Ling
2017-11-01
The human visual system can estimate the mean size of a set of items effectively; however, little is known about whether information from each visual field contributes equally to mean size estimation. In this study, we examined whether a left-side bias (LSB), the tendency of perceptual judgments to depend more heavily on inputs from the left visual field, affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results revealed an LSB: a larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of the LSB increased with stimulus-onset asynchrony (SOA) when the spots on the left side were presented earlier than those on the right. In contrast, the LSB vanished and then reversed with SOA when the spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence that the LSB significantly influences mean size estimation of a group of items, induced by a leftward attentional bias that enhances the prior-entry effect on the left side.
Can rarefaction be used to estimate song repertoire size in birds?
Directory of Open Access Journals (Sweden)
Kathleen R. PESHEK, Daniel T. BLUMSTEIN
2011-06-01
Full Text Available Song repertoire size is the number of distinct syllables, phrases, or song types produced by an individual or population. Repertoire size estimation is particularly difficult for species that produce highly variable songs and those that produce many song types. Estimating repertoire size is important for ecological and evolutionary studies of speciation, studies of sexual selection, and studies of how species may adapt their songs to various acoustic environments. There are several methods to estimate repertoire size; however, prior studies found that all but a full numerical count of song types may carry substantial inaccuracies. We evaluated a novel approach to estimating repertoire size: rarefaction, a technique ecologists use to measure species diversity at individual and population levels. Using the syllables within American robins' (Turdus migratorius) repertoires, we compared the most commonly used techniques for estimating repertoires to the results of a rarefaction analysis. American robins have elaborate and unique songs with few syllables shared between individuals, and there is no evidence that robins mimic their neighbors; thus, they are an ideal system in which to compare techniques. We found that the rarefaction results resembled those of the numerical count and were better than two alternative methods (behavioral accumulation curves and capture-recapture) for estimating syllable repertoire size. Future estimates of repertoire size, particularly in vocally complex species, may benefit from rarefaction techniques when numerical counts cannot be performed [Current Zoology 57(3): 300-306, 2011].
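Rarefaction of the kind described can be sketched by repeatedly subsampling the recorded syllable sequence and counting distinct types; the curve's approach to an asymptote indicates how completely the repertoire has been sampled. A toy example with hypothetical data (not the robin recordings):

```python
import random

def rarefaction_curve(syllables, n_resamples=200, seed=1):
    """Mean number of distinct syllable types found in random subsamples of
    each size, averaged over resamples (sampling without replacement)."""
    rng = random.Random(seed)
    sizes = list(range(1, len(syllables) + 1))
    curve = []
    for n in sizes:
        found = [len(set(rng.sample(syllables, n))) for _ in range(n_resamples)]
        curve.append(sum(found) / n_resamples)
    return sizes, curve

# Toy 'recording': 12 syllable types, some sung far more often than others
song = [1]*30 + [2]*20 + [3]*15 + [4]*10 + [5]*8 + [6]*6 + [7]*4 + [8]*3 \
     + [9]*2 + [10]*1 + [11]*1 + [12]*1
sizes, curve = rarefaction_curve(song)
# The curve rises toward the true repertoire size (12) as sampling effort grows
```

Plotting `curve` against `sizes` and checking whether it plateaus is the visual analogue of deciding that the repertoire has been fully sampled.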
Estimating the size of non-observed economy in Croatia using the MIMIC approach
Directory of Open Access Journals (Sweden)
Vjekoslav Klarić
2011-03-01
Full Text Available This paper gives a quick overview of the approaches that have been used in research on the shadow economy, starting with definitions of the terms "shadow economy" and "non-observed economy", with emphasis on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the non-observed economy. After a glance at the theory behind it, the MIMIC model is applied to the Croatian economy. Considering the described characteristics of the different methods, a previous estimate of the size of the non-observed economy in Croatia is chosen to provide benchmark values for the MIMIC model. Using those, estimates of the size of the non-observed economy in Croatia during the period 1998-2009 are obtained.
Estimation of mean grain size of seafloor sediments using neural network
Digital Repository Service at National Institute of Oceanography (India)
De, C.; Chakraborty, B.
The feasibility of an artificial neural network based approach is investigated to estimate the values of mean grain size of seafloor sediments using four dominant echo features, extracted from acoustic backscatter data. The acoustic backscatter data...
Estimating population sizes for elusive animals: the forest elephants of Kakum National Park, Ghana.
Eggert, L S; Eggert, J A; Woodruff, D S
2003-06-01
African forest elephants are difficult to observe in the dense vegetation, and previous studies have relied upon indirect methods to estimate population sizes. Using multilocus genotyping of noninvasively collected samples, we performed a genetic survey of the forest elephant population at Kakum National Park, Ghana. We estimated population size, sex ratio and genetic variability from our data, then combined this information with field observations to divide the population into age groups. Our population size estimate was very close to that obtained using dung counts, the most commonly used indirect method of estimating the population sizes of forest elephant populations. As their habitat is fragmented by expanding human populations, management will be increasingly important to the persistence of forest elephant populations. The data that can be obtained from noninvasively collected samples will help managers plan for the conservation of this keystone species.
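Genetic surveys of this kind commonly treat repeated detections of the same multilocus genotype as "recaptures". As an illustration only (the study's actual estimator may differ), a Chapman-corrected Lincoln-Petersen sketch with hypothetical genotype sets:

```python
def chapman_estimate(session1, session2):
    """Chapman's bias-corrected Lincoln-Petersen population estimator.
    session1, session2: sets of individual genotype IDs detected in two
    sampling sessions; overlap counts as recaptures."""
    n1, n2 = len(session1), len(session2)
    m = len(session1 & session2)          # genotypes seen in both sessions
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical data: genotype IDs from two dung-sampling sessions
s1 = set(range(0, 60))        # 60 individuals in session 1
s2 = set(range(40, 110))      # 70 individuals in session 2, 20 recaptured
estimate = chapman_estimate(s1, s2)   # (61 * 71 / 21) - 1, about 205
```

The appeal of noninvasive genotyping is precisely that it turns dung samples into individually identifiable "captures" without handling the animals.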
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms, used to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size, with the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with average relative size differences of 5% and -5% for the LoG and template-based methods, respectively.
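A multi-scale LoG search of this general kind can be sketched with scipy: the scale-normalized response s²·LoG peaks when the filter scale matches the blob, so scanning scales yields both location and size (for a Gaussian-shaped blob, the peak occurs at the blob's own sigma). All names and values below are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def best_log_scale(image, sigmas):
    """Return (sigma, (y, x)) of the strongest scale-normalized LoG response.
    Bright blobs give a negative LoG at their center, hence the sign flip."""
    best = None
    for s in sigmas:
        resp = -s**2 * gaussian_laplace(image.astype(float), s)
        idx = np.unravel_index(np.argmax(resp), resp.shape)
        if best is None or resp[idx] > best[0]:
            best = (resp[idx], s, idx)
    return best[1], best[2]

# Synthetic "nodule": a Gaussian blob of sigma 6 px centered at (40, 60)
yy, xx = np.mgrid[0:100, 0:120]
img = np.exp(-((yy - 40)**2 + (xx - 60)**2) / (2 * 6.0**2))
sigma, (y, x) = best_log_scale(img, sigmas=np.arange(2, 12, 0.5))
```

A full pipeline would, as the abstract describes, restrict the search to a window around the seed point and prune candidates by response and distance before picking the largest.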
Nuclear size comparison of even titanium isotopes using 140-MeV α-particle scattering
International Nuclear Information System (INIS)
Roberson, P.L.; Goldberg, D.A.; Wall, N.S.; Woo, L.W.; Chen, H.L.
1979-01-01
Systematic variations in nuclear-matter distributions have been determined by analyzing the measured elastic scattering of 140-MeV alpha particles from ⁴⁶,⁴⁸,⁵⁰Ti. The "unique" optical potentials obtained (J_R/4A ≈ 300 MeV fm³, J_I/4A ≈ 100 MeV fm³) indicate that isotopic differences occur primarily in the strength of the imaginary potential. The rms matter radii increase with A, in contrast to those of the charge distributions. The matter-radius results are in agreement with Hartree-Fock calculations
International Nuclear Information System (INIS)
Ghrayeb, R.; Purushotham, M.; Hou, M.; Bauer, E.
1987-01-01
Low-energy ion scattering is used in combination with computer simulation to study the interaction potential between 600-eV potassium ions and atoms in metallic surfaces. A special algorithm is described for use with the computer simulation code MARLOWE. This algorithm builds up impact areas on the simulated solid surface from which scattering cross sections can be estimated with an accuracy better than 1%, by calculating no more than a couple of thousand trajectories. The screening length in the Moliere approximation to the Thomas-Fermi potential is fitted so that the ratio between the calculated cross sections for double and single scattering matches the experimentally measured scattering intensity ratio associated with the same mechanisms. The consistency of the method is checked by repeating the procedure for different incidence conditions and by predicting the intensities associated with other surface scattering mechanisms. The screening length estimates are found to be insensitive to thermal vibrations. The calculated ratios between scattering cross sections for different processes are suggested to be sensitive enough to the relative atomic positions to be useful in surface-structure characterization
Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests
Directory of Open Access Journals (Sweden)
Bruno Giacomini Sari
2017-09-01
Full Text Available ABSTRACT: The aim of this study was to determine the sample size required to estimate the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The variables observed in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, and the Pearson correlation matrix between them was calculated. Sixty-eight sample sizes were planned for one greenhouse and 48 for the other, starting from an initial sample size of 10 plants, with the others obtained by adding five plants at a time. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-sampling with replacement. The sample size for each correlation coefficient was determined as that at which the amplitude of the 95% confidence interval was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation; accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse, and 200 plants in a 200 m² greenhouse.
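The bootstrap procedure described can be sketched directly: resample at each candidate size, compute the Pearson coefficient, and measure the amplitude of the 95% percentile interval. A toy version with synthetic data (not the tomato measurements) showing why weak relations demand larger samples:

```python
import numpy as np

def ci95_amplitude(x, y, n, n_boot=3000, rng=None):
    """Amplitude (upper - lower) of the percentile 95% bootstrap confidence
    interval of the Pearson correlation, for resamples of size n drawn with
    replacement from the paired observations."""
    rng = rng or np.random.default_rng(42)
    r = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=n)
        r.append(np.corrcoef(x[idx], y[idx])[0, 1])
    lo, hi = np.percentile(r, [2.5, 97.5])
    return hi - lo

# Synthetic "plants": a strong and a weak linear relation
gen = np.random.default_rng(0)
x = gen.normal(size=400)
y_strong = 0.9 * x + 0.2 * gen.normal(size=400)   # r near 0.98
y_weak = 0.3 * x + 1.0 * gen.normal(size=400)     # r near 0.29
amp_strong = ci95_amplitude(x, y_strong, n=50)
amp_weak = ci95_amplitude(x, y_weak, n=50)
# amp_weak is far larger: the weak relation needs many more plants to reach
# the 0.4 amplitude criterion used in the study
```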
Chaudry, Beenish Moalla; Connelly, Kay; Siek, Katie A; Welch, Janet L
2013-12-01
Chronically ill people, especially those with low literacy skills, often have difficulty estimating portion sizes of liquids to help them stay within their recommended fluid limits. There is a plethora of mobile applications that can help people monitor their nutritional intake, but unfortunately these applications require the user to have high literacy and numeracy skills for portion size recording. In this paper, we present two studies in which low- and high-fidelity versions of a portion size estimation interface, designed using the cognitive strategies adults employ for portion size estimation during diet recall studies, were evaluated by a chronically ill population with varying literacy skills. The low-fidelity interface was evaluated by ten patients, all of whom were able to accurately estimate portion sizes of various liquids with the interface. Eighteen participants then performed an in situ evaluation of the high-fidelity version, incorporated in a diet and fluid monitoring mobile application, for 6 weeks. Although the accuracy of estimation could not be confirmed in the second study, the participants who actively interacted with the interface showed better health outcomes by the end of the study. Based on these findings, we provide recommendations for designing the next iteration of an accurate and low-literacy-accessible liquid portion size estimation mobile interface.
ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD
Directory of Open Access Journals (Sweden)
Yuri Gulbin
2011-05-01
Full Text Available The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing the expected grain size distribution on the basis of intersection size histogram data. To review these questions, computer modeling was used to compare size distributions obtained stereologically with those of three-dimensional model aggregates of grains with a specified shape and random size. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in the estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
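The back-substitution unfolding under discussion can be sketched for spheres: the probability that a random planar cut of a sphere of diameter D lands in a given section-size class follows from Wicksell's geometry, and the resulting triangular system is solved from the largest class downward. A compact sketch (class handling simplified; published Saltykov coefficient tables differ in detail):

```python
import numpy as np

def p_section(D, lo, hi):
    """P(section diameter in [lo, hi]) for a sphere of diameter D cut by a
    uniformly random plane; zero if the class lies above D."""
    lo, hi = min(lo, D), min(hi, D)
    return (np.sqrt(D**2 - lo**2) - np.sqrt(D**2 - hi**2)) / D

def saltykov_unfold(n_area, edges):
    """Back-substitution unfolding: n_area[i] = sections per unit area in
    size class i; returns spheres per unit volume per class. Spheres in
    class j are represented by the class upper edge D_j, and produce
    N_V * D_j * p_section sections per unit area (D_j = mean caliper)."""
    k = len(n_area)
    counts = np.array(n_area, float)
    n_vol = np.zeros(k)
    for j in range(k - 1, -1, -1):            # largest class first
        D = edges[j + 1]
        n_vol[j] = counts[j] / (D * p_section(D, edges[j], edges[j + 1]))
        for i in range(j):                    # remove their smaller cuts
            counts[i] -= n_vol[j] * D * p_section(D, edges[i], edges[i + 1])
    return n_vol

# Example: observed section counts per unit area in four size classes
n_area = [4.0, 3.0, 2.0, 1.0]
edges = [0.0, 1.0, 2.0, 3.0, 4.0]
n_vol = saltykov_unfold(n_area, edges)   # spheres per unit volume per class
```

The triangular structure is exactly why noise in the large-class counts propagates into the small classes, the ill-conditioning the paper examines.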
The international food unit: a new measurement aid that can improve portion size estimation.
Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M
2017-09-12
Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulty estimating food portion sizes and are confused by inconsistencies in the measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4×4×4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments, and the cubic shape facilitates portion size education and training, memory and recall, and computer processing, which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated the volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling-clay cube, a household measuring cup, or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p < 0.001). Future studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.
Short Communication Estimation of size at first maturity in two South ...
African Journals Online (AJOL)
Short Communication Estimation of size at first maturity in two South African coral species. PH Montoya-Maya, AHH Macdonald, MH Schleyer. African Journal of Marine Science. ... to differentiate juveniles from adult sizes of corals, an important factor for assessing the condition of scleractinian communities in reefs. Here ...
Moore, R. K.; Fung, A. K.; Dome, G. J.; Birrer, I. J.
1978-01-01
The wind direction properties of radar backscatter from the sea were empirically modelled using a cosine Fourier series through the 4th harmonic in wind direction (referenced to upwind). A comparison with 1975 JONSWAP (Joint North Sea Wave Project) scatterometer data, at incidence angles of 40° and 65°, indicates that the third and fourth harmonics are negligible. Another important result is that the Fourier coefficients through the second harmonic are related to wind speed by a power-law expression. A technique is also proposed to estimate wind speed and direction over the ocean from two orthogonal scattering measurements. A comparison between two different types of sea scatter theories, one represented by the work of Wright and the other by that of Chan and Fung, was made with recent scatterometer measurements. It demonstrates that a complete scattering model must include some provision for the anisotropic characteristics of sea scatter, and use a sea spectrum which depends upon wind speed.
DEFF Research Database (Denmark)
Jimenez Mena, Belen; Verrier, Etienne; Hospital, Frederic
an increase in the variability of values over time. The distance from the mean and the median to the true Ne increased over time too. This was caused by the fixation of alleles through time due to genetic drift and the changes in the distribution of allele frequencies. We compared the three estimators of Ne...
Pollack, J. B.; Cuzzi, J. N.
1978-01-01
Mie theory, which is generally used to describe the scattering behavior of particles at a given wavelength, is rigorously correct only for spherical particles. Particles found as atmospheric constituents, with the exception of cloud droplets, are, however, decidedly nonspherical. An investigation is therefore conducted into the significant ways in which the scattering behavior of irregularly shaped particles differs from that of spheres. A systematic method is formulated for treating the real scalar scattering behavior. A new semiempirical theory is described, based on simple physical principles and data obtained in laboratory measurements, which successfully reproduces the single-scattering phase function for a wide range of particle shapes, sizes, and refractive indices.
Evidence of scattering effects on the sizes of interplanetary Type III radio bursts
Steinberg, J. L.; Hoang, S.; Dulk, G. A.
1985-01-01
An analysis is conducted of 162 interplanetary Type III radio bursts; some of these bursts have been observed in association with fast electrons and Langmuir wave events at 1 AU and, in addition, have been subjected to in situ plasma parameter measurements. It is noted that the sizes of burst sources are anomalously large, compared to what one would anticipate on the basis of the interplanetary plasma density distribution, and that the variation of source size with frequency, when compared with the plasma frequency variation measured in situ, implies that the source sizes expand with decreasing frequency to fill a cone whose apex is at the sun. It is also found that some local phenomenon near the earth controls the apparent size of low frequency Type III sources.
Modeling grain-size dependent bias in estimating forest area: a regional application
Daolan Zheng; Linda S. Heath; Mark J. Ducey
2008-01-01
A better understanding of scaling-up effects on estimating important landscape characteristics (e.g. forest percentage) is critical for improving ecological applications over large areas. This study illustrated effects of changing grain sizes on regional forest estimates in Minnesota, Wisconsin, and Michigan of the USA using 30-m land-cover maps (1992 and 2001)...
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and
A scattering methodology for droplet sizing of e-cigarette aerosols.
Pratte, Pascal; Cosandey, Stéphane; Goujon-Ginglinger, Catherine
2016-10-01
Knowledge of the droplet size distribution of inhalable aerosols is important for predicting aerosol deposition yield at various respiratory tract locations in humans. Optical methodologies are usually preferred over the multi-stage cascade impactor for high-throughput measurements of aerosol particle/droplet size distributions. The objective was to evaluate the Laser Aerosol Spectrometer technology, based on a polystyrene latex (PSL) sphere calibration curve, for the experimental determination of droplet size distributions in the diameter range typical of commercial e-cigarette aerosols (147-1361 nm). This calibration procedure was tested for a TSI Laser Aerosol Spectrometer (LAS) operating at a wavelength of 633 nm and assessed against model di-ethyl-hexyl-sebacat (DEHS) droplets and e-cigarette aerosols. The PSL size response was measured, and intra- and between-day standard deviations were calculated. DEHS droplet sizes were underestimated by 15-20% by the LAS when the PSL calibration curve was used, with low intra- and between-day relative standard deviations. Measured droplet sizes of e-cigarette aerosols ranged from 130-191 nm to 225-293 nm, similar to published values. The LAS instrument can be used to measure e-cigarette aerosol droplet size distributions with a bias underestimating the expected value by 15-20% when using a precise PSL calibration curve. Controlled variability of DEHS size measurements can be achieved with the LAS system; however, this method can only be applied to test aerosols having a refractive index close to that of the PSL particles used for calibration.
Improving accuracy of portion-size estimations through a stimulus equivalence paradigm.
Hausman, Nicole L; Borrero, John C; Fisher, Alyssa; Kahng, SungWoo
2014-01-01
The prevalence of obesity continues to increase in the United States (Gordon-Larsen, The, & Adair, 2010). Obesity can be attributed, in part, to overconsumption of energy-dense foods. Given that overeating plays a role in the development of obesity, interventions that teach individuals to identify and consume appropriate portion sizes are warranted. Specifically, interventions that teach individuals to estimate portion sizes correctly without the use of aids may be critical to the success of nutrition education programs. The current study evaluated the use of a stimulus equivalence paradigm to teach 9 undergraduate students to estimate portion size accurately. Results suggested that the stimulus equivalence paradigm was effective in teaching participants to make accurate portion size estimations without aids, and improved accuracy was observed in maintenance sessions that were conducted 1 week after training. Furthermore, 5 of 7 participants estimated the target portion size of novel foods during extension sessions. These data extend existing research on teaching accurate portion-size estimations and may be applicable to populations who seek treatment (e.g., overweight or obese children and adults) to teach healthier eating habits. © Society for the Experimental Analysis of Behavior.
Gluttonous predators: how to estimate prey size when there are too many prey
Directory of Open Access Journals (Sweden)
MS. Araújo
Full Text Available Prey size is an important factor in food consumption. In studies of feeding ecology, prey items are usually measured individually using calipers or ocular micrometers. Among amphibians and reptiles, there are species that feed on large numbers of small prey items (e.g. ants, termites). This high intake makes it difficult to estimate the size of the prey consumed by these animals. We addressed this problem by developing and evaluating a procedure for subsampling the stomach contents of such predators in order to estimate prey size. Specifically, we developed a protocol based on a bootstrap procedure to obtain a subsample with a precision error of at most 5%, at a confidence level of at least 95%. This guideline should reduce the sampling effort and facilitate future studies on the feeding habits of amphibians and reptiles, and also provide a means of obtaining precise estimates of prey size.
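A bootstrap subsampling protocol of the general kind described can be sketched as: grow the subsample until the estimate stays within tolerance of the full-stomach value in at least 95% of resamples. The data, tolerance, and step size below are illustrative, not the authors' protocol:

```python
import random
import statistics

def required_subsample(prey_lengths, max_rel_error=0.05, conf=0.95,
                       n_boot=1000, seed=7):
    """Smallest subsample size (in steps of 5) whose bootstrap mean prey size
    falls within max_rel_error of the full-stomach mean in at least `conf`
    of resamples (resampling with replacement)."""
    rng = random.Random(seed)
    true_mean = statistics.mean(prey_lengths)
    for n in range(5, len(prey_lengths) + 1, 5):
        ok = 0
        for _ in range(n_boot):
            sub = [rng.choice(prey_lengths) for _ in range(n)]
            if abs(statistics.mean(sub) - true_mean) / true_mean <= max_rel_error:
                ok += 1
        if ok / n_boot >= conf:
            return n
    return len(prey_lengths)

# Toy stomach: 400 ants with lengths drawn around 4.0 mm (sd 0.8 mm)
gen = random.Random(0)
ants = [max(1.0, gen.gauss(4.0, 0.8)) for _ in range(400)]
n_needed = required_subsample(ants)   # measure only this many items
```

In practice one would apply the criterion to whichever prey-size statistic the study reports (mean, distribution quantiles, etc.), not only the mean.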
Pham, Trang T. T.; Mathews, Nripan; Lam, Yeng-Ming; Mhaisalkar, Subodh
2018-03-01
Sub-micrometer cavities have been incorporated into the TiO2 photoanode of a dye-sensitized solar cell to enhance its optical properties through light scattering. These are large pores, several hundred nanometers in size, that scatter incident light due to the difference in refractive index between the scattering center and the surrounding material, in accordance with Mie theory. The pores are created using polystyrene (PS) or zinc oxide (ZnO) templates, reported previously to result in ellipsoidal and spherical shapes, respectively. The effect of the size and shape of the scattering center was modeled using numerical finite-difference time-domain (FDTD) analysis. The scattering cross-section was not affected significantly by shape if the total displacement volume of the scattering center was comparable. Experiments were carried out to evaluate the optical properties for varying sizes of ZnO templates. The photovoltaic performance of dye-sensitized solar cells made from these ZnO-templated films was investigated with incident-photon-to-current efficiency measurements to understand the effect of scattering center size on absorption enhancement. With 380 nm macropores incorporated, the power conversion efficiency increased by 11%, mostly owing to improved current density, while the 170 nm and 500 nm macropore samples showed no increase over a sufficiently wide range of absorbing wavelengths.
A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment
Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.
2017-12-01
Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have addressed this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. To provide a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of a sediment mixture to estimate the PSDs of the entrained sediment and the post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparison with the size-dependent mobilities predicted by the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, a critical particle size of incipient motion modified to account for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of size-dependent sediment mobility.
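The threshold split at the heart of this approach can be illustrated with scipy's lognormal distribution. The grain sizes below are illustrative, and a full application would also carry out the mixture step to recover both post-entrainment PSDs:

```python
import numpy as np
from scipy.stats import lognorm

def size_dependent_mobility(mu, sigma, d_threshold):
    """Fraction of bed sediment finer than the threshold size of incipient
    motion, for a lognormal PSD with log-mean mu and log-std sigma.
    In this simplified sketch, grains below the threshold are treated as
    entrained and grains above it as immobile."""
    dist = lognorm(s=sigma, scale=np.exp(mu))
    return dist.cdf(d_threshold)

# Illustrative bed PSD: median grain size 2 mm, log-std 0.6
frac_entrained = size_dependent_mobility(mu=np.log(2.0), sigma=0.6,
                                         d_threshold=2.0)
print(frac_entrained)   # → 0.5: a threshold at the median mobilizes half the bed
```

Shifting `d_threshold` (e.g. for a stronger flow) sweeps this fraction continuously, which is what makes the lognormal form convenient for a heuristic mobility estimate.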
Energy Technology Data Exchange (ETDEWEB)
Monsefi, Farid [Division of Applied Mathematics, The School of Education, Culture and Communication, Mälardalen University, MDH, Västerås, Sweden and School of Innovation, Design and Engineering, IDT, Mälardalen University, MDH Väs (Sweden); Carlsson, Linus; Silvestrov, Sergei [Division of Applied Mathematics, The School of Education, Culture and Communication, Mälardalen University, MDH, Västerås (Sweden); Rančić, Milica [Division of Applied Mathematics, The School of Education, Culture and Communication, Mälardalen University, MDH, Västerås, Sweden and Department of Theoretical Electrical Engineering, Faculty of Electronic Engineering, University (Serbia); Otterskog, Magnus [School of Innovation, Design and Engineering, IDT, Mälardalen University, MDH Västerås (Sweden)
2014-12-10
To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell’s curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem in which a lumped sinusoidal current source, acting as the source of electromagnetic radiation, is included inside the boundary. Within the boundary, a specific kind of Absorbing Boundary Condition (ABC) is chosen, and the outside of the boundary takes the form of a Perfect Electric Conducting (PEC) surface. A semi-norm is applied in the computer implementation to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be free space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. Numerical instability of the algorithms can be readily avoided by respecting the Courant stability condition, which is frequently used when applying the general FDTD algorithm.
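The Courant stability condition invoked above ties the largest usable time step to the spatial steps. A minimal sketch for the two-dimensional case (the 1 mm grid spacing is an arbitrary example):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def max_stable_dt(dx, dy, c=C0):
    """Largest time step satisfying the 2-D FDTD Courant condition
    c * dt * sqrt(1/dx^2 + 1/dy^2) <= 1."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2))

dt = max_stable_dt(1e-3, 1e-3)  # square 1 mm grid
```

Any time step at or below this value keeps the explicit leapfrog update stable; refining the grid to study convergence, as in this work, forces a proportionally smaller time step.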
International Nuclear Information System (INIS)
Monsefi, Farid; Carlsson, Linus; Silvestrov, Sergei; Rančić, Milica; Otterskog, Magnus
2014-01-01
To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell’s curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem in which a lumped sinusoidal current source, acting as the source of electromagnetic radiation, is included inside the boundary. Within the boundary, a specific kind of Absorbing Boundary Condition (ABC) is chosen, and the outside of the boundary takes the form of a Perfect Electric Conducting (PEC) surface. A semi-norm is applied in the computer implementation to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be free space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. Numerical instability of the algorithms can be readily avoided by respecting the Courant stability condition, which is frequently used when applying the general FDTD algorithm.
Jonas, A. M.; Legras, R.; Ferain, E.
1998-03-01
Nanoporous track-etched membranes with narrow pore size distributions and average pore diameters tunable from 100 to 1000 Å are produced by the chemical etching of latent tracks in polymer films after irradiation by a beam of accelerated heavy ions. Nanoporous membranes are used for highly demanding filtration purposes, or as templates to obtain metallic or polymeric nanowires (L. Piraux et al., Nucl. Instr. Meth. Phys. Res. 1997, B131, 357). Such applications call for developments in nanopore size characterization techniques. In this respect, we report on the characterization by small-angle X-ray scattering (SAXS) of the nanopore size distribution (nPSD) in polycarbonate track-etched membranes. Obtaining the nPSD requires inverting an ill-conditioned inhomogeneous equation. We present different numerical routes to overcome the amplification of experimental errors in the resulting solutions, including a regularization technique that allows the nPSD to be obtained without a priori knowledge of its shape. The effect of deviations from a cylindrical pore shape on the resulting distributions is analyzed. Finally, the SAXS results are compared to results obtained by electron microscopy and conductometry.
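Ill-conditioned inversions of this kind are commonly stabilized by Tikhonov regularization. The sketch below is generic (a toy smoothing kernel and an arbitrary regularization weight, not the authors' scheme): it solves min ||Ax - b||^2 + lam ||x||^2 and clips the result to non-negative values, as a size distribution must be.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Regularized least squares: solve (A^T A + lam*I) x = A^T b,
    then clip to enforce a non-negative size distribution."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return np.clip(x, 0.0, None)

# Toy ill-conditioned problem: a smooth kernel makes columns nearly dependent
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
A = np.exp(-np.subtract.outer(t, t) ** 2 / 0.02)   # smoothing kernel
x_true = np.exp(-(t - 0.5) ** 2 / 0.01)            # "true" size distribution
b = A @ x_true + 1e-3 * rng.standard_normal(40)    # noisy scattering data
x_reg = tikhonov_solve(A, b, lam=1e-3)
```

Without the lam*I term the small singular values of A amplify the 0.1% noise into wild oscillations; with it, the recovered distribution stays close to the true peak. Choosing lam (e.g. by an L-curve or discrepancy principle) is the practical difficulty.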
Directory of Open Access Journals (Sweden)
Lixin Xia
2014-01-01
Full Text Available A well-designed type of micron-sized hollow silver sphere was successfully synthesized by a simple hard-template method for use as a substrate for surface-enhanced Raman scattering (SERS). 4 Å molecular sieves were employed as a removable solid template. [Ag(NH3)2]+ was adsorbed as the precursor on the surface of the molecular sieve. Formaldehyde was selected as the reducing agent for [Ag(NH3)2]+, resulting in the formation of a micron-sized silver shell on the surface of the 4 Å molecular sieves. The micron-sized hollow silver spheres were obtained by removing the molecular sieve template. SEM and XRD were used to characterize the structure of the micron-sized hollow silver spheres. The as-prepared micro-silver spheres exhibited robust SERS activity in the presence of adsorbed 4-mercaptobenzoic acid (4-MBA) with excitation at 632.8 nm, and the enhancement factor reached ~1.5 × 10^6. This synthetic process represents a promising method for preparing various hollow metal nanoparticles.
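An enhancement factor like the ~1.5 × 10^6 quoted here is conventionally computed as the SERS-to-normal intensity ratio normalized by the number of molecules probed in each measurement. A sketch with purely hypothetical counts (the actual intensities and molecule numbers are not given in the abstract):

```python
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """Surface-enhanced Raman enhancement factor:
    EF = (I_SERS / N_SERS) / (I_ref / N_ref)."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Hypothetical values: a modest SERS signal from far fewer adsorbed molecules
ef = enhancement_factor(i_sers=3.0e4, n_sers=1.0e6, i_ref=2.0e3, n_ref=1.0e11)
```

The point of the normalization is that a SERS substrate probes only the monolayer of adsorbed analyte, so a raw intensity comparison against a bulk reference would grossly understate the per-molecule enhancement.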
A model to estimate the size of nanoparticle agglomerates in gas−solid fluidized beds
Energy Technology Data Exchange (ETDEWEB)
Martín, Lilian de, E-mail: L.DeMartinMonton@tudelft.nl; Ommen, J. Ruud van [Delft University of Technology, Department of Chemical Engineering (Netherlands)
2013-11-15
The estimation of the size of nanoparticle agglomerates in fluidized beds remains an open challenge, mainly due to the difficulty of characterizing the inter-agglomerate van der Waals force. The current approach is to describe micron-sized nanoparticle agglomerates as micron-sized particles with 0.1–0.2-μm asperities. This simplification does not capture the influence of the particle size on the van der Waals attraction between agglomerates. In this paper, we propose a new description in which the agglomerates are micron-sized particles with nanoparticles on the surface acting as asperities. As opposed to previous models, here the van der Waals force between agglomerates decreases with an increase in the particle size. We have also included an additional force due to hydrogen bond formation between the surfaces of hydrophilic and dry nanoparticles. The average size of the fluidized agglomerates has been estimated by equating the attractive force obtained from this method to the weight of the individual agglomerates. The results have been compared with 54 experimental values, most of them collected from the literature. Our model approximates the size of most of the nanopowders without systematic error, both in conventional and centrifugal fluidized beds, outperforming current models. Although simple, the model is able to capture the influence of the nanoparticle size, particle density, and Hamaker coefficient on the inter-agglomerate forces.
A model to estimate the size of nanoparticle agglomerates in gas−solid fluidized beds
International Nuclear Information System (INIS)
Martín, Lilian de; Ommen, J. Ruud van
2013-01-01
The estimation of the size of nanoparticle agglomerates in fluidized beds remains an open challenge, mainly due to the difficulty of characterizing the inter-agglomerate van der Waals force. The current approach is to describe micron-sized nanoparticle agglomerates as micron-sized particles with 0.1–0.2-μm asperities. This simplification does not capture the influence of the particle size on the van der Waals attraction between agglomerates. In this paper, we propose a new description in which the agglomerates are micron-sized particles with nanoparticles on the surface acting as asperities. As opposed to previous models, here the van der Waals force between agglomerates decreases with an increase in the particle size. We have also included an additional force due to hydrogen bond formation between the surfaces of hydrophilic and dry nanoparticles. The average size of the fluidized agglomerates has been estimated by equating the attractive force obtained from this method to the weight of the individual agglomerates. The results have been compared with 54 experimental values, most of them collected from the literature. Our model approximates the size of most of the nanopowders without systematic error, both in conventional and centrifugal fluidized beds, outperforming current models. Although simple, the model is able to capture the influence of the nanoparticle size, particle density, and Hamaker coefficient on the inter-agglomerate forces.
Schrago, Carlos G
2014-08-01
Reliable estimates of ancestral effective population sizes are necessary to unveil the population-level phenomena that shaped the phylogeny and molecular evolution of the African great apes. Although several methods have previously been applied to infer ancestral effective population sizes, the influence of the selective regime on estimates of ancestral demography has not been thoroughly analyzed. In this study, three independent data sets under different selective regimes were composed to tackle this issue. The results showed that selection had a significant impact on the estimates of the ancestral effective population sizes of the African great apes. The effects, however, were not homogeneous across the ancestral populations. The effective population size of the ancestor of humans and chimpanzees was more strongly affected by the selection regime than the same parameter in the ancestor of humans, chimpanzees, and gorillas. Because the selection regime influenced the estimates of ancestral effective population size, it is reasonable to assume that a portion of the discrepancy found in previous studies that inferred the ancestral effective population size may be attributable to the differential action of selection on the genes sampled.
Optimum sample size to estimate mean parasite abundance in fish parasite surveys
Directory of Open Access Journals (Sweden)
Shvydka S.
2018-03-01
Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches to sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundances of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Sea localities were subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for finding the optimum sample size and understanding the expected precision level of the mean. Given the superior performance of the BLB relative to the formulae, and its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to CI widths of 1.6 and 1 times the mean; and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. For host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed towards low values; a sample size of 10 host individuals yielded unreliable estimates.
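The formula-based route mentioned above is commonly the classic negative-binomial sample-size expression n = (1/D^2)(1/m + 1/k), which gives the number of hosts needed to estimate a mean abundance m with relative precision D when counts have dispersion k. The paper's exact formula may differ; this is the standard textbook form:

```python
import math

def nb_sample_size(mean, k, precision):
    """Hosts needed to estimate mean parasite abundance with relative
    precision `precision` (standard error / mean), assuming negative
    binomial counts with mean `mean` and dispersion parameter `k`."""
    return math.ceil((1.0 / precision**2) * (1.0 / mean + 1.0 / k))

# Highly aggregated counts (small k) demand many more hosts
n = nb_sample_size(mean=2.0, k=0.5, precision=0.2)
```

The 1/k term is what drives sample sizes up for aggregated parasites: as k shrinks toward zero with the mean held fixed, the required n grows without bound, consistent with the paper's finding that 80 or more hosts were needed.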
Effects of sample size on estimates of population growth rates calculated with matrix models.
Directory of Open Access Journals (Sweden)
Ian J Fiske
Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
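The sampling-variance effect can be reproduced in a small simulation: estimate survival rates from n individuals via binomial draws, build a matrix, and look at the resulting distribution of dominant eigenvalues. The two-stage model and fecundity value below are a toy chosen for illustration, not the authors' study system; only the survival of 0.5 is taken from the abstract.

```python
import numpy as np

def lambda_from_rates(s_juv, s_adult, fecundity=1.5):
    """Dominant eigenvalue (population growth rate) of a two-stage
    juvenile/adult projection matrix (fecundity value is illustrative)."""
    A = np.array([[0.0, fecundity],
                  [s_juv, s_adult]])
    return float(max(abs(np.linalg.eigvals(A))))

def sampled_lambdas(n, s_juv=0.5, s_adult=0.5, reps=2000, seed=1):
    """Lambdas obtained when each survival rate is estimated from n
    marked individuals (binomial sampling variance in the vital rates)."""
    rng = np.random.default_rng(seed)
    return np.array([lambda_from_rates(rng.binomial(n, s_juv) / n,
                                       rng.binomial(n, s_adult) / n)
                     for _ in range(reps)])

true_lambda = lambda_from_rates(0.5, 0.5)
lams_small = sampled_lambdas(10)    # 10 individuals per rate
lams_large = sampled_lambdas(500)   # 500 individuals per rate
```

Because lambda is a nonlinear function of the vital rates, the small-sample estimates scatter widely around the true value and their mean can drift from it (Jensen's inequality); with 500 individuals the spread collapses, mirroring the paper's conclusion that the bias becomes negligible at realistic sample sizes.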
Collagen Orientation and Crystallite Size in Human Dentin: A Small Angle X-ray Scattering Study
Energy Technology Data Exchange (ETDEWEB)
Pople, John A
2001-03-29
The mechanical properties of dentin are largely determined by the intertubular dentin matrix, which is a complex composite of type I collagen fibers and a carbonate-rich apatite mineral phase. The authors perform a small angle x-ray scattering (SAXS) study on fully mineralized human dentin to quantify this fiber/mineral composite architecture from the nanoscopic through continuum length scales. The SAXS results were consistent with nucleation and growth of the apatite phase within periodic gaps in the collagen fibers. These mineralized fibers were perpendicular to the dentinal tubules and parallel with the mineralization growth front. Within the plane of the mineralization front, the mineralized collagen fibers were isotropic near the pulp, but became mildly anisotropic in the mid-dentin. Analysis of the data also indicated that near the pulp the mineral crystallites were approximately needle-like, and progressed to a more plate-like shape near the dentino-enamel junction. The thickness of these crystallites, ~5 nm, did not vary significantly with position in the tooth. These results were considered within the context of dentinogenesis and maturation.
International Nuclear Information System (INIS)
Torres-Espallardo, I; Spanoudaki, V; Ziegler, S I; Rafecas, M; McElroy, D P
2008-01-01
Random coincidences can contribute substantially to the background in positron emission tomography (PET). Several estimation methods are used to correct for them. The goal of this study was to investigate the validity of techniques for random coincidence estimation at various low-energy thresholds (LETs). Simulated singles list-mode data of the MADPET-II small animal PET scanner were used as input. The simulations were performed using the GATE simulation toolkit, employing several sources with different geometries. We evaluated the number of random events using three methods: delayed window (DW), singles rate (SR) and time histogram fitting (TH). Since the GATE simulations allow random and true coincidences to be distinguished, the number of random coincidences estimated using the standard methods was compared with the number obtained from GATE. An overestimation of the number of random events was observed for the DW and SR methods. This overestimation decreases for LETs above 255 keV, and is further reduced when single events that have undergone a Compton interaction in the crystals before detection are removed from the data. These two observations lead us to infer that the overestimation is due to inter-crystal scatter. The effect of this mismatch on the reconstructed images is important for quantification because it leads to an underestimation of activity. This was shown using a hot-cold-background source with 3.7 MBq total activity in the background region and 1.59 MBq total activity in the hot region. For both 200 keV and 400 keV LETs, an overestimation of random coincidences for the DW and SR methods was observed, resulting in an underestimation of activity within the background region of approximately 1.5% or more at 200 keV LET (1.7% for DW and 7% for SR) and less than 1% at 400 keV LET (both methods). In almost all cases, images obtained by compensating for random events in the reconstruction
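The singles-rate (SR) method referred to above uses the standard relation R = 2*tau*S1*S2 for a detector pair: with singles rates S1 and S2 and coincidence window tau, accidental pairings occur at that rate. A minimal sketch with illustrative rates:

```python
def randoms_rate(s1, s2, tau):
    """Expected random-coincidence rate (counts/s) for a detector pair
    with singles rates s1, s2 (counts/s) and coincidence window tau (s):
    R = 2 * tau * s1 * s2."""
    return 2.0 * tau * s1 * s2

# Illustrative values: 100 kcps singles on each detector, 4 ns window
r = randoms_rate(s1=1.0e5, s2=1.0e5, tau=4e-9)
```

Because R grows with the product of the singles rates, any contamination of the singles (such as the inter-crystal scatter identified in this study) inflates the estimate quadratically, which is consistent with the observed overestimation at low energy thresholds.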
Energy Technology Data Exchange (ETDEWEB)
Larson, David B. [Stanford University School of Medicine, Department of Radiology, Stanford, CA (United States)
2014-10-15
The principle of ALARA (dose as low as reasonably achievable) calls for dose optimization rather than dose reduction, per se. Optimization of CT radiation dose is accomplished by producing images of acceptable diagnostic image quality using the lowest dose method available. Because it is image quality that constrains the dose, CT dose optimization is primarily a problem of image quality rather than radiation dose. Therefore, the primary focus in CT radiation dose optimization should be on image quality. However, no reliable direct measure of image quality has been developed for routine clinical practice. Until such measures become available, size-specific dose estimates (SSDE) can be used as a reasonable image-quality estimate. The SSDE method of radiation dose optimization for CT abdomen and pelvis consists of plotting SSDE for a sample of examinations as a function of patient size, establishing an SSDE threshold curve based on radiologists' assessment of image quality, and modifying protocols to consistently produce doses that are slightly above the threshold SSDE curve. Challenges in operationalizing CT radiation dose optimization include data gathering and monitoring, managing the complexities of the numerous protocols, scanners and operators, and understanding the relationship of the automated tube current modulation (ATCM) parameters to image quality. Because CT manufacturers currently maintain their ATCM algorithms as secret for proprietary reasons, prospective modeling of SSDE for patient populations is not possible without reverse engineering the ATCM algorithm and, hence, optimization by this method requires a trial-and-error approach. (orig.)
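An SSDE value is obtained by scaling the scanner-reported CTDIvol with a size-dependent conversion factor; AAPM Report 204 tabulates factors that are well approximated by an exponential in the patient's effective diameter. The coefficients below are illustrative placeholders only, not the report's fitted values, so this sketch shows the shape of the calculation rather than clinically usable numbers:

```python
import math

def ssde(ctdivol_mgy, eff_diam_cm, a=3.7, b=0.037):
    """Size-specific dose estimate (mGy): SSDE = f(size) * CTDIvol with a
    conversion factor f = a * exp(-b * D). Coefficients a and b here are
    illustrative; use the AAPM Report 204 fit in practice."""
    return a * math.exp(-b * eff_diam_cm) * ctdivol_mgy

dose_small = ssde(ctdivol_mgy=10.0, eff_diam_cm=20.0)   # smaller patient
dose_large = ssde(ctdivol_mgy=10.0, eff_diam_cm=35.0)   # larger patient
```

For the same CTDIvol a smaller patient absorbs a higher dose, which is why SSDE must be plotted against patient size, as described above, before a threshold curve can be drawn.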
Directory of Open Access Journals (Sweden)
G. Zhao
2011-03-01
Full Text Available During the intensive observation period of the Watershed Allied Telemetry Experimental Research (WATER), a total of 1074 raindrop size distributions were measured by the Parsivel disdrometer, a state-of-the-art optical laser instrument. Because observation data are limited on the Qinghai-Tibet Plateau, modelling for this region has not been well developed. We used the raindrop size distributions to improve the rain rate estimator of meteorological radar in order to obtain more accurate rain rate data for this area. We obtained the relationship between the terminal velocity (m/s) and the diameter (mm) of a raindrop: v(D) = 4.67D^{0.53}. Four types of estimators for X-band polarimetric radar were then examined. The simulation results show that the classical estimator R(Z_{H}) is the most sensitive to variations in DSD, and that R(K_{DP}, Z_{H}, Z_{DR}) is the best estimator of rain rate. An X-band polarimetric radar (714XDP) was used to verify these estimators. The low sensitivity of the estimator R(K_{DP}, Z_{H}, Z_{DR}) to variations in DSD can be explained by the following facts. The difference between the forward-scattering amplitudes at horizontal and vertical polarizations, which contributes to K_{DP}, is proportional to the 3rd power of the drop diameter, whereas the backscatter cross-section, which contributes to Z_{H}, is proportional to the 6th power of the drop diameter. Because the rain rate R is proportional to the 3.57th power of the drop diameter, K_{DP} is less sensitive to DSD variations than Z_{H}.
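With the fitted fall-speed law v(D) = 4.67 D^0.53 (m/s, D in mm), the rain rate follows from integrating the drop size distribution. A discretized sketch; the exponential DSD below is a hypothetical Marshall-Palmer-like example, not WATER data:

```python
import math

def rain_rate(diams_mm, nd_per_mm_m3, bin_mm=0.2):
    """Rain rate (mm/h) from a binned DSD N(D) in m^-3 mm^-1, using the
    fall-speed law v(D) = 4.67 * D**0.53 (m/s) fitted in this study:
    R = (pi/6) * 3.6e-3 * sum(N(D) * v(D) * D^3 * dD)."""
    total = 0.0
    for D, N in zip(diams_mm, nd_per_mm_m3):
        v = 4.67 * D ** 0.53
        total += N * v * D ** 3 * bin_mm
    return (math.pi / 6.0) * 3.6e-3 * total

# Hypothetical exponential DSD: N(D) = N0 * exp(-Lambda * D)
diams = [0.2 * (i + 1) for i in range(30)]          # 0.2 .. 6.0 mm
nd = [8000.0 * math.exp(-2.0 * D) for D in diams]
R = rain_rate(diams, nd)
```

The D^3 weighting inside the sum is exactly why the abstract's argument works: R shares the low-order diameter dependence of K_DP (3rd power) far more closely than that of Z_H (6th power), so the K_DP-based estimator tracks R across DSD variations.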
Greenbaum, Gili; Renan, Sharon; Templeton, Alan R; Bouskila, Amos; Saltz, David; Rubenstein, Daniel I; Bar-David, Shirli
2017-12-22
Effective population size, a central concept in conservation biology, is now routinely estimated from genetic surveys and can also be theoretically predicted from demographic, life-history, and mating-system data. By evaluating the consistency of theoretical predictions with the empirically estimated effective size, insights can be gained regarding life-history characteristics and the relative impact of different life-history traits on genetic drift. These insights can be used to design and inform management strategies aimed at increasing effective population size. We demonstrated this approach by addressing the conservation of a reintroduced population of Asiatic wild ass (Equus hemionus). We estimated the variance effective size (N_ev) from genetic data (N_ev = 24.3) and formulated predictions for the impacts on N_ev of demography, polygyny, female variance in lifetime reproductive success (RS), and heritability of female RS. By contrasting the genetic estimation with theoretical predictions, we found that polygyny was the strongest factor affecting genetic drift, because only when accounting for polygyny were predictions consistent with the genetically measured N_ev. The comparison of effective-size estimation and predictions indicated that 10.6% of the males mated per generation when heritability of female RS was unaccounted for (polygyny responsible for an 81% decrease in N_ev) and 19.5% mated when heritability of female RS was accounted for (polygyny responsible for a 67% decrease in N_ev). Heritability of female RS also affected N_ev; h_f^2 = 0.91 (heritability responsible for a 41% decrease in N_ev). The low effective size is of concern, and we suggest that management actions focus on the factors identified as strongly affecting N_ev, namely, increasing the availability of artificial water sources to increase the number of dominant males contributing to the gene pool. This approach, evaluating life-history hypotheses in light of their impact on effective population size, and contrasting
Beliciu, C M; Moraru, C I
2009-05-01
The objectives of this study were to investigate the effect of the solvent on the accuracy of casein micelle particle size determination by dynamic light scattering (DLS) at different temperatures and to establish a clear protocol for these measurements. Dynamic light scattering analyses were performed at 6, 20, and 50 degrees C using a 90Plus Nanoparticle Size Analyzer (Brookhaven Instruments, Holtsville, NY). Raw and pasteurized skim milk were used as sources of casein micelles. Simulated milk ultrafiltrate, ultrafiltered water, and permeate obtained by ultrafiltration of skim milk using a 10-kDa cutoff membrane were used as solvents. The pH, ionic concentration, refractive index, and viscosity of all solvents were determined. The solvents were evaluated by DLS to ensure that they did not have a significant influence on the results of the particle size measurements. Experimental protocols were developed for accurate measurement of particle sizes in all solvents and experimental conditions. All measurements had good reproducibility, with coefficients of variation below 5%. Both the solvent and the temperature had a significant effect on the measured effective diameter of the casein micelles. When ultrafiltered permeate was used as a solvent, the particle size and polydispersity of casein micelles decreased as temperature increased. The effective diameter of casein micelles from raw skim milk diluted with ultrafiltered permeate was 176.4 +/- 5.3 nm at 6 degrees C, 177.4 +/- 1.9 nm at 20 degrees C, and 137.3 +/- 2.7 nm at 50 degrees C. This trend was justified by the increased strength of hydrophobic bonds with increasing temperature. Overall, the results of this study suggest that the most suitable solvent for the DLS analyses of casein micelles was casein-depleted ultrafiltered permeate. Dilution with water led to micelle dissociation, which significantly affected the DLS measurements, especially at 6 and 20 degrees C. Simulated milk ultrafiltrate seemed to give
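DLS reports an effective hydrodynamic diameter through the Stokes-Einstein relation, which is exactly why the solvent's viscosity and the temperature enter the measurement, as this study emphasizes. A sketch with a hypothetical diffusion coefficient (the micelle sizes above are measured, not computed this way here):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def hydrodynamic_diameter(diff_coeff, temp_k, viscosity_pa_s):
    """Stokes-Einstein hydrodynamic diameter (m):
    d = k_B * T / (3 * pi * eta * D_t)."""
    return K_B * temp_k / (3.0 * math.pi * viscosity_pa_s * diff_coeff)

# Hypothetical translational diffusion coefficient for a casein micelle
d = hydrodynamic_diameter(diff_coeff=2.4e-12, temp_k=293.15,
                          viscosity_pa_s=1.0e-3)
d_nm = d * 1e9
```

With these illustrative inputs the result is on the order of the ~177 nm effective diameters reported above; note that any error in the viscosity assumed for the solvent propagates directly into the reported particle size, which is why the solvents were characterized so carefully in this work.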
Using the "Epiquant" automatic analyzer for quantitative estimation of grain size
Energy Technology Data Exchange (ETDEWEB)
Tsivirko, E I; Ulitenko, A N; Stetsenko, I A; Burova, N M [Zaporozhskij Mashinostroitel' nyj Inst. (Ukrainian SSR)
1979-01-01
The possibility of applying the "Epiquant" automatic analyzer to the quantitative estimation of austenite grain size in 18Kh2N4VA steel has been investigated. Austenite grain boundaries were revealed by the methods of cementation, oxidation and grain-boundary etching. The average linear grain size along a 15 mm traverse was determined from the total length of the grain intersection line and the number of intersections with grain boundaries. It is shown that the "Epiquant" analyzer ensures quantitative estimation of austenite grain size with a relative error of 2-4%.
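The measurement principle described here is the mean linear intercept: divide the total traverse length by the number of grain-boundary intersections. A minimal sketch (the intersection count is an illustrative example):

```python
def mean_linear_intercept(total_line_mm, n_intersections):
    """Average grain size (mm) by the linear-intercept method:
    total traverse length divided by boundary-intersection count."""
    return total_line_mm / n_intersections

# e.g. a 15 mm traverse crossing 600 grain boundaries -> 25 um grains
g = mean_linear_intercept(15.0, 600)
```

The analyzer automates exactly these two tallies, which is why its 2-4% relative error is dominated by how reliably the boundaries are revealed and detected rather than by the arithmetic.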
Estimating the size of non-observed economy in Croatia using the MIMIC approach
Vjekoslav Klaric
2011-01-01
This paper gives a quick overview of the approaches that have been used in research on the shadow economy, starting with the definitions of the terms “shadow economy” and “non-observed economy”, with emphasis on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the non-observed economy. After a glance at the ...
DEFF Research Database (Denmark)
Jimenez Mena, Belen
2016-01-01
Effective population size (Ne) is an important concept for understanding the evolution of a population. In conservation, Ne is used to assess the threat status of a population, evaluate its genetic viability in the future, and set conservation priorities. An accurate estimation of Ne is thus essential… The main objective of this thesis was to better understand how the estimation of Ne using molecular markers can be improved for use in conservation genetics. As a first step, we undertook a simulation study in which three different methods to estimate Ne were investigated. We explored how well these three methods performed under different scenarios. This study showed that all three methods performed better when the number of unlinked loci used to make the estimation increased, and that the minimum number of loci needed for an accurate estimation of Ne was 100 SNPs. A general assumption in the estimation of Ne…
Nishiura, Hiroshi; Yan, Ping; Sleeman, Candace K; Mode, Charles J
2012-02-07
Use of the final size distribution of minor outbreaks for the estimation of the reproduction numbers of supercritical epidemic processes has yet to be considered. We used a branching process model to derive the final size distribution of minor outbreaks, assuming a reproduction number above unity, and applying the method to final size data for pneumonic plague. Pneumonic plague is a rare disease with only one documented major epidemic in a spatially limited setting. Because the final size distribution of a minor outbreak needs to be normalized by the probability of extinction, we assume that the dispersion parameter (k) of the negative-binomial offspring distribution is known, and examine the sensitivity of the reproduction number to variation in dispersion. Assuming a geometric offspring distribution with k=1, the reproduction number was estimated at 1.16 (95% confidence interval: 0.97-1.38). When less dispersed with k=2, the maximum likelihood estimate of the reproduction number was 1.14. These estimates agreed with those published from transmission network analysis, indicating that the human-to-human transmission potential of the pneumonic plague is not very high. Given only minor outbreaks, transmission potential is not sufficiently assessed by directly counting the number of offspring. Since the absence of a major epidemic does not guarantee a subcritical process, the proposed method allows us to conservatively regard epidemic data from minor outbreaks as supercritical, and yield estimates of threshold values above unity.
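The branching-process setting can be illustrated by simulating outbreak final sizes under a geometric offspring distribution (the k = 1 case above) with mean R. For supercritical R, a fraction of outbreaks still goes extinct (probability 1/R for geometric offspring), and those minor outbreaks are the data the paper works with. The simulation below is an illustrative sketch of that mechanism, not the paper's likelihood machinery; the cap separating "minor" from "major" outbreaks is arbitrary.

```python
import math
import random

def geom_offspring(r0, rng):
    """Offspring count with geometric distribution on {0, 1, ...} and
    mean r0 (a negative binomial with dispersion k = 1)."""
    p = 1.0 / (1.0 + r0)            # success probability; mean = (1-p)/p = r0
    u = 1.0 - rng.random()          # u in (0, 1]
    return int(math.log(u) / math.log(1.0 - p))

def final_size(r0, rng, cap=2_000):
    """Final size of an outbreak started by one case; None if it exceeds
    `cap` cases (treated as a major epidemic rather than a minor outbreak)."""
    total = active = 1
    while active and total <= cap:
        active = sum(geom_offspring(r0, rng) for _ in range(active))
        total += active
    return total if total <= cap else None

rng = random.Random(42)
sizes = [final_size(1.16, rng) for _ in range(500)]
minor = [s for s in sizes if s is not None]
frac_extinct = len(minor) / len(sizes)   # theory: ~1/1.16 ~ 0.86
```

Most simulated outbreaks die out even though R = 1.16 is above unity, which is the paper's central point: a run of minor outbreaks does not imply a subcritical process, so estimates from minor-outbreak final sizes should conservatively allow threshold values above one.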
A simple method for estimating the size of nuclei on fractal surfaces
Zeng, Qiang
2017-10-01
Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The approach is based on the assumptions of contact-area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. It yields three different regimes governing the equations for estimating nucleation site density. Nuclei large enough in size eliminate the effect of the fractal structure, while nuclei small enough make the nucleation site density independent of the fractal parameters. Only when the nuclei match the fractal scales is the nucleation site density associated with both the fractal parameters and the nuclei size in a coupled manner. The method was validated against experimental data reported in the literature. It may provide an effective way to estimate the size of nuclei on fractal surfaces, opening a number of promising applications in related fields.
Estimating search engine index size variability: a 9-year longitudinal study.
van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of the Google and Bing indices over a nine-year period, from March 2006 until January 2015. We find that the index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all, of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
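The extrapolation idea can be sketched as follows: if a word occurs in a known fraction of documents in a representative background corpus, the engine's reported hit count for that word scales up to an index-size estimate, and combining many words stabilizes the result. This is a simplified illustration with made-up numbers, not the authors' exact estimator or corpus.

```python
from statistics import median

def estimate_index_size(word_stats, corpus_size):
    """word_stats maps word -> (document frequency in the background
    corpus, hit count reported by the search engine). Each word yields
    the estimate hits / (df / corpus_size); the median over words damps
    outliers caused by ranking or counting quirks."""
    estimates = [hits * corpus_size / df
                 for df, hits in word_stats.values() if df > 0]
    return median(estimates)

# Hypothetical numbers for a 1-million-document background corpus.
stats = {"the": (990_000, 49.5e9),
         "curious": (12_000, 0.66e9),
         "xylophone": (300, 0.014e9)}
size = estimate_index_size(stats, 1_000_000)
```

In practice one would use many words spread across the frequency spectrum, since very common and very rare words are each biased in different ways.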
Directory of Open Access Journals (Sweden)
G. Shimon
2015-09-01
Full Text Available A direct and systematic investigation of the magnetization dynamics in individual circular Ni80Fe20 disks of diameter (D) ranging from 300 nm to 1 μm, measured using micro-focused Brillouin Light Scattering (μ-BLS) spectroscopy, is presented. At high field, when the disks are in a single-domain state, the resonance frequency of the uniform center mode is observed to decrease with decreasing disk diameter. For D = 300 nm, additional edge and end-domain resonant modes are observed due to size effects. At low field, when the disks are in a vortex state, a systematic increase of the resonant frequency of the magnetostatic modes with the square root of the disks' aspect ratio (thickness divided by radius) is observed. This dependence diminishes for disks with larger aspect ratios due to an increasing exchange energy contribution. Micromagnetic simulations are in excellent agreement with the experiments.
Identifying grain-size dependent errors on global forest area estimates and carbon studies
Daolan Zheng; Linda S. Heath; Mark J. Ducey
2008-01-01
Satellite-derived coarse-resolution data are typically used for conducting global analyses. But forest areas estimated from coarse-resolution maps (e.g., 1 km) inevitably differ from those of a corresponding fine-resolution map (such as a 30-m map) that would be closer to ground truth. A better understanding of the effects of grain size on area estimation will improve our...
International Nuclear Information System (INIS)
Hemdal, B.
2011-01-01
The major protocols on dosimetry in mammography leave no doubt that the incident air kerma should be evaluated without radiation backscattered to the dosemeter. However, forward-scattered radiation from the compression paddle is neglected. The aim of this work was to analyse the contribution of forward-scattered radiation in typical air kerma measurements. Measurements of forward-scatter were performed with a plane-parallel ionisation chamber on four mammography units. The forward-scatter contribution to the air kerma was 2-10 % and increased with the compression paddle thickness, but also with the half-value layer. For incident air kerma in mammography, it can be as important to consider forward-scattered as backscattered radiation. If an ionisation chamber is used, the compression paddle should be in contact with the chamber; otherwise the air kerma and absorbed dose will be underestimated. If a dosemeter based on semiconductors, which are much less sensitive to scattered radiation, is used, it is suggested that a forward-scatter factor (FSF) be applied. Based on the results of this work, FSF=1.06 will lead to a maximum error of ∼4 %. (authors)
Williams, K.A.; Frederick, P.C.; Nichols, J.D.
2011-01-01
Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.
Takahashi, Kazuaki; Takahashi, Kaori
2013-06-10
Japanese black bears, large-bodied omnivores, frequently create small gaps in the tree crown during fruit foraging. However, there are no previous reports of black bear-created canopy gaps. To characterize physical canopy disturbance by black bears, we examined a number of parameters, including the species of trees in which canopy gaps were created, gap size, the horizontal and vertical distribution of gaps, and the size of branches broken to create gaps. The size of black bear-created canopy gaps was estimated using data from branches that had been broken and dropped on the ground. The disturbance regime was characterized by a highly biased distribution of small canopy gaps on ridges, a large total overall gap area, a wide range in gap height relative to canopy height, and diversity in gap size. Surprisingly, the annual rate of bear-created canopy gap formation reached 141.3 m² ha⁻¹ yr⁻¹ on ridges, which were hot spots of black bear activity. This rate was approximately 6.6 times that of tree-fall gap formation on ridges at this study site. Furthermore, it was approximately two to three times the rates of common tree-fall gap formation in Japanese forests reported in other studies. Our findings suggest that the ecological interaction between black bears and fruit-bearing trees may create a unique light regime, distinct from that created by tree falls, which increases the availability of light resources to plants below the canopy.
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
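For orientation, the simple textbook starting point for this problem sizes the study like a z-test on the log odds ratio per standard deviation of a normal covariate, with the event probability taken at the covariate mean. The sketch below implements that uncorrected formula; it is not the authors' modified, Schouten-based procedure, and the function name is ours.

```python
from math import ceil, log
from statistics import NormalDist

def lr_sample_size(p, beta_star, alpha=0.05, power=0.80):
    """Approximate total n for simple logistic regression with one
    standardized normal covariate. p is the event probability at the
    covariate mean; beta_star is the log odds ratio per 1 SD of the
    covariate (uncorrected, Wald-test-style approximation)."""
    z = NormalDist().inv_cdf
    num = (z(1 - alpha / 2) + z(power)) ** 2
    return ceil(num / (p * (1 - p) * beta_star ** 2))

n_or15 = lr_sample_size(0.5, log(1.5))   # weaker effect -> larger n
n_or20 = lr_sample_size(0.5, log(2.0))   # stronger effect -> smaller n
```

Note that this formula requires the prevalence at the covariate mean, exactly the parameter the abstract flags as unnatural for users, which motivates reformulations in terms of the population prevalence.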
Uncertainties in effective dose estimates of adult CT head scans: The effect of head size
International Nuclear Information System (INIS)
Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.
2009-01-01
Purpose: This study is an extension of a previous study where the uncertainties in effective dose estimates from adult CT head scans were calculated using four CT effective dose estimation methods, three of which were computer programs (CT-EXPO, CTDOSIMETRY, and IMPACTDOSE) and one that involved the dose length product (DLP). However, that study did not include the uncertainty contribution due to variations in head sizes. Methods: The uncertainties due to head size variations were estimated by first using the computer program data to calculate doses to small and large heads. These doses were then compared with doses calculated for the phantom heads used by the computer programs. An uncertainty was then assigned based on the difference between the small and large head doses and the doses of the phantom heads. Results: The uncertainties due to head size variations alone were found to be between 4% and 26% depending on the method used and the patient gender. When these uncertainties were included with the results of the previous study, the overall uncertainties in effective dose estimates (stated at the 95% confidence interval) were 20%-31% (CT-EXPO), 15%-30% (CTDOSIMETRY), 20%-36% (IMPACTDOSE), and 31%-40% (DLP). Conclusions: For the computer programs, the lower overall uncertainties were still achieved when measured values of CT dose index were used rather than tabulated values. For DLP dose estimates, head size variations made the largest (for males) and second largest (for females) contributions to effective dose uncertainty. An improvement in the uncertainty of the DLP method dose estimates will be achieved if head size variation can be taken into account.
McLaren, Alexander
2011-11-01
Owing to their great ecological significance and the large biomass they represent, mesopelagic fishes are attracting a wider audience. Data from the National Marine Fisheries Service (NMFS) provided the opportunity to explore an unknown region of the North-West Atlantic, adjacent to one of the most productive fisheries in the world. Acoustic data collected during the cruise required the identification of acoustically distinct scattering types in order to make inferences about the migrations, distributions and biomass of mesopelagic scattering layers. The proposed method identified six scattering types in our data and traced their migrations and distributions in the top 200 m of the water column. It was also able to detect and trace the movements of three scattering types to 1000 m depth, two of which can be further subdivided. This identification process enabled the development of three physically derived target-strength models, adapted to traceable acoustic scattering types, for the analysis of biomass and length distribution to 1000 m depth. The abundance and distribution of acoustic targets varied closely with the physical environments associated with a warm-core ring in the New England continental shelf break region. The shelf break produced biomass density estimates twice as high as those of the warm-core ring, while the surrounding continental slope waters were an order of magnitude lower than either. Biomass associated with distinct layers was assessed, and any benefits of upwelling at the edge of the warm-core ring were shown not to result in higher abundance of deepwater species. Finally, the asymmetric diurnal migrations in shelf-break waters contrasted markedly with the symmetry of migrating layers within the warm ring, both in structure and in density estimates, supporting a theory of predatory and nutritional constraints on migrating pelagic species.
Directory of Open Access Journals (Sweden)
Yu. V. Golubenko
2014-01-01
Full Text Available Metal nanoparticles possess a whole series of size-related features that give rise to unusual electromagnetic and optical properties untypical of bulk particulates. A widespread method of producing nanoparticles by laser radiation is pulsed laser ablation of solid targets in a liquid medium. By varying the parameters of the laser radiation, such as its wavelength and energy density, one can control the size and shape of the resulting particles. Nanoparticles of iron, copper, silver, silicon, magnesium, gold and zinc show the greatest promise for applications in medicine. The subject matter of this work is copper and gold nanoparticles obtained by laser ablation of solid targets in a liquid medium. The aim of the investigation presented in the article is to assess the applicability of the dynamic light scattering (DLS) method for determining the range of nanoparticle sizes in a colloidal solution. The second harmonic of an Nd:YAG laser, with a wavelength of 532 nm, was chosen for studying the laser ablation process. Special attention is given to the experimental technique for producing the nanoparticles. Ethanol and distilled water were used as the liquid media. The resulting colloidal systems were investigated by DLS, transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The DLS measurements showed that the colloidal solution of copper in ethanol is a stable system: the copper nanoparticle size reaches 200 nm and remains at that size for some time. The system of gold nanoparticles is polydisperse, unstable and spans a wide range of sizes. This was confirmed by images obtained with a TEM FEI Tecnai G2F20 + GIF and an SEM Helios NanoLab 660. The gold nanoparticle sizes range from 5 to 60 nm. Thus, it has been shown that the DLS method is
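DLS infers particle size from Brownian motion: the autocorrelation of the scattered intensity yields a diffusion coefficient, which the Stokes-Einstein relation converts to a hydrodynamic diameter, d_h = k_B·T / (3πηD). A minimal sketch of that last step follows; the diffusion coefficient used is an illustrative value, not one measured in this work, and the viscosity defaults to that of water near 25 °C (for ethanol one would use roughly 1.1e-3 Pa·s).

```python
from math import pi

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(D, T=298.15, eta=0.89e-3):
    """Stokes-Einstein: hydrodynamic diameter (m) from diffusion
    coefficient D (m^2/s), temperature T (K) and viscosity eta (Pa*s)."""
    return K_B * T / (3 * pi * eta * D)

# An illustrative diffusion coefficient of 2.2e-12 m^2/s gives a diameter
# on the order of the ~200 nm reported above for the copper particles.
d = hydrodynamic_diameter(2.2e-12)
```

Slower diffusion means larger particles, which is why aggregation in an unstable colloid shows up directly as growth of the apparent hydrodynamic size.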
International Nuclear Information System (INIS)
Banks, H T; Davis, Jimena L; Ernstberger, Stacey L; Hu, Shuhua; Artimovich, Elena; Dhar, Arun K
2009-01-01
We discuss inverse problem results for problems involving the estimation of probability distributions using aggregate data for growth in populations. We begin with a mathematical model describing variability in the early growth process of size-structured shrimp populations and discuss a computational methodology for the design of experiments to validate the model and estimate the growth-rate distributions in shrimp populations. Parameter-estimation findings using data from experiments so designed, for shrimp populations cultivated at Advanced BioNutrition Corporation, are presented, illustrating the usefulness of mathematical and statistical modeling in understanding the uncertainty in the growth dynamics of such populations.
Resonance estimates for single spin asymmetries in elastic electron-nucleon scattering
International Nuclear Information System (INIS)
Barbara Pasquini; Marc Vanderhaeghen
2004-01-01
We discuss the target and beam normal spin asymmetries in elastic electron-nucleon scattering, which depend on the imaginary part of two-photon exchange processes between electron and nucleon. We express this imaginary part as a phase-space integral over the doubly virtual Compton scattering tensor on the nucleon. We use unitarity to model the doubly virtual Compton scattering tensor in the resonance region in terms of γ* N → π N electroabsorption amplitudes. Taking those amplitudes from a phenomenological analysis of pion electroproduction observables, we present results for beam and target normal single spin asymmetries of elastic electron-nucleon scattering for beam energies below 1 GeV and in the 1-3 GeV region, where several experiments have been performed or are in progress.
International Nuclear Information System (INIS)
Baek, Ji Eun; Kim, Sung Hun; Lee, Ah Won
2014-01-01
Objective: To evaluate whether the degree of background parenchymal enhancement affects the accuracy of tumor size estimation based on breast MRI. Methods: Three hundred and twenty-two patients with known breast cancer who underwent breast MRI were recruited in our study. The total number of breast cancer cases was 339. All images were assessed retrospectively for the level of background parenchymal enhancement based on the BI-RADS criteria. Maximal lesion diameters were measured on the MRIs, and tumor types (mass vs. non-mass) were assessed. Tumor size differences between the MRI-based estimates and estimates based on pathological examinations were analyzed. The relationship between accuracy and tumor type and clinicopathologic features was also evaluated. Results: The cases included minimal (47.5%), mild (28.9%), moderate (12.4%) and marked background parenchymal enhancement (11.2%). The tumors of patients with minimal or mild background parenchymal enhancement were more accurately estimated than those of patients with moderate or marked enhancement (72.1% vs. 56.8%; p = 0.003). The tumors of women with mass-type lesions were significantly more accurately estimated than those of women with non-mass-type lesions (81.6% vs. 28.6%; p < 0.001). The tumors of women negative for HER2 were more accurately estimated than those of women positive for HER2 (72.2% vs. 51.6%; p = 0.047). Conclusion: Moderate and marked background parenchymal enhancement is related to inaccurate estimation of tumor size based on MRI. Non-mass-type breast cancer and HER2-positive breast cancer are other factors that may cause inaccurate assessment of tumor size.
Lee, Paul H
2016-08-01
This study aims to show that, under several assumptions, in randomized controlled trials (RCTs) an unadjusted, crude analysis will underestimate the Cohen's d effect size of the treatment, and that an unbiased estimate of the effect size can be obtained only by adjusting for all predictors of the outcome. Four simulations were performed to examine the effects of adjustment on the estimated effect size of the treatment and the power of the analysis. In addition, we analyzed data from the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) study (older adults aged 65-94), an RCT with three treatment arms and one control arm. We showed that (1) the number of unadjusted covariates was associated with the effect size of the treatment; (2) the bias of the effect size estimate was minimized if all covariates were adjusted for; (3) the power of the statistical analysis slightly decreased with the number of adjusted noise variables; and (4) exhaustively searching the covariates and noise variables adjusted for can lead to exaggeration of the true effect size. Analysis of the ACTIVE study data showed that the effect sizes of all three treatments adjusted for covariates were 7.39-24.70% larger than their unadjusted counterparts, whereas the effect size could be elevated by at most 57.92% by exhaustively searching the variables adjusted for. All covariates of the outcome in RCTs should be adjusted for, and if the effect of a particular variable on the outcome is unknown, adjustment will do more good than harm. Copyright © 2016 Elsevier Inc. All rights reserved.
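The mechanism is easy to reproduce: in a randomized trial the covariate is independent of treatment assignment, so leaving it unadjusted inflates the outcome's residual spread and shrinks the standardized effect. The simulation below is a small sketch of that phenomenon with made-up effect sizes, not a reconstruction of the paper's four simulations or the ACTIVE data.

```python
import random
from math import sqrt
from statistics import mean, variance

random.seed(42)
n = 5000                                      # participants per arm
treat = [1] * n + [0] * n
covar = [random.gauss(0, 1) for _ in range(2 * n)]
y = [0.3 * t + c + random.gauss(0, 1)         # true treatment effect 0.3
     for t, c in zip(treat, covar)]

def cohens_d(values, groups):
    yt = [v for v, g in zip(values, groups) if g]
    yc = [v for v, g in zip(values, groups) if not g]
    sp = sqrt((variance(yt) + variance(yc)) / 2)   # pooled SD (equal n)
    return (mean(yt) - mean(yc)) / sp

d_crude = cohens_d(y, treat)       # SD inflated by the covariate: ~0.3/sqrt(2)

# Adjust: remove the covariate's contribution. Randomization makes the
# covariate (nearly) independent of treatment, so a pooled OLS slope works.
mc, my = mean(covar), mean(y)
b = (sum((c - mc) * (v - my) for c, v in zip(covar, y))
     / sum((c - mc) ** 2 for c in covar))
resid = [v - b * c for v, c in zip(y, covar)]
d_adj = cohens_d(resid, treat)     # close to the true standardized effect
```

With unit-variance covariate and noise, the crude d converges to 0.3/√2 ≈ 0.21 while the adjusted d converges to 0.3, mirroring the direction of the underestimation reported above.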
Baranowski, Tom; Baranowski, Janice C; Watson, Kathleen B; Martin, Shelby; Beltran, Alicia; Islam, Noemi; Dadabhoy, Hafza; Adame, Su-heyla; Cullen, Karen; Thompson, Debbe; Buday, Richard; Subar, Amy
2011-03-01
To test the effect of image size and the presence of size cues on the accuracy of portion size estimation by children. Children were randomly assigned to seeing images with or without food size cues (utensils and checked tablecloth) and were presented with sixteen food models (foods commonly eaten by children) in varying portion sizes, one at a time. They estimated each food model's portion size by selecting a digital food image. The same food images were presented in two ways: (i) as small, graduated portion size images all on one screen or (ii) by scrolling across large, graduated portion size images, one per sequential screen. Laboratory-based with computer and food models. Volunteer multi-ethnic sample of 120 children, equally distributed by gender and age (8 to 13 years) in 2008-2009. The average percentage of correctly classified foods was 60.3 %. There were no differences in accuracy by any design factor or demographic characteristic. Multiple small pictures on the screen at once took half the time to estimate portion size compared with scrolling through large pictures. Larger pictures led to more overestimation of size. Multiple images of successively larger portion sizes of a food on one computer screen facilitated quicker portion size responses with no decrease in accuracy. This is the method of choice for portion size estimation on a computer.
Results and evaluation of a survey to estimate Pacific walrus population size, 2006
Speckman, Suzann G.; Chernook, Vladimir I.; Burn, Douglas M.; Udevitz, Mark S.; Kochnev, Anatoly A.; Vasilev, Alexander; Jay, Chadwick V.; Lisovsky, Alexander; Fischbach, Anthony S.; Benter, R. Bradley
2011-01-01
In spring 2006, we conducted a collaborative U.S.-Russia survey to estimate abundance of the Pacific walrus (Odobenus rosmarus divergens). The Bering Sea was partitioned into survey blocks, and a systematic random sample of transects within a subset of the blocks was surveyed with airborne thermal scanners using standard strip-transect methodology. Counts of walruses in photographed groups were used to model the relation between thermal signatures and the number of walruses in groups, which was used to estimate the number of walruses in groups that were detected by the scanner but not photographed. We also modeled the probability of thermally detecting various-sized walrus groups to estimate the number of walruses in groups undetected by the scanner. We used data from radio-tagged walruses to adjust on-ice estimates to account for walruses in the water during the survey. The estimated area of available habitat averaged 668,000 km² and the area of surveyed blocks was 318,204 km². The number of Pacific walruses within the surveyed area was estimated at 129,000 with 95% confidence limits of 55,000 to 507,000 individuals. This value can be used by managers as a minimum estimate of the total population size.
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2014-01-01
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Estimating the size of the potential market for the Kyoto flexibility mechanisms
Zhang, Z.X.
2000-01-01
The Kyoto Protocol incorporates three flexibility mechanisms to help Annex I countries to meet their Kyoto targets at a lower overall cost. This paper aims to estimate the size of the potential market for all three mechanisms over the first commitment period. Based on the national communications
Estimating the size of the potential market for the Kyoto flexibility mechanisms
Zhang, Zhong Xiang
1999-01-01
The Kyoto Protocol incorporates emissions trading, joint implementation and the clean development mechanism to help Annex I countries to meet their Kyoto targets at a lower overall cost. This paper aims to estimate the size of the potential market for all three flexibility mechanisms under the Kyoto
M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples
2013-01-01
Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...
Estimating the ratio of pond size to irrigated soybean land in Mississippi: a case study
Ying Ouyang; G. Feng; J. Read; T. D. Leininger; J. N. Jenkins
2016-01-01
Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater depletion in Mississippi, little effort has been devoted to estimating the ratio of on-farm water storage pond size to irrigated crop land based on pond metrics and hydrogeological conditions. In this study, two simulation scenarios were chosen to...
An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.
Capraro, Mary Margaret
This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…
B-graph sampling to estimate the size of a hidden population
Spreen, M.; Bogaerts, S.
2015-01-01
Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is
Replicon sizes in mammalian cells as estimated by an x-ray plus bromodeoxyuridine photolysis method
International Nuclear Information System (INIS)
Kapp, L.N.; Painter, R.B.
1978-01-01
A new method is described for estimating replicon sizes in mammalian cells. Cultures were pulse-labeled with [³H]thymidine ([³H]TdR) and bromodeoxyuridine (BrdUrd) for up to 1 h. The lengths of the resulting labeled regions of DNA, L_obs, were estimated by a technique wherein the change in molecular weight of nascent DNA strands induced by 313 nm light is measured by velocity sedimentation in alkaline sucrose gradients. If cells are exposed to 1,000 rads of x rays immediately before pulse labeling, initiation of replicon operation is blocked, although chain elongation proceeds almost normally. Under these conditions L_obs continues to increase only until operating replicons have completed their replication. This value of L_obs then remains constant as long as the block to initiation remains, and represents an estimate of the average size of the replicons operating in the cells before x irradiation. For human diploid fibroblasts and human HeLa cells this estimated average size is approximately 17 μm, whereas for Chinese hamster ovary cells the average replicon size is about 42 μm
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Accounting for One-Group Clustering in Effect-Size Estimation
Citkowicz, Martyna; Hedges, Larry V.
2013-01-01
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data
Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad
2018-01-01
The most productive scale size (MPSS) is a measure of how resources should be organized and utilized to achieve optimal results, and it can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate the MPSS, each decision making unit (DMU) should pay attention to its level of input-output efficiency. With the data envelopment analysis (DEA) method, a DMU can identify the units used as references, which helps to find the causes of, and solutions to, inefficiencies and to optimize productivity, the main advantage in managerial applications. Therefore, DEA is chosen for estimating the MPSS, focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating the MPSS with integer-valued input data using the DEA method.
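In general the CCR and BCC scores come from linear programs, but in the special single-input, single-output case the CCR (constant returns to scale) efficiency collapses to comparing each DMU's output/input ratio against the best ratio, which is enough to illustrate the idea. The data below are hypothetical and this is a deliberate simplification of the full DEA model, not the paper's procedure.

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR efficiency for DMUs with one input and one output each:
    a unit's output/input ratio divided by the best observed ratio.
    A score of 1.0 marks a CCR-efficient unit, which in this simple
    case is also the unit operating at most productive scale size."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical DMUs with (input, output) = (1, 1), (2, 4), (3, 3):
# the second achieves the best ratio (4/2 = 2) and scores 1.0.
eff = ccr_efficiency_single([1, 2, 3], [1, 4, 3])
```

With multiple inputs and outputs the same comparison requires solving one linear program per DMU, and MPSS is diagnosed by comparing CCR and BCC scores (scale efficiency equal to one).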
International Nuclear Information System (INIS)
Bavio, José; Marrón, Beatriz
2014-01-01
Quality of service (QoS) for internet traffic management requires good traffic models and good estimation of shared network resources. A network link processes all traffic and is designed with a certain capacity C and buffer size B. A Generalized Markov Fluid model (GMFM), introduced by Marrón (2011), is assumed for the sources because it describes the traffic in a versatile way, allows estimation based on traffic traces, and supports consistent effective bandwidth estimation. QoS, interpreted as buffer overflow probability, can be estimated for the GMFM through effective bandwidth estimation and by solving the optimization problem presented in Courcoubetis (2002), the so-called inf-sup formulas. In this work we implement a code to solve the inf-sup problem and related optimizations, which allows us to do traffic engineering on links of data networks: calculating either the minimum capacity required when QoS and buffer size are given, or the minimum buffer size required when QoS and capacity are given.
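The effective bandwidth at the heart of this approach can be estimated directly from a traffic trace. A minimal sketch follows; the kernel is the standard empirical estimator α(s, t) = (1/(st)) · log E[exp(s·X(t))], while the synthetic trace and parameter values are illustrative, not from the paper:

```python
import math
import random

def effective_bandwidth(workloads, s, t):
    """Empirical effective bandwidth alpha(s, t) = (1/(s*t)) * log E[exp(s*X(t))],
    where each element of `workloads` is the work X(t) arriving in a window of
    length t. The log-mean-exp is computed stably."""
    n = len(workloads)
    m = max(s * x for x in workloads)
    lme = m + math.log(sum(math.exp(s * x - m) for x in workloads) / n)
    return lme / (s * t)

random.seed(1)
t = 1.0
# synthetic trace: per-window workload as a burst of exponential packet sizes
trace = [sum(random.expovariate(1.0) for _ in range(10)) for _ in range(2000)]
mean_rate = sum(trace) / len(trace) / t
peak_rate = max(trace) / t
eb = effective_bandwidth(trace, s=0.5, t=t)
```

For any s > 0 the effective bandwidth lies between the mean and peak rates, which is what makes it a useful "equivalent capacity" for sizing links and buffers.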
Szenczi-Cseh, J; Horváth, Zs; Ambrus, Á
2017-12-01
We tested the applicability of the EPIC-SOFT food picture series used in the context of a Hungarian food consumption survey gathering data for exposure assessment, and investigated errors in food portion estimation resulting from visual perception and conceptualisation-memory. Sixty-two participants in three age groups (10 to foods. The results were considered acceptable if the relative difference between average estimated and actual weight obtained through the perception method was ≤25%, and the relative standard deviation of the individual weight estimates was food items were rated acceptable. Small portion sizes tended to be overestimated, and large ones tended to be underestimated. Portions of boiled potato and creamed spinach were all overestimated and underestimated, respectively. Recalling the portion sizes resulted in overestimation with larger differences (up to 60.7%).
Estimating the Grain Size Distribution of Mars based on Fragmentation Theory and Observations
Charalambous, C.; Pike, W. T.; Golombek, M.
2017-12-01
We present here a fundamental extension to the fragmentation theory [1] which yields estimates of the distribution of particle sizes of a planetary surface. The model is valid within the size regimes of surfaces whose genesis is best reflected by the evolution of fragmentation phenomena governed by either the process of meteoritic impacts, or by a mixture with aeolian transportation at the smaller sizes. The key parameter of the model, the regolith maturity index, can be estimated as an average of that observed at a local site using cratering size-frequency measurements, orbital and surface image-detected rock counts and observations of sub-mm particles at landing sites. Through validation of ground truth from previous landed missions, the basis of this approach has been used at the InSight landing ellipse on Mars to extrapolate rock size distributions in HiRISE images down to 5 cm rock size, both to determine the landing safety risk and the subsequent probability of obstruction by a rock of the deployed heat flow mole down to 3-5 m depth [2]. Here we focus on a continuous extrapolation down to 600 µm coarse sand particles, the upper size limit that may be present through aeolian processes [3]. The parameters of the model are first derived for the fragmentation process that has produced the observable rocks via meteorite impacts over time, and therefore extrapolation into a size regime that is affected by aeolian processes has limited justification without further refinement. Incorporating thermal inertia estimates, size distributions observed by the Spirit and Opportunity Microscopic Imager [4] and Atomic Force and Optical Microscopy from the Phoenix Lander [5], the model's parameters in combination with synthesis methods are quantitatively refined further to allow transition within the aeolian transportation size regime. In addition, due to the nature of the model emerging in fractional mass abundance, the percentage of material by volume or mass that resides
Estimation of body-size traits by photogrammetry in large mammals to inform conservation.
Berger, Joel
2012-10-01
Photography, including remote imagery and camera traps, has contributed substantially to conservation. However, the potential to use photography to understand demography and inform policy is limited. To have practical value, remote assessments must be reasonably accurate and widely deployable. Prior efforts to develop noninvasive methods of estimating trait size have been motivated by a desire to answer evolutionary questions, measure physiological growth, or, in the case of illegal trade, assess economics of horn sizes; but rarely have such methods been directed at conservation. Here I demonstrate a simple, noninvasive photographic technique and address how knowledge of values of individual-specific metrics bears on conservation policy. I used 10 years of data on juvenile moose (Alces alces) to examine whether body size and probability of survival are positively correlated in cold climates. I investigated whether the presence of mothers improved juvenile survival. The posited latter relation is relevant to policy because harvest of adult females has been permitted in some Canadian and American jurisdictions under the assumption that probability of survival of young is independent of maternal presence. The accuracy of estimates of head sizes made from photographs exceeded 98%. The estimates revealed that overwinter juvenile survival had no relation to the juvenile's estimated mass (p < 0.64) and was more strongly associated with maternal presence (p < 0.02) than winter snow depth (p < 0.18). These findings highlight the effects on survival of a social dynamic (the mother-young association) rather than body size and suggest a change in harvest policy will increase survival. Furthermore, photographic imaging of growth of individual juvenile muskoxen (Ovibos moschatus) over 3 Arctic winters revealed annual variability in size, which supports the idea that noninvasive monitoring may allow one to detect how some environmental conditions ultimately affect body growth.
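A pinhole-camera sketch of the kind of photogrammetric size estimate described above: real size equals image-plane size scaled by the ratio of subject distance to focal length. This is a generic similar-triangles calculation, not the author's specific field protocol; all numbers (focal length, range, pixel pitch, pixel extent) are hypothetical.

```python
def object_size_m(pixels, pixel_pitch_m, distance_m, focal_length_m):
    """Pinhole-camera estimate: real size = image size * (distance / focal length).
    `pixels` is the object's extent in the photo; `pixel_pitch_m` converts
    pixels to metres on the sensor."""
    return pixels * pixel_pitch_m * distance_m / focal_length_m

# hypothetical example: 800 px span, 5.9 um pixels, 100 m range, 400 mm lens
head = object_size_m(800, 5.9e-6, 100.0, 0.4)  # metres
```

In practice the accuracy hinges on knowing the camera-to-animal distance (e.g. from a laser rangefinder), which is why such estimates must be validated against animals of known size.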
Directory of Open Access Journals (Sweden)
Steffen Oppel
2014-04-01
Full Text Available Population size assessments for nocturnal burrow-nesting seabirds are logistically challenging because these species are active in colonies only during darkness and often nest on remote islands where manual inspections of breeding burrows are not feasible. Many seabird species are highly vocal, and recent technological innovations now make it possible to record and quantify vocal activity in seabird colonies. Here we test the hypothesis that remotely recorded vocal activity in Cory’s shearwater (Calonectris borealis) breeding colonies in the North Atlantic increases with nest density, and combined this relationship with cliff habitat mapping to estimate the population size of Cory’s shearwaters on the island of Corvo (Azores). We deployed acoustic recording devices in 9 Cory’s shearwater colonies of known size to establish a relationship between vocal activity and local nest density (slope = 1.07, R2 = 0.86, p < 0.001). We used this relationship to predict the nest density in various cliff habitat types and produced a habitat map of breeding cliffs to extrapolate nest density around the island of Corvo. The mean predicted nest density on Corvo ranged from 6.6 (2.1–16.2) to 27.8 (19.5–36.4) nests/ha. Extrapolation of habitat-specific nest densities across the cliff area of Corvo resulted in an estimate of 6326 Cory’s shearwater nests (95% confidence interval: 3735–10,524). This population size estimate is similar to previous assessments, but is too imprecise to detect moderate changes in population size over time. While estimating absolute population size from acoustic recordings may not be sufficiently precise, the strong positive relationship that we found between local nest density and recorded calling rate indicates that passive acoustic monitoring may be useful to document relative changes in seabird populations over time.
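The calibrate-then-extrapolate workflow described above can be sketched as an ordinary least-squares fit followed by habitat-weighted summation. The calibration data below are synthetic stand-ins (an exact line with the paper's slope of 1.07 is used so the fit is easy to verify), and the habitat table is entirely hypothetical:

```python
def ols(x, y):
    """Simple ordinary least squares: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# hypothetical calibration: log vocal activity vs. log nest density in 9 colonies
log_activity = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
log_density = [0.2 + 1.07 * x for x in log_activity]  # exact line for illustration
a, b = ols(log_activity, log_density)

# extrapolate: total nests = sum over habitat types of predicted density * area (ha)
habitat = {"steep cliff": (3.2, 120.0), "vegetated slope": (2.1, 340.0)}
total_nests = sum(10 ** (a + b * la) * area for la, area in habitat.values())
```

The real study of course carries the regression uncertainty through to the final confidence interval, which is where the imprecision noted in the abstract comes from.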
Schmidt, Sven; Schramm, Danilo; Ribbecke, Sebastian; Schulz, Ronald; Wittschieber, Daniel; Olze, Andreas; Vieth, Volker; Ramsthaler, H Frank; Pfischel, Klaus; Pfeiffer, Heidi; Geserick, Gunther; Schmeling, Andreas
2016-01-01
The dramatic rise in the number of refugees entering Germany means that age estimation for juveniles and young adults whose age is unclear but relevant to legal and official procedures has become more important than ever. Until now, whether and to what extent the combination of methods recommended by the Study Group on Forensic Age Diagnostics has resulted in a reduction of the range of scatter of the summarized age diagnosis has been unclear. Hand skeletal age, third molar mineralization stage and ossification stage of the medial clavicular epiphyses were determined for 307 individuals aged between 10 and 29 at time of death on whom autopsies were performed at the Institutes of Legal Medicine in Berlin, Frankfurt am Main and Hamburg between 2001 and 2011. To measure the range of scatter, linear regression analysis was used to calculate the standard error of estimate for each of the above methods individually and in combination. It was found that combining the above methods led to a reduction in the range of scatter. Due to various limitations of the study, the statistical parameters determined cannot, however, be used for age estimation practice.
International Nuclear Information System (INIS)
Walsh, Conor; Bows, Alice
2012-01-01
Highlights: ► Ship emission baselines can be used to inform studies but require prior knowledge. ► Region specific conditions alter average shipping emission factors. ► Region specific conditions are clearer when individual callings are examined. ► Relationship between ship size and emissions frustrates estimating mean emissions. -- Abstract: The decarbonisation agenda is placing increasing pressure on retailers to directly and indirectly influence greenhouse gas emissions associated with full supply chains. Transportation by sea is an important and significant element of these supply chains, yet the emissions associated with shipping, particularly international shipping, are often poorly accounted for. The magnitude of emissions embodied in a product is directly related to the distances involved in globalised product chains, where shipping can represent the most emission intensive stage per tonne of goods transported. Specifically, limited choice of ship type and size within assessment tools negates a fair estimate of product chain emissions. To address this, the correlation between ship emissions and size is quantified for a sample of United Kingdom (UK) port callings to estimate typical UK emission factors by ship type and size and to determine how well existing global data and available databases represent UK shipping activity. The results highlight that although ship type is a crucial determinant of emissions, vessel size is also important, particularly for smaller ships where the variance in emission factors is greatest. Existing, globally averaged data correlating ship size with emissions agree well with the UK data. However, the relatively higher proportion of smaller ships satisfying a UK demand for short sea shipping results in a skew towards higher typical emission factors, principally within the general cargo, product and chemical tanker categories. This bias is most visible when emissions per individual ship calling are estimated. Incorporating
Directory of Open Access Journals (Sweden)
Silje Steinsbekk
2017-11-01
Full Text Available Individuals who are overweight are more likely to underestimate their body size than those who are normal weight, and overweight underestimators are less likely to engage in weight loss efforts. Underestimation of body size might represent a barrier to prevention and treatment of overweight; thus insight into how underestimation of body size develops and tracks through the childhood years is needed. The aim of the present study was therefore to examine stability in children’s underestimation of body size, exploring predictors of underestimation over time. The prospective path from underestimation to BMI was also tested. In a Norwegian cohort of 6-year-olds, followed up at ages 8 and 10 (analysis sample: n = 793), body size estimation was captured by the Children’s Body Image Scale; height and weight were measured and BMI calculated. Overall, children were more likely to underestimate than overestimate their body size. Individual stability in underestimation was modest, but significant. Higher BMI predicted future underestimation, even when previous underestimation was adjusted for, but there was no evidence for the opposite direction of influence. Boys were more likely than girls to underestimate their body size at ages 8 and 10 (age 8: 38.0% vs. 24.1%; age 10: 57.9% vs. 30.8%) and showed a steeper increase in underestimation with age compared to girls. In conclusion, the majority of 6-, 8-, and 10-year-olds correctly estimate their body size (prevalence ranging from 40 to 70% depending on age and gender), although a substantial portion perceived themselves to be thinner than they actually were. Higher BMI forecasted future underestimation, but underestimation did not increase the risk for excessive weight gain in middle childhood.
International Nuclear Information System (INIS)
Capouchová, I.; Petr, J.; Marešová, D.
2003-01-01
The distribution of the size of wheat starch granules was determined using the LALLS method (Low Angle Laser Light Scattering), followed by an evaluation of the effect of variety, experimental site and intensity of cultivation on the vol. % of starch A (starch granules > 10 μm). The total starch content and crude protein content in dry matter of flour T530 were determined in a selected collection of five winter wheat varieties. The vol. % of starch A in the evaluated collection of wheat varieties varied between 65.31 and 72.34%. The effect of variety on the vol. % of starch A appeared more marked than the effects of site and intensity of cultivation. The highest vol. % of starch A was reached by the evaluated varieties from quality group C, i.e. varieties unsuitable for baking use (except the variety Contra, with a high total starch content in dry matter of flour T530 but a relatively low vol. % of starch A). A low vol. % of starch A was also found in the variety Hana (a very good variety for baking use). Certain varietal differences emerged from the evaluation of the distribution of the starch-granule fractions forming starch A. For the varieties Hana, Contra and Siria, a higher representation of fractions up to 30 μm was recorded, while starch A in the varieties Estica and Versailles was formed to a higher degree by granule size fractions over 30 μm; in particular, the size fraction > 50 μm was greatest in these varieties of all the evaluated samples. With increasing total starch content in dry matter of flour T530 the crude protein content decreased; the vol. % of starch A did not always increase proportionally with increasing total starch content. (author)
Altschuler, Justin; Margolius, David; Bodenheimer, Thomas; Grumbach, Kevin
2012-01-01
PURPOSE Primary care faces the dilemma of excessive patient panel sizes in an environment of a primary care physician shortage. We aimed to estimate primary care panel sizes under different models of task delegation to nonphysician members of the primary care team. METHODS We used published estimates of the time it takes for a primary care physician to provide preventive, chronic, and acute care for a panel of 2,500 patients, and modeled how panel sizes would change if portions of preventive and chronic care services were delegated to nonphysician team members. RESULTS Using 3 assumptions about the degree of task delegation that could be achieved (77%, 60%, and 50% of preventive care, and 47%, 30%, and 25% of chronic care), we estimated that a primary care team could reasonably care for a panel of 1,947, 1,523, or 1,387 patients. CONCLUSIONS If portions of preventive and chronic care services are delegated to nonphysician team members, primary care practices can provide recommended preventive and chronic care with panel sizes that are achievable with the available primary care workforce.
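The panel-size arithmetic described in METHODS can be sketched as follows. This is a hedged reconstruction of the general approach (scale the panel so the physician's remaining direct-care time fits the available clinical hours), not the paper's exact model; the hour figures and delegation fractions used here are hypothetical inputs, not the published estimates.

```python
def panel_size(base_panel, hours_available, prev_h, chron_h, acute_h, d_prev, d_chron):
    """Scale the base panel so remaining physician hours fit the hours available.
    prev_h/chron_h/acute_h: physician hours per day for the base panel;
    d_prev/d_chron: fractions of preventive and chronic work delegated."""
    remaining = prev_h * (1 - d_prev) + chron_h * (1 - d_chron) + acute_h
    return base_panel * hours_available / remaining

# hypothetical inputs (not the paper's published figures)
p = panel_size(2500, 8.0, prev_h=7.4, chron_h=10.6, acute_h=4.6,
               d_prev=0.77, d_chron=0.47)
```

With these illustrative numbers the supportable panel lands in the high 1,600s; the paper's own time estimates and delegation scenarios yield its reported panels of 1,947, 1,523, and 1,387.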
Prasifka, Jarrad R; Lopez, Miriam D; Hellmich, Richard L; Prasifka, Patricia L
2008-01-01
Estimates of arthropod population size may paradoxically increase following insecticide applications. Research with ground beetles (Coleoptera: Carabidae) suggests that such unusual results reflect increased arthropod movement and capture in traps rather than real changes in population size. However, it is unclear whether direct (hyperactivity) or indirect (prey-mediated) mechanisms produce the increased movement. Video tracking of Scarites quadriceps Chaudoir indicated that brief exposure to lambda-cyhalothrin or tefluthrin increased total distance moved, maximum velocity and percentage of time moving. Repeated measurements on individual beetles indicated that movement decreased 240 min after initial lambda-cyhalothrin exposure, but increased again following a second exposure, suggesting hyperactivity could lead to increased trap captures in the field. Two field experiments in which ground beetles were collected after lambda-cyhalothrin or permethrin application attempted to detect increases in population size estimates as a result of hyperactivity. Field trials used mark-release-recapture methods in small plots and natural carabid populations in larger plots, but found no significant short-term (<6 day) increases in beetle trap captures. The disagreement between laboratory and field results suggests mechanisms other than hyperactivity may better explain unusual changes in population size estimates. When traps are used as a primary sampling tool, unexpected population-level effects should be interpreted carefully or with additional data less influenced by arthropod activity.
See food diet? Cultural differences in estimating fullness and intake as a function of plate size.
Peng, Mei; Adam, Sarah; Hautus, Michael J; Shin, Myoungju; Duizer, Lisa M; Yan, Huiquan
2017-10-01
Previous research has suggested that manipulations of plate size can have a direct impact on perception of food intake, measured by estimated fullness and intake. The present study, involving 570 individuals across Canada, China, Korea, and New Zealand, is the first empirical study to investigate cultural influences on perception of food portion as a function of plate size. The respondents viewed photographs of ten culturally diverse dishes presented on large (27 cm) and small (23 cm) plates, and then rated their estimated usual intake and expected fullness after consuming the dish, using 100-point visual analog scales. The data were analysed with a mixed-model ANCOVA controlling for individual BMI, liking and familiarity of the presented food. The results showed clear cultural differences: (1) manipulations of the plate size had no effect on the expected fullness or the estimated intake of the Chinese and Korean respondents, as opposed to significant effects in Canadians and New Zealanders (p Asian respondents. Overall, these findings, from a cultural perspective, support the notion that estimation of fullness and intake are learned through dining experiences, and highlight the importance of considering eating environments and contexts when assessing individual behaviours relating to food intake. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
M. P. Jarabo-Amores
2010-01-01
Full Text Available The existence of clutter in maritime radars deteriorates the estimation of some physical parameters of the objects detected over the sea surface. For that reason, maritime radars should incorporate efficient clutter reduction techniques. Due to the intrinsically nonlinear dynamics of sea clutter, nonlinear signal processing is needed, which can be achieved by artificial neural networks (ANNs). In this paper, an estimation of the ship size using an ANN-based clutter reduction system followed by a fixed threshold is proposed. High clutter reduction rates are achieved using 1-dimensional (horizontal or vertical) integration modes, although inaccurate ship width estimations are obtained. These estimations are improved using a 2-dimensional (rhombus) integration mode. The proposed system is compared with a CA-CFAR system, showing a great performance improvement and great robustness against changes in sea clutter conditions and ship parameters, independently of the direction of movement of the ocean waves and ships.
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
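The factors listed above combine in the standard sample-size formula for estimating a proportion, n = z²·p·(1 − p)/d², where p is the expected proportion, d the required precision, and z the normal deviate for the chosen confidence level. A small sketch, including the usual finite-population correction for small study populations:

```python
import math

def sample_size_proportion(p, d, z=1.96):
    """n = z^2 * p * (1 - p) / d^2 -- precision d at ~95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

def fpc(n, population):
    """Finite-population correction: shrink n when the target population is small."""
    return math.ceil(n / (1 + (n - 1) / population))

n = sample_size_proportion(p=0.5, d=0.05)  # p = 0.5 is the most conservative case
n_small_pop = fpc(n, population=1000)
```

Using p = 0.5 maximizes p(1 − p), so it gives the largest (safest) sample size when the true proportion is unknown.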
A simple nomogram for sample size for estimating sensitivity and specificity of medical tests
Directory of Open Access Journals (Sweden)
Malhotra Rajeev
2010-01-01
Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields a statistically valid sample size for an anticipated sensitivity or specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram for varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable for hypothesis-testing setups and applies only when both the diagnostic test and the gold standard give dichotomous results.
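The "formula already available in the literature" behind such nomograms is, in its standard (Buderer-type) form, the proportion formula scaled by prevalence: the precision applies only to the diseased subjects for sensitivity (and the non-diseased for specificity), so the total sample must be inflated accordingly. A sketch, with illustrative inputs; it also checks the abstract's 0.70 and 1.75 shortcut multipliers, which are just the squared ratios of the z-values:

```python
import math

def n_for_sensitivity(sn, d, prev, z=1.96):
    """Total subjects so that sensitivity is estimated to within +/- d:
    diseased subjects needed = z^2 * Sn(1-Sn) / d^2, divided by prevalence."""
    return math.ceil(z ** 2 * sn * (1 - sn) / (d ** 2 * prev))

def n_for_specificity(sp, d, prev, z=1.96):
    """Same idea for specificity, scaled by the non-diseased fraction."""
    return math.ceil(z ** 2 * sp * (1 - sp) / (d ** 2 * (1 - prev)))

n95 = n_for_sensitivity(sn=0.90, d=0.05, prev=0.20)
# the abstract's shortcut multipliers are (z90/z95)^2 and (z99/z95)^2
mult_90 = (1.645 / 1.96) ** 2
mult_99 = (2.576 / 1.96) ** 2
```

Note how low prevalence drives the total sample size up: most recruited subjects do not contribute to the sensitivity estimate.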
Han, Young-Soo; Mao, Xiadong; Jang, Jinsung
2013-11-01
The nano-sized microstructures in Fe-Cr oxide dispersion strengthened steel for Gen IV in-core applications were studied using small angle neutron scattering. The oxide dispersion strengthened steel was manufactured through hot isostatic pressing with various chemical compositions and fabrication conditions. Small angle neutron scattering experiments were performed using the 40 m small angle neutron scattering instrument at HANARO. Nano-sized microstructures, namely yttrium oxides and Cr-oxides, were quantitatively analyzed by small angle neutron scattering. The yttrium oxides and Cr-oxides were also observed by transmission electron microscopy. The microstructural analysis results from small angle neutron scattering were compared with those obtained by transmission electron microscopy. The effects of the chemical compositions and fabrication conditions on the microstructure were investigated in relation to the quantitative microstructural analysis results obtained by small angle neutron scattering. The volume fraction of Y-oxide increases after fabrication, and this result is considered to be due to the formation of non-stoichiometric Y-Ti-oxides.
Evaluating the performance of species richness estimators: sensitivity to sample grain size
DEFF Research Database (Denmark)
Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara
2006-01-01
…and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874) and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar… Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals, to assess the effect of using…
Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert
2018-04-01
Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.
International Nuclear Information System (INIS)
Bourassa, A.E.; Degenstein, D.A.; Llewellyn, E.J.
2008-01-01
The inversion of satellite-based observations of limb-scattered sunlight for the retrieval of constituent species requires efficient and accurate modelling of the measurement. We present the development of the SASKTRAN radiative transfer model for the prediction of limb scatter measurements at optical wavelengths by the method of successive orders along rays traced in a spherical atmosphere. The component of the signal due to the first two scattering events of the solar beam is accounted for directly along rays traced in the three-dimensional geometry. Simplifying assumptions at successive scattering orders provide computational optimizations without severely compromising the accuracy of the solution. SASKTRAN is designed for the analysis of measurements from the OSIRIS instrument, and the implementation of the algorithm is efficient enough that the code is suitable for the inversion of OSIRIS profiles on desktop computers. SASKTRAN total limb radiance profiles generally compare better with Monte-Carlo reference models over a large range of solar conditions than the approximate spherical and plane-parallel models typically used for inversions.
Estimation of break location and size for loss of coolant accidents using neural networks
International Nuclear Information System (INIS)
Na, Man Gyun; Shin, Sun Ho; Jung, Dong Won; Kim, Soong Pyung; Jeong, Ji Hwan; Lee, Byung Chul
2004-01-01
In this work, a probabilistic neural network (PNN), which has been applied successfully to classification problems, is used to identify the break locations of loss of coolant accidents (LOCA), such as the hot-leg, cold-leg and steam generator tubes. Also, a fuzzy neural network (FNN) is designed to estimate the break size. The inputs to the PNN and FNN are time-integrated values obtained by integrating measurement signals during a short time interval after reactor scram. An automatic structure constructor for the fuzzy neural network automatically selects the input variables from the time-integrated values of many measured signals, and optimizes the number of rules and their related parameters. It is verified that the proposed algorithm identifies the break locations of LOCAs very well and also estimates their break sizes accurately.
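A PNN of the kind used for the break-location classification is essentially a Parzen-window classifier: one Gaussian kernel per training pattern, a class score equal to the mean kernel activation, and an argmax decision. A compact sketch; the "time-integrated sensor signal" feature vectors below are toy stand-ins, not plant data:

```python
import math

def pnn_classify(train, x, sigma=0.5):
    """Probabilistic neural network: class score = mean Gaussian kernel
    activation over that class's training patterns; predict the argmax class.
    `train` maps class label -> list of feature vectors."""
    def score(patterns):
        return sum(
            math.exp(-sum((a - b) ** 2 for a, b in zip(p, x)) / (2 * sigma ** 2))
            for p in patterns
        ) / len(patterns)
    return max(train, key=lambda c: score(train[c]))

# toy stand-ins for time-integrated signals after scram (hypothetical values)
train = {
    "hot-leg":  [(0.9, 0.1), (1.0, 0.2), (0.8, 0.15)],
    "cold-leg": [(0.2, 0.9), (0.1, 1.0), (0.15, 0.85)],
    "SG tube":  [(0.5, 0.5), (0.55, 0.45), (0.45, 0.55)],
}
label = pnn_classify(train, (0.95, 0.12))
```

The single smoothing parameter sigma is the only thing to tune, which is one reason PNNs train quickly on classification problems like this.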
Plot size recommendations for biomass estimation in a midwestern old-growth forest
Martin A. Spetich; George R Parker
1998-01-01
The authors examine the relationship between disturbance regime and plot size for woody biomass estimation in a midwestern old-growth deciduous forest from 1926 to 1992. Analysis was done on the core 19.6 ac of a 50.1 ac forest in which every tree 4 in. d.b.h. and greater has been tagged and mapped since 1926. Five windows of time are compared: 1926, 1976, 1981, 1986...
A simple shape-free model for pore-size estimation with positron annihilation lifetime spectroscopy
International Nuclear Information System (INIS)
Wada, Ken; Hyodo, Toshio
2013-01-01
Positron annihilation lifetime spectroscopy is one of the methods for estimating pore size in insulating materials. We present a shape-free model to be used conveniently for such analysis. A basic model in the classical picture is modified by introducing a parameter corresponding to an effective size of the positronium (Ps). This parameter is adjusted so that its Ps-lifetime-to-pore-size relation merges smoothly with that of the well-established Tao-Eldrup model (with a modification involving the intrinsic Ps annihilation rate) applicable to very small pores. The combined model, i.e., the modified Tao-Eldrup model for smaller pores and the modified classical model for larger pores, agrees surprisingly well with the quantum-mechanics-based extended Tao-Eldrup model, which deals with Ps trapped in, and in thermal equilibrium with, a rectangular pore.
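For context, the well-established Tao-Eldrup relation referenced here (with the intrinsic ortho-Ps rate included) can be sketched as follows. The constants are the commonly quoted ones (ΔR ≈ 0.166 nm electron-layer thickness, 2 ns⁻¹ spin-averaged annihilation rate, 142 ns intrinsic o-Ps lifetime); this is the generic small-pore formula, not the authors' shape-free extension.

```python
import math

DELTA_R = 0.166       # nm, empirical electron-layer thickness
LAMBDA_A = 2.0        # 1/ns, spin-averaged annihilation rate in the layer
LAMBDA_T = 1 / 142.0  # 1/ns, intrinsic ortho-Ps (triplet) annihilation rate

def ops_lifetime_ns(radius_nm):
    """Tao-Eldrup o-Ps lifetime (modified with the intrinsic triplet rate)
    for a spherical pore of the given radius: the pick-off rate is LAMBDA_A
    times the Ps overlap with the electron layer of thickness DELTA_R."""
    r0 = radius_nm + DELTA_R
    overlap = 1 - radius_nm / r0 + math.sin(2 * math.pi * radius_nm / r0) / (2 * math.pi)
    return 1 / (LAMBDA_A * overlap + LAMBDA_T)
```

The lifetime grows monotonically with pore radius and saturates toward the 142 ns vacuum limit, which is precisely why the plain model fails for large pores and motivates extensions like the one in this paper.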
Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas
2003-07-01
Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.
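The overestimation mechanism described above is easy to reproduce in a toy simulation: with a modest per-locus error rate, many samples acquire novel multilocus genotypes, inflating the count of distinct genotypes. All parameters below are invented for illustration (one allele call per locus, not the Yellowstone data):

```python
import random

def distinct_genotypes(n_individuals=50, n_samples=200, n_loci=8,
                       err_rate=0.05, seed=1):
    """Count distinct observed multilocus genotypes when each locus is
    miscalled with probability err_rate (simplified illustration)."""
    rng = random.Random(seed)
    # true genotypes: one allele code per locus per individual
    truth = [tuple(rng.randrange(10) for _ in range(n_loci))
             for _ in range(n_individuals)]
    observed = set()
    for _ in range(n_samples):
        g = list(rng.choice(truth))          # sample one individual's faeces
        for locus in range(n_loci):
            if rng.random() < err_rate:      # genotyping error at this locus
                g[locus] = rng.randrange(10)
        observed.add(tuple(g))
    return len(observed)
```

With no errors the distinct-genotype count can never exceed the true population size; with a 5% per-locus error rate it routinely does, which is exactly the bias the 'matching approach' is designed to remove.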
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
Department of Homeland Security — This report presents estimates of the size and characteristics of the resident nonimmigrant population in the United States. The estimates are daily averages for the...
Directory of Open Access Journals (Sweden)
John M Pleasants
Full Text Available To assess the change in the size of the eastern North American monarch butterfly summer population, studies have used long-term data sets of counts of adult butterflies or eggs per milkweed stem. Despite the observed decline in the monarch population as measured at overwintering sites in Mexico, these studies found no decline in summer counts in the Midwest, the core of the summer breeding range, leading to a suggestion that the cause of the monarch population decline is not the loss of Midwest agricultural milkweeds but increased mortality during the fall migration. Using these counts to estimate population size, however, does not account for the shift of monarch activity from agricultural fields to non-agricultural sites over the past 20 years, as a result of the loss of agricultural milkweeds due to the near-ubiquitous use of glyphosate herbicides. We present the counter-hypotheses that the proportion of the monarch population present in non-agricultural habitats, where counts are made, has increased and that counts reflect both population size and the proportion of the population observed. We use data on the historical change in the proportion of milkweeds, and thus monarch activity, in agricultural fields and non-agricultural habitats to show why using counts can produce misleading conclusions about population size. We then separate out the shifting proportion effect from the counts to estimate the population size and show that these corrected summer monarch counts show a decline over time and are correlated with the size of the overwintering population. In addition, we present evidence against the hypothesis of increased mortality during migration. The milkweed limitation hypothesis for monarch decline remains supported and conservation efforts focusing on adding milkweeds to the landscape in the summer breeding region have a sound scientific basis.
Hare, Matthew P; Nunney, Leonard; Schwartz, Michael K; Ruzzante, Daniel E; Burford, Martha; Waples, Robin S; Ruegg, Kristen; Palstra, Friso
2011-06-01
Effective population size (Ne) determines the strength of genetic drift in a population and has long been recognized as an important parameter for evaluating conservation status and threats to genetic health of populations. Specifically, an estimate of Ne is crucial to management because it integrates genetic effects with the life history of the species, allowing for predictions of a population's current and future viability. Nevertheless, compared with ecological and demographic parameters, Ne has had limited influence on species management, beyond its application in very small populations. Recent developments have substantially improved Ne estimation; however, some obstacles remain for the practical application of Ne estimates. For example, the need to define the spatial and temporal scale of measurement makes the concept complex and sometimes difficult to interpret. We reviewed approaches to estimation of Ne over both long-term and contemporary time frames, clarifying their interpretations with respect to local populations and the global metapopulation. We describe multiple experimental factors affecting robustness of contemporary Ne estimates and suggest that different sampling designs can be combined to compare largely independent measures of Ne for improved confidence in the result. Large populations with moderate gene flow pose the greatest challenges to robust estimation of contemporary Ne and require careful consideration of sampling and analysis to minimize estimator bias. We emphasize the practical utility of estimating Ne by highlighting its relevance to the adaptive potential of a population and describing applications in management of marine populations, where the focus is not always on critically endangered populations. Two cases discussed include the mechanisms generating Ne estimates many orders of magnitude lower than census N in harvested marine fishes and the predicted reduction in Ne from hatchery-based population
Ragagnin, Marilia Nagata; Gorman, Daniel; McCarthy, Ian Donald; Sant'Anna, Bruno Sampaio; de Castro, Cláudio Campi; Turra, Alexander
2018-01-11
Obtaining accurate and reproducible estimates of internal shell volume is a vital requirement for studies into the ecology of a range of shell-occupying organisms, including hermit crabs. Shell internal volume is usually estimated by filling the shell cavity with water or sand; however, there has been no systematic assessment of the reliability of these methods and moreover no comparison with modern alternatives, e.g., computed tomography (CT). This study undertakes the first assessment of the measurement reproducibility of three contrasting approaches across a spectrum of shell architectures and sizes. While our results suggested a certain level of variability inherent to all methods, we conclude that a single measure using sand/water is likely to be sufficient for the majority of studies. However, care must be taken, as precision may decline with increasing shell size and structural complexity. CT provided less variation between repeat measures, but volume estimates were consistently lower than those from sand/water, and the technique will need methodological improvements before it can be used as an alternative. CT indicated that volume may also be underestimated using sand/water, due to the presence of air spaces visible in filled shells scanned by CT. Lastly, we encourage authors to clearly describe how volume estimates were obtained.
Estimating the size of juvenile fish populations in southeastern coastal-plain estuaries
International Nuclear Information System (INIS)
Kjelson, M.A.
1977-01-01
Understanding the ecological significance of man's activities upon fishery resources requires information on the size of affected fish stocks. The objective of this paper is to provide information to evaluate and plan sampling programs designed to obtain accurate and precise estimates of fish abundance. Nursery habitats, such as marsh tidal creeks and submerged grass beds, offer the optimal conditions for estimating natural mortality rates for young-of-the-year fish in Atlantic and Gulf of Mexico coast estuaries. The area-density method of abundance estimation using quantitative gears is more feasible than either mark-recapture or direct-count techniques. The blockage method provides the most accurate estimates, while encircling devices enable highly mobile species found in open water to be captured. Drop nets and lift nets allow samples to be taken in obstructed sites, but trawls and seines are the most economical gears. Replicate samples are necessary to improve the precision of density estimates, while evaluation and use of gear-catch efficiencies is feasible and required to improve the accuracy of density estimates. Coefficients of variation for replicate trawl samples range from 50 to 150 percent, while catch efficiencies for both trawls and seines for many juvenile fishes range from approximately 30 to 70 percent.
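The area-density calculation described here (mean replicate catch, corrected by gear catch efficiency, scaled to habitat area) can be sketched as follows; the function and parameter names are illustrative, not from the paper:

```python
from statistics import mean, stdev

def area_density_abundance(counts, swept_area_m2, habitat_area_m2,
                           catch_efficiency):
    """Abundance = (mean catch / (area swept * catch efficiency)) * habitat
    area. catch_efficiency is a fraction, e.g. 0.3-0.7 as reported above."""
    density = mean(counts) / (swept_area_m2 * catch_efficiency)  # fish per m^2
    return density * habitat_area_m2

def coefficient_of_variation_pct(counts):
    """CV (%) of replicate samples, the precision measure quoted for trawls."""
    return 100.0 * stdev(counts) / mean(counts)
```

For example, replicate catches of 10, 20 and 30 fish over 100 m² swept at 50% efficiency imply about 0.4 fish/m², or 4000 fish over a 1-hectare nursery.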
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve of C-peptide following a 2-hour mixed meal tolerance test was modelled for 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, from baseline to 12 months after enrollment, to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve of C-peptide following a 2-h mixed meal tolerance test was modelled for 498 individuals enrolled in five prior TrialNet studies of recent onset type 1 diabetes, from baseline to 12 months after enrolment, to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
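The roughly 50% sample-size reduction follows from the lower residual variance: in the usual normal-approximation formula, the required n scales with σ². A hedged sketch of that generic two-arm formula (not the authors' exact calculation, which is ANCOVA-based):

```python
from math import ceil
from statistics import NormalDist

def per_arm_sample_size(sd, effect, alpha=0.05, power=0.9):
    """n per arm for a two-sample comparison of means, normal approximation:
    n = 2 * (z_{1-alpha/2} + z_power)^2 * sd^2 / effect^2."""
    z = NormalDist()
    n = (2.0 * (z.inv_cdf(1.0 - alpha / 2.0) + z.inv_cdf(power)) ** 2
         * sd ** 2 / effect ** 2)
    return ceil(n)
```

Because n is proportional to sd², halving the residual variance (sd → sd/√2) roughly halves the target sample size, the magnitude of saving the abstract reports.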
International Nuclear Information System (INIS)
Liu, Chao; Lee Panetta, R.; Yang, Ping
2013-01-01
Effects of surface roughness on the optical scattering properties of ice crystals are investigated using a random wave superposition model of roughness that is a simplification of models used in studies of scattering by surface water waves. Unlike previous work with models of rough surfaces applicable only in limited size ranges, such as surface perturbation methods in the small particle regime or the tilted-facet (TF) method in the large particle regime, ours uses a single roughness model to cover a range in sizes extending from the Rayleigh to the geometric optics regimes. The basic crystal shape we examine is the hexagonal column but our roughening model can be used for a wide variety of particle geometries. To compute scattering properties over the range of sizes we use the pseudo-spectral time domain method (PSTD) for small to moderate sized particles and the improved geometric optics method (IGOM) for large ones. Use of the PSTD with our roughness model is straightforward. By discretizing the roughened surface with triangular sub-elements, we adapt the IGOM to give full consideration of shadow effects, multiple reflections/refractions at the surface, and possible reentrance of the scattered beams. We measure the degree of roughness of a surface by the variance (σ²) of surface slopes occurring on the surfaces. For moderately roughened surfaces (σ² ≤ 0.1) in the large particle regime, the scattering properties given by the TF and IGOM agree well, but differences in results obtained with the two methods become noticeable as the surface becomes increasingly roughened. Having a definite, albeit idealized, roughness model we are able to use the combination of the PSTD and IGOM to examine how a fixed degree of surface roughness affects the scattering properties of a particle as the size parameter of the particle changes. We find that for moderately rough surfaces in our model, as particle size parameter increases beyond about 20 the influence of surface
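The roughness measure used above, the variance σ² of surface slopes, has a closed form for a superposition of cosine waves: each component a·cos(kx + φ) contributes (a·k)²/2 to the mean-square slope. A one-dimensional, purely illustrative sketch (the paper's model roughens crystal facets; names and parameters here are invented):

```python
import math
import random

def random_wave_surface(n_waves=20, amplitude=0.01, seed=0):
    """Return (z, waves): a rough profile z(x) built by superposing cosine
    waves with random phases, plus the (amplitude, wavenumber, phase) list."""
    rng = random.Random(seed)
    waves = [(amplitude, k, rng.uniform(0.0, 2.0 * math.pi))
             for k in range(1, n_waves + 1)]
    def z(x):
        return sum(a * math.cos(k * x + p) for a, k, p in waves)
    return z, waves

def slope_variance(waves):
    """sigma^2 of surface slopes: each cosine contributes (a*k)^2 / 2,
    independent of its random phase."""
    return sum((a * k) ** 2 for a, k, _ in waves) / 2.0
```

Because the phases drop out, σ² is fixed by the chosen amplitudes and wavenumbers, which is what lets a "fixed degree of roughness" be held constant while particle size varies.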
Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials
Niu, Qifei; Zhang, Chi
2018-03-01
There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of target variables. Therefore, simultaneous processing of multiple data sets could potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when the knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution of the samples, including the general shape and relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample could be extracted with the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion could improve the resolution of the estimated pore size distribution because of the addition of extra data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
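The Gauss-Newton iteration named above is generic: linearize the residual at the current estimate and solve a linear least-squares problem for the update. A compact sketch of the solver only (the paper's coupled NMR-SIP forward model is not reproduced here):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20):
    """Minimize ||residual(x)||^2 by Gauss-Newton: at each step solve the
    linearized least-squares problem J @ step = -r for the update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)                       # data misfit at current model
        J = jacobian(x)                       # sensitivity matrix
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x
```

As a usage example, fitting the decay rate a in y = exp(a·t) from noiseless data recovers the true rate; in the joint-inversion setting, r would stack the NMR and SIP misfits and x would hold the discretized pore size distribution (with regularization added).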
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
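The test being powered here is itself compact when written against a pairwise distance matrix. A sketch of the pseudo-F statistic and its permutation p-value in the standard Anderson formulation (this is the analysis, not the authors' simulation framework or R package):

```python
import itertools
import random

def permanova_pseudo_f(dist, labels):
    """Pseudo-F from a symmetric pairwise distance matrix:
    SS_total = (1/N) sum_{i<j} d_ij^2; SS_within sums the analogous
    quantity per group; F = (SS_between/(a-1)) / (SS_within/(N-a))."""
    n = len(labels)
    ss_total = sum(dist[i][j] ** 2
                   for i in range(n) for j in range(i + 1, n)) / n
    ss_within = 0.0
    groups = sorted(set(labels))
    for g in groups:
        idx = [i for i, lab in enumerate(labels) if lab == g]
        ss_within += sum(dist[i][j] ** 2
                         for i, j in itertools.combinations(idx, 2)) / len(idx)
    a = len(groups)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_pvalue(dist, labels, n_perm=999, seed=0):
    """Permutation p-value: shuffle labels, recompute F, count exceedances."""
    rng = random.Random(seed)
    f_obs = permanova_pseudo_f(dist, labels)
    count = sum(permanova_pseudo_f(dist, rng.sample(labels, len(labels))) >= f_obs
                for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)
```

Power estimation then amounts to repeating this p-value computation over many simulated distance matrices with a built-in group effect and counting rejections.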
Sai Bharadwaj, P.; Kumar, Shashi; Kushwaha, S. P. S.; Bijker, Wietske
Forests are important biomes covering a major part of the vegetation on the Earth, and as such account for seventy percent of the carbon present in living beings. The value of a forest's above ground biomass (AGB) is considered as an important parameter for the estimation of global carbon content. In the present study, the quad-pol ALOS-PALSAR data was used for the estimation of AGB for the Dudhwa National Park, India. For this purpose, polarimetric decomposition components and an Extended Water Cloud Model (EWCM) were used. The PolSAR data orientation angle shifts were compensated for before the polarimetric decomposition. The scattering components obtained from the polarimetric decomposition were used in the Water Cloud Model (WCM). The WCM was extended for higher order interactions like double bounce scattering. The parameters of the EWCM were retrieved using the field measurements and the decomposition components. Finally, the relationship between the estimated AGB and measured AGB was assessed. The coefficient of determination (R²) and root mean square error (RMSE) were 0.4341 and 119 t/ha respectively.
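The Water Cloud Model underlying the EWCM has a simple basic (non-extended) form: backscatter is a vegetation term plus a canopy-attenuated ground term, which can be inverted analytically for biomass. A sketch with coefficients invented purely for illustration (the EWCM adds double-bounce terms not shown here):

```python
import math

def wcm_backscatter(agb, b, sigma_veg, sigma_ground):
    """Basic Water Cloud Model: sigma0 = sv*(1 - exp(-b*W)) + sg*exp(-b*W),
    where W is above-ground biomass and b the canopy attenuation coefficient."""
    attenuation = math.exp(-b * agb)
    return sigma_veg * (1.0 - attenuation) + sigma_ground * attenuation

def wcm_invert_agb(sigma, b, sigma_veg, sigma_ground):
    """Invert for AGB: exp(-b*W) = (sigma - sv) / (sg - sv)."""
    return -math.log((sigma - sigma_veg) / (sigma_ground - sigma_veg)) / b
```

The round trip (forward model then inversion) recovers the input biomass exactly, which is what makes the basic WCM attractive before extensions for double bounce are added.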
Size and shape of soil humic acids estimated by viscosity and molecular weight.
Kawahigashi, Masayuki; Sumida, Hiroaki; Yamamoto, Kazuhiko
2005-04-15
Ultrafiltration fractions of three soil humic acids were characterized by viscometry and high-performance size-exclusion chromatography (HPSEC) in order to estimate shapes and hydrodynamic sizes. Intrinsic viscosities under given solute/solvent/temperature conditions were obtained by extrapolating the concentration dependence of reduced viscosities to zero concentration. Molecular mass (weight-average molecular weight (Mw) and number-average molecular weight (Mn)) and hydrodynamic radius (RH) were determined by HPSEC using pullulan as calibrant. Values of Mw and Mn ranged from 15 to 118 × 10³ and from 9 to 50 × 10³ g mol⁻¹, respectively. Polydispersity, as indicated by Mw/Mn, increased with increasing filter size from 1.5 to 2.4. The hydrodynamic radii (RH) ranged between 2.2 and 6.4 nm. For each humic acid, Mw and [η] were related. Mark-Houwink coefficients calculated on the basis of the Mw-[η] relationships suggested restricted flexible chains for two of the humic acids and a branched structure for the third humic acid. Those structures probably behave as hydrated sphere colloids in a good solvent. Hydrodynamic radii of fractions calculated from [η] using Einstein's equation, which is applicable to hydrated sphere colloids, ranged from 2.2 to 7.1 nm. These dimensions fit the size of nanospaces on and between clay minerals and micropores in soil particle aggregates. On the other hand, the good agreement of RH values obtained by applying Einstein's equation with those directly determined by HPSEC suggests that pullulan is a suitable calibrant for estimation of molecular mass and size of humic acids by HPSEC.
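Einstein's viscosity relation for hydrated spheres, [η] = 2.5·N_A·V_h/M, gives the hydrodynamic radius directly from the intrinsic viscosity and molar mass. A sketch with illustrative (not the paper's) input values:

```python
import math

AVOGADRO = 6.02214076e23  # mol^-1

def hydrodynamic_radius_nm(intrinsic_viscosity_ml_g, molar_mass_g_mol):
    """R_H (nm) from Einstein's relation for hydrated sphere colloids:
    [eta] = 2.5 * N_A * V_h / M  =>  V_h = [eta] * M / (2.5 * N_A)."""
    v_h_cm3 = intrinsic_viscosity_ml_g * molar_mass_g_mol / (2.5 * AVOGADRO)
    v_h_nm3 = v_h_cm3 * 1e21  # 1 cm^3 = 1e21 nm^3
    return (3.0 * v_h_nm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
```

For example, [η] = 5 mL/g at Mw = 50 × 10³ g/mol gives R_H ≈ 3.4 nm, which sits inside the 2.2-7.1 nm range the authors report.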
Grimm, Hans; Eatough, Delbert J
2009-01-01
The GRIMM model 1.107 monitor is designed to measure particle size distribution and particulate mass based on a light scattering measurement of individual particles in the sampled air. The design and operation of the instrument are described. Protocols used to convert the measured size number distribution to a mass concentration consistent with U.S. Environmental Protection Agency protocols for measuring particulate matter (PM) less than 10 microm (PM10) and less than 2.5 microm (PM2.5) in aerodynamic diameter are described. The performance of the resulting continuous monitor has been evaluated by comparing GRIMM monitor PM2.5 measurements with results obtained by the Rupprecht and Patashnick Co. (R&P) filter dynamic measurement system (FDMS). Data were obtained during month-long studies in Rubidoux, CA, in July 2003 and in Fresno, CA, in December 2003. The results indicate that the GRIMM monitor does respond to total PM2.5 mass, including the semi-volatile components, giving results comparable to the FDMS. The data also indicate that the monitor can be used to estimate water content of the fine particles. However, if the inlet to the monitor is heated, then the instrument measures only the nonvolatile material, more comparable to results obtained with a conventional heated filter tapered element oscillating microbalance (TEOM) monitor. A recent modification of the model 180, with a Nafion dryer at the inlet, measures total PM2.5 including the nonvolatile and semi-volatile components, but excluding fine particulate water. Model 180 was in agreement with FDMS data obtained in Lindon, UT, during January through February 2007.
GONe: Software for estimating effective population size in species with generational overlap
Coombs, J.A.; Letcher, B.H.; Nislow, K.H.
2012-01-01
GONe is a user-friendly, Windows-based program for estimating effective size (Ne) in populations with overlapping generations. It uses the Jorde-Ryman modification to the temporal method to account for age structure in populations. This method requires estimates of age-specific survival and birth rate and allele frequencies measured in two or more consecutive cohorts. Allele frequencies are acquired by reading in genotypic data from files formatted for either GENEPOP or TEMPOFS. For each interval between consecutive cohorts, Ne is estimated at each locus and over all loci. Furthermore, Ne estimates are output for three different genetic drift estimators (Fs, Fc and Fk). Confidence intervals are derived from a chi-square distribution with degrees of freedom equal to the number of independent alleles. GONe has been validated over a wide range of Ne values, and for scenarios where survival and birth rates differ between sexes, sex ratios are unequal and reproductive variances differ. GONe is freely available for download. © 2011 Blackwell Publishing Ltd.
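For context, the classical (non-age-structured) temporal estimator that Jorde and Ryman modified works as follows: compute a standardized variance of allele-frequency change between two samples (here Nei-Tajima's Fc, one of the three drift estimators GONe reports), subtract the sampling contributions, and invert. This sketch is the textbook version, not GONe's actual age-structured algorithm:

```python
def nei_tajima_fc(x, y):
    """Fc: mean over alleles of (xi - yi)^2 / ((xi + yi)/2 - xi*yi),
    for allele frequency lists x (time 0) and y (time t)."""
    terms = [(xi - yi) ** 2 / ((xi + yi) / 2.0 - xi * yi)
             for xi, yi in zip(x, y)]
    return sum(terms) / len(terms)

def temporal_ne(x, y, generations, s0, st):
    """Classical temporal Ne (sampling-corrected):
    Ne = t / (2 * (Fc - 1/(2*S0) - 1/(2*St))), with sample sizes S0, St."""
    f_adj = nei_tajima_fc(x, y) - 1.0 / (2.0 * s0) - 1.0 / (2.0 * st)
    return generations / (2.0 * f_adj)
```

The Jorde-Ryman modification replaces the simple t/(2F) relation with coefficients derived from the age-specific survival and birth rates mentioned above.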
A comparison study of size-specific dose estimate calculation methods
Energy Technology Data Exchange (ETDEWEB)
Parikh, Roshni A. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Michigan Health System, Department of Radiology, Ann Arbor, MI (United States); Wien, Michael A.; Jordan, David W.; Ciancibello, Leslie; Berlin, Sheila C. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Novak, Ronald D. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Rebecca D. Considine Research Institute, Children's Hospital Medical Center of Akron, Center for Mitochondrial Medicine Research, Akron, OH (United States); Klahr, Paul [CT Clinical Science, Philips Healthcare, Highland Heights, OH (United States); Soriano, Stephanie [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Washington, Department of Radiology, Seattle, WA (United States)
2018-01-15
The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. To compare the accuracy of thickness vs. weight measurement of body size to allow for the calculation of the size-specific dose estimate (SSDE) in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically most thin, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically most thick; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide
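Once an effective diameter is in hand, SSDE is a single multiplication by a size-dependent conversion factor. A sketch using the exponential fit from AAPM Report 204 for the 32-cm CTDI phantom; the coefficients below are quoted from memory of that report and should be verified against it (this is an illustration, never for clinical use):

```python
import math

def effective_diameter_cm(ap_cm, lat_cm):
    """Effective diameter = geometric mean of AP and lateral dimensions."""
    return math.sqrt(ap_cm * lat_cm)

def ssde_mgy(ctdi_vol_mgy, eff_diam_cm):
    """SSDE = f(d) * CTDIvol, with the AAPM Report 204 exponential fit for
    the 32-cm phantom (coefficients assumed, verify against the report)."""
    f = 3.704369 * math.exp(-0.03671937 * eff_diam_cm)
    return f * ctdi_vol_mgy
```

For an adult-sized patient of 20 cm AP by 30 cm lateral, the effective diameter is about 24.5 cm and the conversion factor about 1.5, i.e., the reported CTDIvol understates patient dose by roughly half.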
A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies
Directory of Open Access Journals (Sweden)
Hojin Moon
2002-12-01
Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
Sample size methods for estimating HIV incidence from cross-sectional surveys.
Konikoff, Jacob; Brookmeyer, Ron
2015-12-01
Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys, biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker-defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
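The core estimator behind these sample-size calculations is a one-liner: the prevalence of the early biomarker stage among the uninfected, divided by the mean time spent in that stage. A sketch with illustrative names (the paper's methods additionally propagate the uncertainty in that mean duration, which is not modeled here):

```python
def incidence_per_person_year(n_early_stage, n_uninfected, mean_duration_years):
    """Cross-sectional incidence approximation:
    incidence ~ (# in early biomarker stage) / (# uninfected * mean
    duration of the early stage, in years)."""
    return n_early_stage / (n_uninfected * mean_duration_years)
```

For example, 20 early-stage individuals among 1000 uninfected, with a 6-month mean early-stage duration, implies roughly 0.04 new infections per person-year; the survey sample size must be large enough that both the numerator count and the duration uncertainty keep this ratio precise.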
Determining an Estimate of an Equivalence Relation for Moderate and Large Sized Sets
Directory of Open Access Journals (Sweden)
Leszek Klukowski
2017-01-01
Full Text Available This paper presents two approaches to determining estimates of an equivalence relation on the basis of pairwise comparisons with random errors. Obtaining such an estimate requires the solution of a discrete programming problem which minimizes the sum of the differences between the form of the relation and the comparisons. The problem is NP hard and can be solved with the use of exact algorithms for sets of moderate size, i.e. about 50 elements. In the case of larger sets, i.e. at least 200 comparisons for each element, it is necessary to apply heuristic algorithms. The paper presents results (a statistical preprocessing) which enable us to determine the optimal or a near-optimal solution with acceptable computational cost. They include: the development of a statistical procedure producing comparisons with low probabilities of errors and a heuristic algorithm based on such comparisons. The proposed approach guarantees the applicability of such estimators for any size of set. (original abstract)
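To make the problem concrete: once preprocessing has driven comparison error rates low, one simple heuristic of the general kind discussed is to take the transitive closure of all "judged equivalent" pairs with union-find, yielding candidate equivalence classes. This is an illustrative sketch of that idea, not the authors' algorithm:

```python
def classes_from_comparisons(n, equivalent_pairs):
    """Union-find over pairs judged equivalent -> estimated equivalence
    classes over elements 0..n-1 (errors in the pairs propagate directly)."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in equivalent_pairs:
        parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

A single erroneous "equivalent" judgment merges two classes, which is why the statistical preprocessing step that lowers per-comparison error probabilities matters so much before any such closure is taken.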
Energy Technology Data Exchange (ETDEWEB)
Kim, Jun Hwee; Kim, Myung Joon; Lim, Sok Hwan; Lee, Mi Jung [Dept. of Radiology and Research Institute of Radiological Science, Severance Children's Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Kim, Ji Eun [Biostatistics Collaboration Unit, Yonsei University College of Medicine, Seoul (Korea, Republic of)
2013-08-15
To evaluate the relationship between anthropometric measurements and renal length and volume measured with ultrasound in Korean children who have morphologically normal kidneys, and to create simple equations to estimate the renal sizes using the anthropometric measurements. We examined 794 Korean children under 18 years of age including a total of 394 boys and 400 girls without renal problems. The maximum renal length (L) (cm), orthogonal anterior-posterior diameter (D) (cm) and width (W) (cm) of each kidney were measured on ultrasound. Kidney volume was calculated as 0.523 x L x D x W (cm³). Anthropometric indices including height (cm), weight (kg) and body mass index (m²/kg) were collected through a medical record review. We used linear regression analysis to create simple equations to estimate the renal length and the volume with those anthropometric indices that were mostly correlated with the US-measured renal sizes. Renal length showed the strongest significant correlation with patient height (R², 0.874 and 0.875 for the right and left kidneys, respectively, p < 0.001). Renal volume showed the strongest significant correlation with patient weight (R², 0.842 and 0.854 for the right and left kidneys, respectively, p < 0.001). The following equations were developed to describe these relationships with an estimated 95% range of renal length and volume (R², 0.826-0.884, p < 0.001): renal length = 2.383 + 0.045 x Height (± 1.135) and = 2.374 + 0.047 x Height (± 1.173) for the right and left kidneys, respectively; and renal volume = 7.941 + 1.246 x Weight (± 15.920) and = 7.303 + 1.532 x Weight (± 18.704) for the right and left kidneys, respectively. Scatter plots between height and renal length and between weight and renal volume have been established from Korean children and simple equations between them have been developed for use in clinical practice.
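The regression equations above translate directly into code. Coefficients are exactly as given in the abstract; the ± terms are the reported 95% ranges and are omitted here:

```python
def renal_length_cm(height_cm, side):
    """Estimated renal length (cm) from height, per the abstract's equations
    for Korean children: right = 2.383 + 0.045*H; left = 2.374 + 0.047*H."""
    return {"right": 2.383 + 0.045 * height_cm,
            "left": 2.374 + 0.047 * height_cm}[side]

def renal_volume_ml(weight_kg, side):
    """Estimated renal volume from weight: right = 7.941 + 1.246*W;
    left = 7.303 + 1.532*W."""
    return {"right": 7.941 + 1.246 * weight_kg,
            "left": 7.303 + 1.532 * weight_kg}[side]
```

For a 100-cm-tall, 10-kg child this gives a right renal length of about 6.9 cm and a left renal volume of about 22.6 mL, to be read against the ± ranges quoted in the abstract.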
Andrew J. Dennhardt; Adam E. Duerr; David Brandes; Todd E. Katzner
2015-01-01
Estimating population size is fundamental to conservation and management. Population size is typically estimated using survey data, computer models, or both. Some of the most extensive and often least expensive survey data are those collected by citizen-scientists. A challenge to citizen-scientists is that the vagility of many organisms can complicate data collection....
Anzehaee, Mohammad Mousavi; Haeri, Mohammad
2011-07-01
New estimators are designed based on the modified force balance model to estimate the detaching droplet size, detached droplet size, and mean value of droplet detachment frequency in a gas metal arc welding process. The proper droplet size for the process to be in the projected spray transfer mode is determined based on the modified force balance model and the designed estimators. Finally, the droplet size and the melting rate are controlled using two proportional-integral (PI) controllers to achieve high weld quality by retaining the transfer mode and generating appropriate signals as inputs of the weld geometry control loop. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
An engineering method for estimating notch-size effect in fatigue tests on steel
Kuhn, Paul; Hardrath, Herbert F
1952-01-01
Neuber's proposed method of calculating a practical factor of stress concentration for parts containing notches of arbitrary size depends on the knowledge of a "new material constant" which can be established only indirectly. In this paper, the new constant has been evaluated for a large variety of steels from fatigue tests reported in the literature, attention being confined to stresses near the endurance limit. Reasonably satisfactory results were obtained with the assumption that the constant depends only on the tensile strength of the steel. Even in cases where the notches were cracks of which only the depth was known, reasonably satisfactory agreement was found between calculated and experimental factors. It is also shown that the material constant can be used in an empirical formula to estimate the size effect on unnotched specimens tested in bending fatigue.
CATCH ESTIMATION AND SIZE DISTRIBUTION OF BILLFISHES LANDED IN PORT OF BENOA, BALI
Directory of Open Access Journals (Sweden)
Bram Setyadji
2012-06-01
Full Text Available Billfishes are generally considered a by-product of tuna longline fisheries and have high economic value in the market. So far, information on Indian Ocean billfish biology and fisheries, especially in Indonesia, is very limited. This research aimed to estimate the production and size distribution of billfishes landed in the port of Benoa during 2010 (February-December) through daily observation at the processing plants. The results showed that the landings were dominated by swordfish (Xiphias gladius, 54.9%), blue marlin (Makaira mazara, 17.8%) and black marlin (Makaira indica, 13.0%), followed by small amounts of striped marlin (Tetrapturus audax), sailfish (Istiophorus platypterus), and shortbill spearfish (Tetrapturus angustirostris). Individual billfish sizes generally ranged between 68 and 206 cm (PFL) and showed a negative allometric growth pattern, except for swordfish, which was isometric. Most of the billfish landed had not yet reached first sexual maturity.
Brand, Andrew; Bradley, Michael T
2016-02-01
Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernably decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.
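The headline finding (median CI width for d exceeding 1) is easy to reproduce for plausible sample sizes. The sketch below uses the standard large-sample approximation to the standard error of d; the exact interval would use the noncentral t distribution, so this is illustrative rather than the surveys' own procedure:

```python
import math

def ci_width_cohens_d(d, n1, n2, z=1.96):
    """Approximate 95% CI width for Cohen's d from two independent groups.

    Uses the common large-sample standard-error approximation:
    se = sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))).
    """
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return 2 * z * se

# With 20 participants per group -- a common size in psychology --
# even a medium effect (d = 0.5) carries a CI wider than 1.
```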
Estimating Effect Sizes and Expected Replication Probabilities from GWAS Summary Statistics
DEFF Research Database (Denmark)
Holland, Dominic; Wang, Yunpeng; Thompson, Wesley K
2016-01-01
Genome-wide Association Studies (GWAS) result in millions of summary statistics ("z-scores") for single nucleotide polymorphism (SNP) associations with phenotypes. These rich datasets afford deep insights into the nature and extent of genetic contributions to complex phenotypes such as psychiatric......-scores, as such knowledge would enhance causal SNP and gene discovery, help elucidate mechanistic pathways, and inform future study design. Here we present a parsimonious methodology for modeling effect sizes and replication probabilities, relying only on summary statistics from GWAS substudies, and a scheme allowing...... for estimating the degree of polygenicity of the phenotype and predicting the proportion of chip heritability explainable by genome-wide significant SNPs in future studies with larger sample sizes. We apply the model to recent GWAS of schizophrenia (N = 82,315) and putamen volume (N = 12,596), with approximately...
SpotCaliper: fast wavelet-based spot detection with accurate size estimation.
Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael
2016-04-15
SpotCaliper is a novel wavelet-based image-analysis software providing a fast automatic detection scheme for circular patterns (spots), combined with precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user can edit the results by modifying the measurements (in a semi-automated way) and extract data for further analysis. Fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. http://bigwww.epfl.ch/algorithms/spotcaliper/ zsuzsanna.puspoki@epfl.ch Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
International Nuclear Information System (INIS)
Zhang, Song; Rajamani, Rajesh
2016-01-01
This paper develops analytical sensing principles for estimation of circumferential size of a cylindrical surface using magnetic sensors. An electromagnet and magnetic sensors are used on a wearable band for measurement of leg size. In order to enable robust size estimation during rough real-world use of the wearable band, three estimation algorithms are developed based on models of the magnetic field variation over a cylindrical surface. The magnetic field models developed include those for a dipole and for a uniformly magnetized cylinder. The estimation algorithms used include a linear regression equation, an extended Kalman filter and an unscented Kalman filter. Experimental laboratory tests show that the size sensor in general performs accurately, yielding sub-millimeter estimation errors. The unscented Kalman filter yields the best performance that is robust to bias and misalignment errors. The size sensor developed herein can be used for monitoring swelling due to fluid accumulation in the lower leg and a number of other biomedical applications. (paper)
International Nuclear Information System (INIS)
La Ville, J.L.; Bizard, G.; Durand, D.; Jin, G.M.; Rosato, E.
1990-06-01
Light fragment emission, when triggered by large transverse momentum protons, shows specific kinematical correlations due to recoil effects of the excited emitting source. Such effects have been observed in the azimuthal angular distributions of He particles produced in collisions induced by 94 MeV/u 16O ions on Al, Ni and Au targets. A model calculation assuming a two-stage mechanism (formation and sequential decay of a hot source) gives a good description of the whole dataset. From this successful confrontation, it is possible to estimate the size of the emitting system.
Brunner, Robert
2014-04-01
In a series of two contributions, decisive business-related aspects of the current process status to transfer research results on diffractive optical elements (DOEs) into commercial solutions are discussed. In part I, the focus was on the patent landscape. Here, in part II, market estimations concerning DOEs for selected applications are presented, comprising classical spectroscopic gratings, security features on banknotes, DOEs for high-end applications, e.g., for the semiconductor manufacturing market, and diffractive intra-ocular lenses. The derived market sizes refer to the optical elements themselves, rather than to the enabled instruments. The estimated market volumes are mainly addressed to scientifically and technologically oriented optical engineers to serve as a rough classification of the commercial dimensions of DOEs in the different market segments and do not claim to be exhaustive.
Loayza, Andrea P.; Squeo, Francisco A.
2016-01-01
Scatter-hoarding rodents can act as both predators and dispersers for many large-seeded plants because they cache seeds for future use, but occasionally forget them in sites with high survival and establishment probabilities. The most important fruit or seed trait influencing rodent foraging behavior is seed size; rodents prefer large seeds because they have higher nutritional content, but this preference can be counterbalanced by the higher costs of handling larger seeds. We designed a cafeteria experiment to assess whether fruit and seed size of Myrcianthes coquimbensis, an endangered desert shrub, influence the decision-making process during foraging by three species of scatter-hoarding rodents differing in body size: Abrothrix olivaceus, Phyllotis darwini and Octodon degus. We found that the size of fruits and seeds influenced foraging behavior in the three rodent species; the probability of a fruit being harvested and hoarded was higher for larger fruits than for smaller ones. Patterns of fruit size preference were not affected by rodent size; all species were able to hoard fruits within the entire range of sizes offered. Finally, fruit and seed size had no effect on the probability of seed predation; rodents typically ate only the fleshy pulp of the fruits offered and discarded whole, intact seeds. In conclusion, our results reveal that larger M. coquimbensis fruits have higher probabilities of being harvested, and ultimately of their seeds being hoarded and dispersed by scatter-hoarding rodents. As this plant has no other dispersers, rodents play an important role in its recruitment dynamics. PMID:27861550
The influence of body size on adult skeletal age estimation methods.
Merritt, Catherine E
2015-01-01
Accurate age estimations are essential to archaeological and forensic analyses. However, reliability for adult skeletal age estimations is poor, especially for individuals over the age of 40 years. This is the first study to show that body size influences skeletal age estimation. The İşcan et al., Lovejoy et al., Buckberry and Chamberlain, and Suchey-Brooks age methods were tested on 764 adult skeletons from the Hamann-Todd and William Bass Collections. Statures ranged from 1.30 to 1.93 m and body masses ranged from 24.0 to 99.8 kg. Transition analysis was used to evaluate the differences in the age estimations. For all four methods, the smallest individuals have the lowest ages at transition and the largest individuals have the highest ages at transition. Short and light individuals are consistently underaged, while tall and heavy individuals are consistently overaged. When femoral length and femoral head diameter are compared with the log-age model, results show the same trend as the known stature and body mass measurements. The skeletal remains of underweight individuals have fewer age markers while those of obese individuals have increased surface degeneration and osteophytic lipping. Tissue type and mechanical loading have been shown to affect bone turnover rates, and may explain the differing patterns of skeletal aging. From an archaeological perspective, the underaging of light, short individuals suggests the need to revisit the current research consensus on the young mortality rates of past populations. From a forensic perspective, understanding the influence of body size will impact efforts to identify victims of mass disasters, genocides, and homicides. © 2014 Wiley Periodicals, Inc.
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
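For a scalar parameter, the autocorrelation-based ESS that this abstract builds on is standard; a minimal sketch using initial-positive-sequence truncation is below (the paper's contribution is generalizing this idea to tree topologies, which this sketch does not attempt):

```python
import numpy as np

def effective_sample_size(x):
    """ESS = n / (1 + 2 * sum of autocorrelations), truncated at the
    first non-positive lag (a common rule of thumb for MCMC output)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x = x - x.mean()
    # Biased sample autocorrelations rho_0 .. rho_{n-1}
    acf = np.correlate(x, x, mode="full")[n - 1:] / (n * x.var())
    s = 0.0
    for rho in acf[1:]:
        if rho <= 0:  # truncate the sum at the first non-positive lag
            break
        s += rho
    return n / (1.0 + 2.0 * s)
```

An i.i.d. sample yields an ESS near its length, while a strongly autocorrelated chain (e.g. a monotone trend) yields an ESS of only a handful of effective draws, which is the deficiency the rule-of-thumb threshold of 200 is meant to catch.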
Effect of CT image size and resolution on the accuracy of rock property estimates
Bazaikin, Y.; Gurevich, B.; Iglauer, S.; Khachkova, T.; Kolyukhin, D.; Lebedev, M.; Lisitsa, V.; Reshetova, G.
2017-05-01
In order to study the effect of micro-CT scan resolution and size on the accuracy of upscaled digital rock property estimates, images of Bentheimer sandstone core samples with resolutions varying from 0.9 μm to 24 μm are used. We statistically show that the correlation length of the pore-to-matrix distribution can be reliably determined for images with a resolution finer than 9 voxels per correlation length, and the representative volume for this property is about 15³ correlation lengths. Similar resolution values for the statistically representative volume are also valid for the estimation of the total porosity, specific surface area, mean curvature, and topology of the pore space. Only the total porosity and the number of isolated pores are stably recovered, whereas the geometric and topological measures of the pore space are strongly affected by the resolution change. We also simulate fluid flow in the pore space and estimate the permeability and tortuosity of the sample. The results demonstrate that the representative volume for transport property calculations should be greater than 50 correlation lengths of the pore-to-matrix distribution. On the other hand, permeability estimation based on the statistical analysis of equivalent realizations shows only a weak influence of the resolution on the transport properties. The reason for this might be that the characteristic scale of the particular physical process affects the result more strongly than the model (image) scale.
Rico, María; Andrés-Costa, María Jesús; Picó, Yolanda
2017-02-05
Wastewater can provide a wealth of epidemiologic data on commonly consumed drugs and on health and nutritional problems, based on the biomarkers excreted into community sewage systems. One of the biggest uncertainties of these studies is the estimation of the number of inhabitants served by the treatment plants. Twelve human urine biomarkers - 5-hydroxyindoleacetic acid (5-HIAA), acesulfame, atenolol, caffeine, carbamazepine, codeine, cotinine, creatinine, hydrochlorothiazide (HCTZ), naproxen, salicylic acid (SA) and hydroxycotinine (OHCOT) - were determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to estimate population size. The results reveal that populations calculated from cotinine, 5-HIAA and caffeine are commonly in agreement with those calculated from the hydrochemical parameters. Creatinine is too unstable to be applicable. HCTZ, naproxen, codeine, OHCOT and carbamazepine under- or overestimate the population compared to the hydrochemical population estimates, but showed consistent results across weekdays. The consumption of cannabis, cocaine, heroin and bufotenine in Valencia was estimated for a week using the different population calculations. Copyright © 2016 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Öztürk, Hakan
2014-01-01
Highlights: • The criticality problem for one-speed neutrons in a homogeneous slab is investigated. • A combination of forward-backward and linear anisotropy is used. • The effect of strongly anisotropic scattering on the critical size is analyzed. - Abstract: The criticality problem for one-speed neutrons in a uniform finite slab is studied for a combination of forward and backward scattering with linearly anisotropic scattering, using the UN method based on the Chebyshev polynomials of the second kind. The effect of the linear anisotropy on the critical thickness of the slab is investigated. The critical slab thicknesses are calculated using the Marshak boundary condition for various values of the anisotropy parameters and are presented in tables. The results of this study are compatible with those obtained by other methods.
The Effects of Metal on Size Specific Dose Estimation (SSDE) in CT: A Phantom Study
Alsanea, Maram M.
Over the past several years there has been a significant increase in awareness of radiation dose from computed tomography (CT). Efforts have been made to reduce radiation dose from CT and to better quantify the dose being delivered. Unfortunately, dose metrics such as CTDIvol are not patient-specific doses. In 2011, the size-specific dose estimate (SSDE) was introduced by AAPM TG-204, which accounts for the physical size of the patient. However, the approach presented in TG-204 ignores the importance of attenuation differences in the body. In 2014, a newer methodology that accounted for tissue attenuation was introduced by AAPM TG-220, based on the concept of water-equivalent diameter, Dw. One limitation of TG-220 is that it provides no dose estimate when highly attenuating objects such as metal are present in the body. The purpose of this research is to evaluate the accuracy of size-specific dose estimates in CT in the presence of simulated metal prostheses, using a conventional PMMA CTDI phantom at different phantom diameters (body and head) and beam energies. Titanium, cobalt-chromium and stainless steel alloy rods were used in the study. Two approaches were used, as introduced by AAPM TG-204 and TG-220, utilizing the effective diameter and the Dw calculations. From these calculations, conversion factors were derived that could be applied to the measured CTDIvol to convert it to a specific patient dose, or size-specific dose estimate (SSDE). Radiation dose in tissue (f-factor = 0.94) was measured at various chamber positions in the presence of metal. Following this, an average weighted tissue dose (AWTD) was calculated in a manner similar to the weighted CTDI (CTDIw). In general, for the 32 cm body phantom, SSDE220 provided more accurate estimates of AWTD than did SSDE204. For smaller patient sizes, represented by the 16 cm head phantom, SSDE204 was a more accurate estimate of AWTD than SSDE220. However, as the
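The TG-220 water-equivalent diameter that this study builds on can be computed directly from a CT slice; a minimal sketch under the assumption of HU-calibrated pixels (air at -1000 HU, so pixels outside the patient contribute nothing to the water-equivalent area):

```python
import numpy as np

def water_equivalent_diameter_cm(slice_hu, pixel_area_cm2):
    """Dw = 2 * sqrt(Aw / pi), where Aw is the water-equivalent area,
    Aw = sum over pixels of (HU/1000 + 1) * pixel area (AAPM TG-220)."""
    water_fraction = slice_hu / 1000.0 + 1.0  # 0 for air (-1000 HU), 1 for water (0 HU)
    aw = water_fraction.sum() * pixel_area_cm2
    return 2.0 * np.sqrt(aw / np.pi)
```

SSDE is then the scanner-reported CTDIvol multiplied by the size-dependent conversion factor f(Dw) tabulated in the TG-220 report (the table itself is not reproduced here). For a uniform water cylinder, Dw recovers the physical diameter, which is a convenient sanity check.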
Directory of Open Access Journals (Sweden)
Manan Gupta
Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates
Lai, Yu-Chi; Choy, Young Bin; Haemmerich, Dieter; Vorperian, Vicken R; Webster, John G
2004-10-01
Finite element method (FEM) analysis has become a common method to analyze the lesion formation during temperature-controlled radiofrequency (RF) cardiac ablation. We present a process of FEM modeling a system including blood, myocardium, and an ablation catheter with a thermistor embedded at the tip. The simulation used a simple proportional-integral (PI) controller to control the entire process operated in temperature-controlled mode. Several factors affect the lesion size such as target temperature, blood flow rate, and application time. We simulated the time response of RF ablation at different locations by using different target temperatures. The applied sites were divided into two groups each with a different convective heat transfer coefficient. The first group was high-flow such as the atrioventricular (AV) node and the atrial aspect of the AV annulus, and the other was low-flow such as beneath the valve or inside the coronary sinus. Results showed the change of lesion depth and lesion width with time, under different conditions. We collected data for all conditions and used it to create a database. We implemented a user-interface, the lesion size estimator, where the user enters set temperature and location. Based on the database, the software estimated lesion dimensions during different applied durations. This software could be used as a first-step predictor to help the electrophysiologist choose treatment parameters.
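The temperature-controlled mode described here is driven by a simple PI loop; a minimal sketch of such a controller acting on a crude first-order thermal plant (all constants below are illustrative stand-ins, not taken from the paper's FEM model):

```python
def simulate_pi_temperature(setpoint_c, kp, ki, steps=2000, dt=0.01):
    """PI control of a first-order thermal plant starting at body temperature."""
    temp, integral = 37.0, 0.0
    tau, gain = 2.0, 1.0  # plant time constant (s) and power-to-temperature gain
    for _ in range(steps):
        err = setpoint_c - temp
        integral += err * dt
        power = max(0.0, kp * err + ki * integral)  # RF power cannot be negative
        temp += dt * (gain * power - (temp - 37.0)) / tau
    return temp
```

With integral action the steady-state error is driven to zero, so the tip temperature settles at the electrophysiologist's set temperature; the lesion dimensions then depend on how long that temperature is held and on local convective cooling, which is what the database in the paper captures.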
An evaluation of portion size estimation aids: Consumer perspectives on their effectiveness.
Faulkner, Gemma P; Livingstone, M Barbara E; Pourshahidi, L Kirsty; Spence, Michelle; Dean, Moira; O'Brien, Sinead; Gibney, Eileen R; Wallace, Julie M W; McCaffrey, Tracy A; Kerr, Maeve A
2017-07-01
This qualitative study aimed to investigate consumer opinions on the usefulness of portion size estimation aids (PSEA); consumer preferences in terms of format and context for use; and the level of detail of guidance considered necessary for the effective application of PSEA. Six focus groups (three to eight participants per group) were conducted to elicit views on PSEA. The discussions were recorded, transcribed verbatim and analysed by two independent researchers using a template approach. The focus groups were conducted in 2013 by an experienced moderator in various sites across the island of Ireland (three in the Republic of Ireland and three in Northern Ireland) including local leisure, community and resource centres; the home environment; and a university meeting room. General population, males (n = 17) and females (n = 15) aged 18-64 years old. Participants were recruited from both urban and rural locations representing a range of socio-economic groups. The majority of participants deemed the coloured portion pots and disposable plastic cup (household measures) to be useful particularly for the estimation of amorphous cereal products (e.g. breakfast cereals). Preferences were evident for "visual" PSEA (reference objects, household measures and food packaging) rather than 'quantities and measures' such as weighing in grams or ounces. Participants stated that PS education should be concise, consistent, from a reputable source, initiated at school age and communicated innovatively e.g. mobile app or TV advertisement. Guidance in relation to gender, age and activity level was favoured over a "one size fits all" approach. This study identified consumer preferences and acceptance of "visual" PSEA such as portion pots/cups to estimate appropriate PS of amorphous grain foods such as breakfast cereals, pasta and rice. Concise information from a reputable source in relation to gender, age and activity level should accompany PSEA. Copyright © 2017 Elsevier Ltd
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Satoshi Ezoe
Full Text Available BACKGROUND: Men who have sex with men (MSM are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. METHODS AND FINDINGS: An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. CONCLUSIONS: The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
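The scale-up arithmetic used in this study is straightforward; a minimal sketch of the basic network scale-up estimator, under its core assumption that acquaintance counts are proportional to subpopulation sizes (function and variable names are illustrative):

```python
def scale_up_estimate(known_counts, known_sizes, hidden_counts, total_pop):
    """Basic network scale-up (NSUM) estimator.

    known_counts:  per-respondent lists of acquaintances in groups of known size
    known_sizes:   the known sizes of those groups (e.g. firefighters, police)
    hidden_counts: per-respondent counts of acquaintances in the hidden group
    """
    # Personal network size per respondent: c_j = T * sum_i(m_ij) / sum_i(S_i)
    networks = [total_pop * sum(m) / sum(known_sizes) for m in known_counts]
    # Hidden-population size: H = T * sum_j(h_j) / sum_j(c_j)
    hidden_size = total_pop * sum(hidden_counts) / sum(networks)
    mean_network = sum(networks) / len(networks)
    return hidden_size, mean_network

# Two respondents who each know 3 and 6 people in groups of 10,000 and
# 20,000 (T = 1,000,000) imply personal networks of 300; if each also
# knows 3 members of the hidden group, its estimated size is 10,000.
```

The adjustment for transmission error mentioned in the abstract (MSM acquaintances not being recognized as such) is applied on top of this raw estimate and is not modeled here.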
Lee, Christina D; Chae, Junghoon; Schap, TusaRebecca E; Kerr, Deborah A; Delp, Edward J; Ebert, David S; Boushey, Carol J
2012-03-01
Diet is a critical element of diabetes self-management. An emerging area of research is the use of images for dietary records using mobile telephones with embedded cameras. These tools are being designed to reduce user burden and to improve the accuracy of portion-size estimation through automation. The objectives of this study were to (1) assess the error of automatically determined portion weights compared to known portion weights of foods and (2) compare the error of the automated method with that of human estimation. Adolescents (n = 15) captured images of their eating occasions over a 24 h period. All foods and beverages served were weighed. Adolescents self-reported portion sizes for one meal. Image analysis was used to estimate portion weights. Data analysis compared known weights, automated weights, and self-reported portions. For the 19 foods, the mean ratio of automated weight estimate to known weight ranged from 0.89 to 4.61, and 9 foods were within 0.80 to 1.20. The largest error was for lettuce and the most accurate was strawberry jam. The children were fairly accurate with portion estimates for two foods (sausage links, toast) using one type of estimation aid and two foods (sausage links, scrambled eggs) using another aid. The automated method was fairly accurate for two foods (sausage links, jam); however, the 95% confidence intervals for the automated estimates were consistently narrower than those of the human estimates. The ability of humans to estimate portion sizes of foods remains a problem and a perceived burden. Errors in automated portion-size estimation can be systematically addressed while minimizing the burden on people. Future applications that take over the burden of these processes may translate to better diabetes self-management. © 2012 Diabetes Technology Society.
Gregory, T Ryan; Nathwani, Paula; Bonnett, Tiffany R; Huber, Dezene P W
2013-09-01
A study was undertaken to evaluate both a pre-existing method and a newly proposed approach for the estimation of nuclear genome sizes in arthropods. First, concerns regarding the reliability of the well-established method of flow cytometry, relating to impacts of rearing conditions on genome size estimates, were examined. Contrary to previous reports, a more carefully controlled test found negligible environmental effects on genome size estimates in the fly Drosophila melanogaster. Second, a more recently touted method based on quantitative real-time PCR (qPCR) was examined in terms of ease of use, efficiency, and (most importantly) accuracy using four test species: the flies Drosophila melanogaster and Musca domestica and the beetles Tribolium castaneum and Dendroctonus ponderosae. The results of this analysis demonstrated that qPCR tends to produce substantially different genome size estimates from other established techniques while also being far less efficient than existing methods.
Sobel Leonard, Ashley; Weissman, Daniel B; Greenbaum, Benjamin; Ghedin, Elodie; Koelle, Katia
2017-07-15
The bottleneck governing infectious disease transmission describes the size of the pathogen population transferred from the donor to the recipient host. Accurate quantification of the bottleneck size is particularly important for rapidly evolving pathogens such as influenza virus, as narrow bottlenecks reduce the amount of transferred viral genetic diversity and, thus, may decrease the rate of viral adaptation. Previous studies have estimated bottleneck sizes governing viral transmission by using statistical analyses of variants identified in pathogen sequencing data. These analyses, however, did not account for variant calling thresholds and stochastic viral replication dynamics within recipient hosts. Because these factors can skew bottleneck size estimates, we introduce a new method for inferring bottleneck sizes that accounts for these factors. Through the use of a simulated data set, we first show that our method, based on beta-binomial sampling, accurately recovers transmission bottleneck sizes, whereas other methods fail to do so. We then apply our method to a data set of influenza A virus (IAV) infections for which viral deep-sequencing data from transmission pairs are available. We find that the IAV transmission bottleneck size estimates in this study are highly variable across transmission pairs, while the mean bottleneck size of 196 virions is consistent with a previous estimate for this data set. Furthermore, regression analysis shows a positive association between estimated bottleneck size and donor infection severity, as measured by temperature. These results support findings from experimental transmission studies showing that bottleneck sizes across transmission events can be variable and influenced in part by epidemiological factors. IMPORTANCE The transmission bottleneck size describes the size of the pathogen population transferred from the donor to the recipient host and may affect the rate of pathogen adaptation within host populations. Recent
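A minimal sketch of beta-binomial bottleneck inference of the kind described above: under the simplifying assumption that a variant at donor frequency p founds the recipient infection with frequency distributed as Beta(N_b·p, N_b·(1−p)), the read counts in the recipient are beta-binomially distributed. The published method additionally models variant-calling thresholds and within-host replication, which are omitted here, and all numbers below are invented:

```python
import math

def log_betabinom(k, n, a, b):
    """log P(k successes in n trials) under a beta-binomial(a, b)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b)
            - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def bottleneck_loglik(nb, variants):
    """Log-likelihood of bottleneck size nb, approximating the founding
    frequency of each variant as Beta(nb*p, nb*(1-p)) around donor freq p."""
    return sum(log_betabinom(k, n, nb * p, nb * (1 - p))
               for p, k, n in variants)

# Invented variants: (donor frequency, variant reads, total reads).
variants = [(0.10, 8, 100), (0.30, 25, 90), (0.05, 2, 120)]
best_nb = max(range(1, 201), key=lambda nb: bottleneck_loglik(nb, variants))
```

A grid search over candidate bottleneck sizes then yields the maximum-likelihood estimate.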
Auger, J.-C.; Fernandes, G. E.; Aptowicz, K. B.; Pan, Y.-L.; Chang, R. K.
2010-04-01
The relation between the surface roughness of aerosol particles and the appearance of island-like features in their angle-resolved elastic-light scattering patterns is investigated both experimentally and with numerical simulation. Elastic scattering patterns of polystyrene spheres, Bacillus subtilis spores and cells, and NaCl crystals are measured and statistical properties of the island-like intensity features in their patterns are presented. The island-like features for each class of particle are found to be similar; however, principal-component analysis applied to extracted features is able to differentiate between some of the particle classes. Numerically calculated scattering patterns of Chebyshev particles and aggregates of spheres are analyzed and show qualitative agreement with experimental results.
Risk and size estimation of debris flow caused by storm rainfall in mountain regions
Institute of Scientific and Technical Information of China (English)
CHENG Genwei
2003-01-01
Debris flow is a common disaster in mountain regions. The valley slope, storm rainfall, and the sand-rock materials amassed in a watershed influence the type of debris flow. The bursting of a debris flow is not a purely random event: field investigations show a periodicity in its occurrence, although no direct evidence has been found yet. A risk definition of debris flow is proposed here based upon the accumulation and starting conditions of loose material in the channel. According to this definition, the risk of debris flow is quasi-periodic. A formula for risk estimation is derived. Analysis of the relevant factors reveals the relationship between the frequency and size of debris flows: for a given debris flow creek, the longer the interval between two debris flows, the larger the subsequent event will be.
Use of primary beam filtration in estimating mass attenuation coefficients by Compton scattering
International Nuclear Information System (INIS)
O'Connor, B.H.; Chang, W.J.
1985-01-01
Mass attenuation coefficients (MACs) are frequently estimated over a range of wavelengths in x-ray spectrometry from the intensity of the Compton peak I_C associated with a prominent tube line. The MAC μ_λ at wavelength λ is estimated from the MAC at the Compton wavelength λ_C with the approximations μ_λ ∝ μ_C and μ_C ∝ 1/I_C. Systematic errors may introduce absorption edge bias (AEB) effects into the results, caused by sample components with absorption edges between λ_C and λ. A procedure is described which eliminates AEB effects by measuring I_C using emission radiation from a primary beam filter.
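Written out, the estimation chain in the abstract combines the two proportionalities into a single working estimate (K is an unspecified calibration constant introduced here only for illustration; it would be fixed with reference standards):

```latex
\mu_\lambda \;\propto\; \mu_{\lambda_C} \;\propto\; \frac{1}{I_C}
\quad\Longrightarrow\quad
\mu_\lambda \approx \frac{K}{I_C}.
```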
Nelson, M; Atkinson, M; Darbyshire, S
1996-07-01
The aim of the present study was to determine the errors in the conceptualization of portion size using photographs. Male and female volunteers aged 18-90 years (n = 136) from a wide variety of social and occupational backgrounds completed 602 assessments of portion size in relation to food photographs. Subjects served themselves between four and six foods at one meal (breakfast, lunch or dinner). Portion sizes were weighed by the investigators at the time of serving, and any waste was weighed at the end of the meal. Within 5 min of the end of the meal, subjects were shown photographs depicting each of the foods just consumed. For each food there were eight photographs showing portion sizes in equal increments from the 5th to the 95th centile of the distribution of portion weights observed in The Dietary and Nutritional Survey of British Adults (Gregory et al. 1990). Subjects were asked to indicate on a visual analogue scale the size of the portion consumed in relation to the eight photographs. The nutrient contents of meals were estimated from food composition tables. There were large variations in the estimation of portion sizes from photographs. Butter and margarine portion sizes tended to be substantially overestimated. In general, small portion sizes tended to be overestimated, and large portion sizes underestimated. Older subjects overestimated portion size more often than younger subjects. Excluding butter and margarine, the nutrient content of meals based on estimated portion sizes was on average within ±7% of the nutrient content based on the amounts consumed, except for vitamin C (21% overestimate), and for subjects over 65 years (15-20% overestimate for energy and fat). In subjects whose BMI was less than 25 kg/m2, the energy and fat contents of meals calculated from food composition tables and based on estimated portion size (excluding butter and margarine) were 5-10% greater than the nutrient content calculated using actual portion size, but for those
Asian elephants in China: estimating population size and evaluating habitat suitability.
Directory of Open Access Journals (Sweden)
Li Zhang
Full Text Available We monitored the last remaining Asian elephant populations in China over the past decade. Using DNA tools and repeat genotyping, we estimated the population sizes from 654 dung samples collected from various areas. Combined with morphological individual identifications from over 6,300 elephant photographs taken in the wild, we estimated that the total Asian elephant population size in China is between 221 and 245. Population genetic structure and diversity were examined using a 556-bp fragment of mitochondrial DNA, and 24 unique haplotypes were detected from DNA analysis of 178 individuals. A phylogenetic analysis revealed two highly divergent clades of Asian elephants, α and β, present in Chinese populations. Four populations (Mengla, Shangyong, Mengyang, and Pu'Er) carried mtDNA from the α clade, and only one population (Nangunhe) carried mtDNA belonging to the β clade. Moreover, high genetic divergence was observed between the Nangunhe population and the other four populations; however, genetic diversity among the five populations was low, possibly due to limited gene flow because of habitat fragmentation. The expansion of rubber plantations, crop cultivation, and villages along rivers and roads had caused extensive degradation of natural forest in these areas. This had resulted in the loss and fragmentation of elephant habitats and had formed artificial barriers that inhibited elephant migration. Using Geographic Information System, Global Positioning System, and Remote Sensing technology, we found that the area occupied by rubber plantations, tea farms, and urban settlements had dramatically increased over the past 40 years, resulting in the loss and fragmentation of elephant habitats and forming artificial barriers that inhibit elephant migration. The restoration of ecological corridors to facilitate gene exchange among isolated elephant populations and the establishment of cross-boundary protected areas between China and Laos to secure
Doppler Spectrum-Based NRCS Estimation Method for Low-Scattering Areas in Ocean SAR Images
Directory of Open Access Journals (Sweden)
Hui Meng
2017-02-01
Full Text Available The image intensities of low-backscattering areas in synthetic aperture radar (SAR) images are often seriously contaminated by the system noise floor and by the azimuthal ambiguity signal from adjacent high-backscattering areas. Hence, the image intensity of low-backscattering areas does not correctly reflect the backscattering intensity, which causes confusion in subsequent image processing or interpretation. In this paper, a method is proposed to estimate the normalized radar cross-section (NRCS) of low-backscattering areas by utilizing the differences between noise, azimuthal ambiguity, and signal in the Doppler frequency domain of single-look SAR images; the aim is to eliminate the effect of system noise and azimuthal ambiguity. Analysis shows that, for a spaceborne SAR with a noise equivalent sigma zero (NESZ) of −25 dB and a single-look pixel of 8 m × 5 m, the NRCS-estimation precision of this method can reach −38 dB at a resolution of 96 m × 100 m. Three examples are given to validate the advantages of this method in estimating low NRCS and filtering the azimuthal ambiguity.
A model of distributed phase aberration for deblurring phase estimated from scattering.
Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C
2010-01-01
Correction of aberration in ultrasound imaging uses the response of a point reflector or its equivalent to characterize the aberration. Because a point reflector is usually unavailable, its equivalent is obtained using statistical methods, such as processing reflections from multiple focal regions in a random medium. However, the validity of methods that use reflections from multiple points is limited to isoplanatic patches for which the aberration is essentially the same. In this study, aberration is modeled by an offset phase screen to relax the isoplanatic restriction. Methods are developed to determine the depth and phase of the screen and to use the model for compensation of aberration as the beam is steered. Use of the model to enhance the performance of the noted statistical estimation procedure is also described. Experimental results obtained with tissue-mimicking phantoms that implement different models and produce different amounts of aberration are presented to show the efficacy of these methods. The improvement in B-scan resolution realized with the model is illustrated. The results show that the isoplanatic patch assumption for estimation of aberration can be relaxed and that propagation-path characteristics and aberration estimation are closely related.
Directory of Open Access Journals (Sweden)
Xianglin Meng
2018-03-01
Full Text Available The normal vector estimation of large-scale scattered point clouds (LSSPC) plays an important role in point-based shape editing. However, existing normal vector estimation for LSSPC cannot meet the challenge posed by the sharp increase in point cloud size, mainly because of its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for normal vector estimation on LSSPC. We divide the point sets into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. After calculating the normal vectors of these interpolation nodes, a bi-linear interpolation of the normal vectors of the points in each cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors and calculate normal vectors for the interpolation nodes, which are usually far fewer than the points in the cloud. The experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, and the average deviation is less than 0.01 mm.
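The bi-linear interpolation at the heart of the method can be sketched as follows. This is the textbook formula on a unit cell, not the authors' implementation; for normal vectors it would be applied per component, followed by renormalization to unit length:

```python
def bilinear(f00, f10, f01, f11, tx, ty):
    """Bi-linear interpolation on a unit cell: corner values f00..f11,
    fractional coordinates tx, ty in [0, 1]."""
    a = f00 + (f10 - f00) * tx   # interpolate along x at y = 0
    b = f01 + (f11 - f01) * tx   # interpolate along x at y = 1
    return a + (b - a) * ty      # interpolate along y
```

Because each query point only touches its four surrounding nodes, the per-point cost is constant once the node normals are known, which is where the reported speed-up comes from.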
Hindman, N; Grande, P; Harrell, F E; Anderson, C; Harrison, D; Ideker, R E; Selvester, R H; Wagner, G S
1986-07-01
The extent of initial acute myocardial infarction (AMI) and subsequent patient prognosis were studied using 2 independent indicators of AMI size. Two inexpensive, readily available techniques, the complete Selvester QRS score from the standard 12-lead electrocardiogram and the peak value of the isoenzyme MB of creatine kinase (CK-MB), were evaluated in 125 patients with initial AMI. The overall correlation between peak CK-MB and QRS score was fair (0.57), with marked difference according to anterior (0.72) or inferior (0.35) location. The prognostic capabilities of each measurement varied. Peak CK-MB provided significant information concerning hospital morbidity or early mortality (within 30 days) for both anterior (χ² = 9.83) and inferior (χ² = 7.68) AMI locations; however, the QRS score was significant only for anterior AMI (χ² = 9.50). For total 24-month mortality, the QRS score alone provided the most information (χ² = 10.0, p = 0.0016), which was not improved with the addition of CK-MB (χ² = 0.07, p = 0.79). This study shows a good relation between these 2 independent estimates of AMI size for patients with anterior AMI location. Both QRS and CK-MB results are significantly related to early morbidity and mortality; however, only the QRS score is related to total 24-month prognosis.
ESTIMATING THE SIZE OF LATE VENEER IMPACTORS FROM IMPACT-INDUCED MIXING ON MERCURY
International Nuclear Information System (INIS)
Rivera-Valentin, E. G.; Barr, A. C.
2014-01-01
Late accretion of a "veneer" of compositionally diverse planetesimals may introduce chemical heterogeneity in the mantles of the terrestrial planets. The size of the late veneer objects is an important control on the angular momenta, eccentricities, and inclinations of the terrestrial planets, but current estimates range from meter-scale bodies to objects with diameters of thousands of kilometers. We use a three-dimensional global Monte Carlo model of impact cratering, excavation, and ejecta blanket formation to show that evidence of mantle heterogeneity can be preserved within ejecta blankets of mantle-exhuming impacts on terrestrial planets. Compositionally distinct provinces implanted at the time of the late veneer are most likely to be preserved in bodies whose subsequent geodynamical evolution is limited. Mercury may have avoided intensive mixing by solid-state convection during much of its history. Its subsequent bombardment may have then excavated evidence of primordial mantle heterogeneity introduced by the late veneer. Simple geometric arguments can predict the amount of mantle material in the ejecta blanket of mantle-exhuming impacts, and deviations in composition relative to geometric predictions can constrain the length-scale of chemical heterogeneities in the subsurface. A marked change in the relationship between mantle and ejecta composition occurs when chemically distinct provinces are ∼250 km in diameter; thus, evidence of bombardment by thousand-kilometer-sized objects should be readily apparent from the variation in compositions of ejecta blankets in Mercury's ancient cratered terrains.
Estimates of the Size Distribution of Meteoric Smoke Particles From Rocket-Borne Impact Probes
Antonsen, Tarjei; Havnes, Ove; Mann, Ingrid
2017-11-01
Ice particles populating noctilucent clouds and being responsible for polar mesospheric summer echoes exist around the mesopause in the altitude range from 80 to 90 km during polar summer. The particles are observed when temperatures around the mesopause reach a minimum, and it is presumed that they consist of water ice with inclusions of smaller mesospheric smoke particles (MSPs). This work provides estimates of the mean size distribution of MSPs through analysis of collision fragments of the ice particles populating the mesospheric dust layers. We have analyzed data from two triplets of mechanically identical rocket probes, MUltiple Dust Detector (MUDD), which are Faraday bucket detectors with impact grids that partly fragment incoming ice particles. The MUDD probes were launched from Andøya Space Center (69°17'N, 16°1'E) on two payloads during the MAXIDUSTY campaign on 30 June and 8 July 2016, respectively. Our analysis shows that it is unlikely that ice particles produce significant current to the detector, and that MSPs dominate the recorded current. The size distributions obtained from these currents, which reflect the MSP sizes, are described by inverse power laws with exponents of k ∈ [3.3 ± 0.7, 3.7 ± 0.5] and k ∈ [3.6 ± 0.8, 4.4 ± 0.3] for the respective flights. We derived two k values for each flight, depending on whether the charging probability is proportional to the area or the volume of the fragments. We also confirm that MSPs are probably abundant inside mesospheric ice particles larger than a few nanometers, and the volume filling factor can be a few percent for reasonable assumptions of particle properties.
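As a worked illustration of what an inverse power-law exponent implies, the mean radius of a distribution n(r) ∝ r^(−k) on a finite size range follows directly from its first two moments. This is a generic calculation, not the MUDD analysis, and the size range in the comment is invented:

```python
def powerlaw_mean_radius(k, r_min, r_max):
    """Mean radius of n(r) ∝ r**(-k) on [r_min, r_max] (valid for k != 1, 2)."""
    def integral(p):
        # integral of r**p over [r_min, r_max]
        return (r_max ** (p + 1) - r_min ** (p + 1)) / (p + 1)
    return integral(1 - k) / integral(-k)

# e.g. k = 3.6 on a hypothetical 0.5-3 nm range gives a mean radius
# weighted strongly toward the small end of the interval.
```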
Estimating drizzle drop size and precipitation rate using two-colour lidar measurements
Directory of Open Access Journals (Sweden)
C. D. Westbrook
2010-06-01
Full Text Available A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D_{0}. This is a strong effect: for D_{0} = 200 μm, a colour ratio of ≈6 dB is predicted. Once D_{0} is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution.
The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High resolution (32 s, 36 m) profiles of D_{0}, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R) and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here). Radar reflectivities are also computed from the lidar data, and compared to independent measurements from a colocated cloud radar, offering independent validation of the derived drop size distributions.
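Once a calibration curve of colour ratio versus D_{0} has been computed from scattering theory, the retrieval reduces to inverting a monotonic function. The sketch below uses a purely hypothetical linear curve, chosen only so that it reproduces the quoted ≈6 dB at D_{0} = 200 μm; a real retrieval would tabulate the curve from scattering calculations at the two wavelengths:

```python
def invert_colour_ratio(cr_meas_db, cr_curve, d_lo=50.0, d_hi=1000.0):
    """Recover D0 (in μm) from a measured colour ratio by bisecting a
    monotonically increasing calibration curve cr_curve(D0) -> dB."""
    for _ in range(60):
        mid = 0.5 * (d_lo + d_hi)
        if cr_curve(mid) < cr_meas_db:
            d_lo = mid
        else:
            d_hi = mid
    return 0.5 * (d_lo + d_hi)

def cr_hypothetical(d0_um):
    # Placeholder curve pinned to the quoted ~6 dB at 200 μm.
    return 6.0 * (d0_um / 200.0)

d0 = invert_colour_ratio(6.0, cr_hypothetical)   # ≈ 200 μm
```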
Fischer, Jesse R.; Quist, Michael C.
2014-01-01
All freshwater fish sampling methods are biased toward particular species, sizes, and sexes and are further influenced by season, habitat, and fish behavior changes over time. However, little is known about gear-specific biases for many common fish species because few multiple-gear comparison studies exist that have incorporated seasonal dynamics. We sampled six lakes and impoundments representing a diversity of trophic and physical conditions in Iowa, USA, using multiple gear types (i.e., standard modified fyke net, mini-modified fyke net, sinking experimental gill net, bag seine, benthic trawl, boat-mounted electrofisher used diurnally and nocturnally) to determine the influence of sampling methodology and season on fisheries assessments. Specifically, we describe the influence of season on catch per unit effort, proportional size distribution, and the number of samples required to obtain 125 stock-length individuals for 12 species of recreational and ecological importance. Mean catch per unit effort generally peaked in the spring and fall as a result of increased sampling effectiveness in shallow areas and seasonal changes in habitat use (e.g., movement offshore during summer). Mean proportional size distribution decreased from spring to fall for white bass Morone chrysops, largemouth bass Micropterus salmoides, bluegill Lepomis macrochirus, and black crappie Pomoxis nigromaculatus, suggesting selectivity for large and presumably sexually mature individuals in the spring and summer. Overall, the mean number of samples required to sample 125 stock-length individuals was minimized in the fall with sinking experimental gill nets, a boat-mounted electrofisher used at night, and standard modified nets for 11 of the 12 species evaluated. Our results provide fisheries scientists with relative comparisons between several recommended standard sampling methods and illustrate the effects of seasonal variation on estimates of population indices that will be critical to
Real-time, ray casting-based scatter dose estimation for c-arm x-ray system.
Alnewaini, Zaid; Langer, Eric; Schaber, Philipp; David, Matthias; Kretz, Dominik; Steil, Volker; Hesser, Jürgen
2017-03-01
usually detected was mainly from primary scattering (photons), whereas percentage differences of 2.8-20% are found on the side opposite to the x-ray source, where the lowest doses were detected. The dose calculation time of our approach was 0.85 seconds. The proposed approach yields a fast scatter dose estimate: the Monte Carlo simulation needs to be run only once per x-ray tube angulation to generate phase space files (PSFs), which the ray casting step then uses to calculate the dose from only those photons that hit a movable, elliptical-cylinder-shaped phantom, and to record the hit positions for visualizing scatter dose propagation on the phantom surface. With dose calculation times of less than one second, the approach saves substantial time compared to running a Monte Carlo simulation for every configuration. Larger deviations occur only in regions with very low doses, whereas high precision is achieved in high-dose regions. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Consideration of the usefulness of a size-specific dose estimate in pediatric CT examination.
Tsujiguchi, Takakiyo; Obara, Hideki; Ono, Shuichi; Saito, Yoko; Kashiwakura, Ikuo
2018-04-05
Computed tomography (CT) has recently been utilized in various medical settings, and technological advances have resulted in its widespread use. However, medical radiation exposure associated with CT scans accounts for the largest share of examinations using radiation; thus, it is important to understand the organ dose and effective dose in detail. The CT dose index and dose-length product are used to evaluate the organ dose. However, evaluations using these indicators fail to consider the age and body type of patients. In this study, we evaluated the effective dose based on the CT examination data of 753 patients examined at our hospital using the size-specific dose estimate (SSDE) method, which can calculate the exposure dose with consideration of the physique of a patient. The results showed a large correlation between the SSDE conversion factor and physique, with a larger exposure dose in patients with a small physique when a single scan is considered. Especially for children, the SSDE conversion factor was found to be 2 or more. In addition, the patient exposed to the largest dose in this study was a 10-year-old, who received 40.4 mSv (five series/examination). In the future, for estimating exposure using the SSDE method and in cohort studies, the diagnostic reference level of SSDE should be determined and a low-exposure imaging protocol should be developed to predict the risk of CT exposure and to maintain the quality of diagnosis with better radiation protection of patients.
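The SSDE conversion described above is a simple scaling of CTDIvol by a size-dependent factor. The sketch below uses the exponential fit to the AAPM Report 204 conversion-factor table for the 32 cm body phantom; the coefficients are quoted from that report but should be treated as an assumption to verify against the report itself:

```python
import math

def effective_diameter(ap_cm, lat_cm):
    """Effective diameter: geometric mean of the AP and lateral dimensions."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol_32cm, eff_diam_cm):
    """SSDE = f * CTDIvol, with f from the exponential fit to the
    AAPM Report 204 32 cm body-phantom table (verify coefficients
    against the report before any clinical use)."""
    f = 3.704369 * math.exp(-0.03671937 * eff_diam_cm)
    return f * ctdi_vol_32cm
```

For a small pediatric patient (effective diameter around 15 cm) the conversion factor exceeds 2, consistent with the observation in the abstract.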
Directory of Open Access Journals (Sweden)
Csongor I. Gedeon
2017-08-01
Full Text Available Methods to estimate the density of soil-dwelling arthropods efficiently, accurately, and continuously are critical for investigating soil biological activity and evaluating soil management practices. Soil-dwelling arthropods are currently monitored manually. This method is invasive and time- and labor-intensive. Here we describe an infrared opto-electronic sensor for detection of soil microarthropods in the size range of 0.4–10 mm. The sensor is built into a novel microarthropod trap designed for field conditions. It allows automated, on-line, in situ detection and body length estimation of soil microarthropods. In the opto-electronic sensor the light source is an infrared LED. Two plano-convex optical lenses are placed along the virtual optical axis. One lens on the receiver side is placed between the observation space and the sensor, at 0.5–1 times its focal length from the sensor; another lens on the emitter side is placed between the observation space and the light source in the same way. This paper describes the setup and operating mechanism of the sensor and the control unit, and through basic tests it demonstrates its potential in automated detection of soil microarthropods. The sensor may be used for monitoring activities, especially for remote observation in soil and insect ecology or pest control.
Effects of sample size on estimation of rainfall extremes at high temperatures
Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik
2017-09-01
High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
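A minimal sketch of the parametric alternative the authors describe: fitting a GPD to threshold excesses by matching the first two L-moments and reading off quantiles. This is a generic L-moment fit in the Coles sign convention, not the authors' code:

```python
import math

def sample_lmoments(x):
    """First two sample L-moments (unbiased estimators)."""
    xs = sorted(x)
    n = len(xs)
    b0 = sum(xs) / n
    b1 = sum(i * v for i, v in enumerate(xs)) / (n * (n - 1))
    return b0, 2 * b1 - b0

def gpd_fit_lmom(excesses):
    """Match L-moments of a zero-threshold GPD: returns (shape xi, scale)."""
    l1, l2 = sample_lmoments(excesses)
    xi = 2.0 - l1 / l2          # Coles sign convention (xi > 0: heavy tail)
    return xi, l1 * (1.0 - xi)

def gpd_quantile(p, xi, sigma):
    """Quantile of the fitted GPD at non-exceedance probability p."""
    if abs(xi) < 1e-9:
        return -sigma * math.log(1.0 - p)
    return sigma / xi * ((1.0 - p) ** (-xi) - 1.0)
```

Unlike plotting-position estimates, whose largest representable return period is capped by the sample size, the fitted quantile function can be evaluated at any probability.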
Effects of sample size on estimation of rainfall extremes at high temperatures
Directory of Open Access Journals (Sweden)
B. Boessenkool
2017-09-01
Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
Inverse estimation of the particle size distribution using the Fruit Fly Optimization Algorithm
International Nuclear Information System (INIS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2015-01-01
The Fruit Fly Optimization Algorithm (FOA) is applied to retrieve the particle size distribution (PSD) for the first time. The direct problems are solved by the modified Anomalous Diffraction Approximation (ADA) and the Lambert–Beer Law. Firstly, three commonly used monomodal PSDs, i.e. the Rosin–Rammler (R–R) distribution, the normal (N–N) distribution and the logarithmic normal (L–N) distribution, and the bimodal Rosin–Rammler distribution function are estimated in the dependent model. All the results show that the FOA can be used as an effective technique to estimate the PSDs under the dependent model. Then, an optimal wavelength selection technique is proposed to improve the retrieval results of bimodal PSD. Finally, combined with two general functions, i.e. the Johnson's S_B (J-S_B) function and the modified beta (M-β) function, the FOA is employed to recover actual measurement aerosol PSDs over Beijing and Hangzhou obtained from the aerosol robotic network (AERONET). All the numerical simulations and experiment results demonstrate that the FOA can be used to retrieve actual measurement PSDs, and more reliable and accurate results can be obtained, if the J-S_B function is employed.
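A minimal sketch of the retrieval idea, under assumptions not taken from the paper (a simplified fly-scatter search directly in parameter space rather than the canonical distance/smell formulation, and noiseless synthetic data): recover Rosin–Rammler parameters by minimizing a least-squares objective.

```python
import math
import random

def foa_minimize(obj, dim, iters=200, pop=30, seed=0):
    """Simplified Fruit Fly Optimization: each generation, `pop` flies
    scatter randomly around the swarm location; the best-smelling
    (lowest-objective) fly relocates the swarm."""
    rng = random.Random(seed)
    swarm = [rng.uniform(0.1, 5.0) for _ in range(dim)]
    best, best_val = list(swarm), obj(swarm)
    for _ in range(iters):
        for _ in range(pop):
            fly = [x + rng.uniform(-0.5, 0.5) for x in swarm]
            v = obj(fly)
            if v < best_val:
                best, best_val = fly, v
        swarm = list(best)
    return best, best_val

def rr_cdf(x, x0, k):
    """Rosin-Rammler cumulative distribution."""
    return 1.0 - math.exp(-((x / x0) ** k))

# Synthetic inversion: recover (x0=2, k=3) from noiseless cumulative data.
xs = [0.5 * i for i in range(1, 12)]
data = [rr_cdf(x, 2.0, 3.0) for x in xs]
obj = lambda p: sum((rr_cdf(x, max(p[0], 1e-6), max(p[1], 1e-6)) - d) ** 2
                    for x, d in zip(xs, data))
params, err = foa_minimize(obj, 2)
```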
International Nuclear Information System (INIS)
Di Cristo, M; Lin, C-L; Morassi, A; Rosset, E; Vessella, S; Wang, J-N
2013-01-01
We prove the upper and lower estimates of the area of an unknown elastic inclusion in a thin plate by one boundary measurement. The plate is made of non-homogeneous linearly elastic material belonging to a general class of anisotropy and the domain of the inclusion is a measurable subset of the plate. The size estimates are expressed in terms of the work exerted by a couple field applied at the boundary and of the induced transversal displacement and its normal derivative taken at the boundary of the plate. The main new mathematical tool is a doubling inequality for solutions to fourth-order elliptic equations whose principal part P(x, D) is the product of two second-order elliptic operators P1(x, D), P2(x, D) such that P1(0, D) = P2(0, D). The proof of the doubling inequality is based on the Carleman method, a sharp three-spheres inequality and a bootstrapping argument. (paper)
A reduced estimate of the number of kilometre-sized near-Earth asteroids.
Rabinowitz, D; Helin, E; Lawrence, K; Pravdo, S
2000-01-13
Near-Earth asteroids are small (diameters < 10 km) bodies whose orbits approach that of the Earth (they come within 1.3 AU of the Sun). Most have a chance of approximately 0.5% of colliding with the Earth in the next million years. The total number of such bodies with diameters > 1 km has been estimated to be in the range 1,000-2,000, which translates to an approximately 1% chance of a catastrophic collision with the Earth in the next millennium. These numbers are, however, poorly constrained because of the limitations of previous searches using photographic plates. (One kilometre is below the size of a body whose impact on the Earth would produce global effects.) Here we report an analysis of our survey for near-Earth asteroids that uses improved detection technologies. We find that the total number of asteroids with diameters > 1 km is about half the earlier estimates. At the current rate of discovery of near-Earth asteroids, 90% will probably have been detected within the next 20 years.
Directory of Open Access Journals (Sweden)
Jana Menegassi del Favero
2015-06-01
Full Text Available Abstract Studies of ichthyoplankton retention by nets of different mesh sizes are important because they help in choosing a sampler when planning collections and in establishing correction factors. These factors make it possible to compare studies performed with nets of different mesh sizes. In most studies of mesh retention of fish eggs, the taxonomic identification is done at the family level, resulting in the loss of detailed information. We separated Engraulidae eggs, obtained with 0.333 mm and 0.505 mm mesh bongo nets at 172 oceanographic stations in the southeastern Brazilian Bight, into four groups based on their morphometric characteristics. The difference in the abundance of eggs caught by the two nets was not significant for the groups with the highest volume, types A and B, but it was significant for type C (Engraulis anchoita), the most eccentric, and for type D, of the smallest volume. However, no significant difference was observed in the size of the eggs sampled by each net for E. anchoita and type D, which exhibited higher abundance in the 0.333 mm mesh net and a minor axis varying from 0.45 to 0.71 mm, smaller than both the 0.505 mm mesh aperture and the mesh diagonal.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could still deliver reasonable results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
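Point-density experiments like those described are commonly run by thinning the original cloud. A minimal sketch (random decimation to a target density; the uniform-thinning scheme is an assumption, not necessarily the authors' method):

```python
import random

def thin_point_cloud(points, target_density, area_m2, seed=0):
    """Randomly subsample a LiDAR point list to a target density
    (points per square metre) over a plot of known area."""
    keep = min(len(points), int(target_density * area_m2))
    return random.Random(seed).sample(points, keep)

cloud = [(float(i), float(i), 0.0) for i in range(1000)]  # hypothetical points
thinned = thin_point_cloud(cloud, 2.0, 100.0)             # thin to 2.0 pts/m2
```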
The estimation of total body fat by inelastic neutron scattering - a geometrical feasibility study
International Nuclear Information System (INIS)
Lizos, F.; Kotzasarlidoou, M.; Makridou, A.; Giannopoulou, K.
2012-01-01
A rough quantitative representation of the basic elements in a human body is shown. It deals with a hypothetical, normal adult weighting 70 kg. It is possible to measure two basic quantities, the FFM, standing for Fat Free Mass and the FM, standing for Fat Mass. The present simulation deals with the most important aspect of the estimation of storage fat in the human body and in order to accomplish such a task, it is considered a representation of the human body, containing a uniform distribution of triacylglycerols, in a shape of cylindrical phantom. The whole process is analyzed and simulated by a geometrical model and with the aid of a computer program which takes into consideration the different attenuation for neutrons and photons, the amount of gamma radiation reaching the detector is also calculated. The net result is the determination of sensitivity for a particular set-up and by relating the out coming data to the amount of carbon; the quantity of fat is estimated. In addition, the non-uniformity is calculated, from the computer programs expressing the consistency of the system. In order to determine the storage fat, a simulation model that will enable to represent the detection of the carbon atoms in triacylglycerols was built
Lalam, N.; Jacob, C.; Jagers, P.
2004-01-01
We propose a stochastic modelling of the PCR amplification process by a size-dependent branching process, starting with a supercritical Bienaymé-Galton-Watson transient phase and then entering a near-critical, size-dependent saturation phase. This model allows us to estimate the probability of replication.
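A size-dependent branching process of this kind can be sketched in a few lines. The linear saturation form and all parameter values below are illustrative assumptions, not the authors' near-critical model:

```python
import random

def simulate_pcr(n0=10, cycles=25, capacity=20000, p_max=0.9, seed=42):
    """Size-dependent branching sketch of PCR: each molecule duplicates
    with probability p(n) that falls linearly to zero as the reaction
    saturates (supercritical growth first, near-critical plateau later)."""
    rng = random.Random(seed)
    n = n0
    trajectory = [n]
    for _ in range(cycles):
        p = max(0.0, p_max * (1.0 - n / capacity))
        # binomial draw: number of molecules that successfully duplicate
        duplicated = sum(1 for _ in range(n) if rng.random() < p)
        n += duplicated
        trajectory.append(n)
    return trajectory

traj = simulate_pcr()
```

Early cycles grow roughly geometrically (factor 1 + p_max), then the trajectory plateaus near the capacity, reproducing the transient/saturation structure described above.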
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated by the Maximum Likelihood Estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly predict the categories of the binary response. As a result, the MLE estimators fail to converge, so they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to compare the chance of separation occurring in binary probit regression between the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreased and was nearly identical for the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE's, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
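Separation and its effect on the likelihood can be demonstrated in a few lines (a hedged sketch with made-up data and a no-intercept probit model, not the paper's simulation design): under perfect separation the log-likelihood increases monotonically in the coefficient, so no finite MLE exists.

```python
import math

def probit_loglik(beta, data):
    """Log-likelihood of a no-intercept probit model P(y=1|x) = Phi(beta*x)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    ll = 0.0
    for x, y in data:
        # clamp probabilities away from 0/1 for numerical safety
        p = min(max(Phi(beta * x), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

# Perfectly separated data: y = 1 exactly when x > 0.
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]
lls = [probit_loglik(b, data) for b in (0.5, 1.0, 2.0, 3.0)]
```

Every larger coefficient yields a higher likelihood, which is exactly the non-convergence the abstract refers to; Firth's approach removes it by adding a Jeffreys-prior penalty to this likelihood.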
Tanner-Smith, Emily E.; Tipton, Elizabeth
2014-01-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…
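For the intercept-only (average effect) model, the robust variance estimator such macros implement can be sketched as follows (a simplified sandwich form with user-supplied weights; not the exact macro output, which also applies small-sample corrections):

```python
def rve_meta(effects):
    """Robust variance estimation for an intercept-only meta-analysis.
    `effects` maps study id -> list of (effect, weight) pairs. Returns
    the weighted mean effect and a cluster-robust variance that sums
    weighted residuals within each study before squaring, so dependent
    effect sizes from the same study are handled together."""
    W = sum(w for es in effects.values() for _, w in es)
    b = sum(w * t for es in effects.values() for t, w in es) / W
    v = sum(sum(w * (t - b) for t, w in es) ** 2
            for es in effects.values()) / W ** 2
    return b, v

# Hypothetical studies: study1 contributes two dependent effect sizes.
effects = {"study1": [(0.2, 1.0), (0.4, 1.0)], "study2": [(0.6, 2.0)]}
b, v = rve_meta(effects)
```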
DEFF Research Database (Denmark)
Freltoft, T.; Kjems, Jørgen; Sinha, S. K.
1986-01-01
Small-angle neutron scattering from normal, compressed, and water-suspended powders of aggregates of fine silica particles has been studied. The samples possessed average densities ranging from 0.008 to 0.45 g/cm3. Assuming power-law correlations between particles and a finite correlation length ξ, the authors derive the scattering function S(q) from specific models for particle-particle correlation in these systems. S(q) was found to provide a satisfactory fit to the data for all samples studied. The fractal dimension df corresponding to the power-law correlation was 2.61±0.1 for all dry samples, and 2…
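The fractal dimension quoted above is the kind of quantity obtained from the slope of S(q) in log-log space; a minimal sketch with synthetic power-law data (an illustration only, not the authors' full correlation-length model):

```python
import math

def fractal_dimension(qs, sqs):
    """Estimate d_f from the power-law regime S(q) ~ q^(-d_f) via the
    least-squares slope in log-log space."""
    xs = [math.log(q) for q in qs]
    ys = [math.log(s) for s in sqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

qs = [0.01 * 1.5 ** i for i in range(10)]   # synthetic q values
sqs = [q ** -2.61 for q in qs]              # ideal power law with d_f = 2.61
df = fractal_dimension(qs, sqs)
```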
Energy Technology Data Exchange (ETDEWEB)
Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov
2001-06-01
Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom.
Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M
2013-02-01
Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α = 0.05, power = 80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
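Power analyses of this kind typically follow the standard two-sample formula n = 2[σ(z_{1-α/2} + z_power)/Δ]² per arm. A generic sketch (illustrative inputs, not ADNI's variance estimates):

```python
import math
from statistics import NormalDist

def n_per_arm(sd, mean_change, reduction=0.25, alpha=0.05, power=0.80):
    """Sample size per arm to detect a fractional `reduction` in the
    mean rate of change, via the standard two-sample normal formula."""
    z = NormalDist()
    delta = reduction * mean_change          # treatment effect to detect
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return math.ceil(2 * (sd * (za + zb) / delta) ** 2)
```

With sd and mean rate both set to 1, detecting a 25% slowing needs 252 subjects per arm, dropping to 63 for a 50% slowing; lower-variance change measures (as reported for TBM) shrink these numbers quadratically.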
Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon
2015-04-01
Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is limited to those sites where extensive field data have been collected, which restricts their ability to predict bank erosion at the reach scale without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and non-uniqueness. Numerical models can also be too rigid with respect to detecting unexpected features such as the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternative modelling approach capable of using available data. The Self-Organizing Maps (SOM) approach is well suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland State, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and a Bootstrap approach. The basis of Bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the
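The competitive-cooperative learning the abstract refers to can be sketched with a minimal 1-D SOM; the toy 2-D data and all hyper-parameters below are illustrative assumptions:

```python
import math
import random

def train_som(data, n_nodes=5, iters=500, seed=0):
    """Minimal 1-D self-organizing map: a competitive winner search
    (best-matching unit) plus cooperative neighbourhood updates with
    a decaying learning rate and radius."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(iters):
        x = rng.choice(data)
        frac = t / iters
        lr = 0.5 * (1.0 - frac)                      # decaying learning rate
        radius = max(n_nodes / 2.0 * (1.0 - frac), 0.5)
        # competitive step: find the best-matching unit
        bmu = min(range(n_nodes),
                  key=lambda i: sum((w - a) ** 2 for w, a in zip(nodes[i], x)))
        # cooperative step: pull the BMU's neighbours toward the sample
        for i in range(n_nodes):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
            nodes[i] = [w + lr * h * (a - w) for w, a in zip(nodes[i], x)]
    return nodes

# Toy data: two clusters the trained map should come to span.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0), (0.9, 1.0), (1.0, 0.9)]
nodes = train_som(data)
```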
Size estimation, HIV prevalence and risk behaviours of female sex workers in Pakistan
International Nuclear Information System (INIS)
Altaf, A.; Aga, A.; McKinizie, M.H.; Abbas, Q.; Jafri, S.B.
2012-01-01
Objective: To provide size estimation and to determine risky behaviours and HIV prevalence among female sex workers in Pakistan, which has progressed from a low to a concentrated level of HIV epidemic. Methods: A cross-sectional study (geographic mapping and integrated behavioural and biological survey, IBBS) was conducted between August 2005 and January 2006 in Karachi, Hyderabad and Sukkur. A detailed questionnaire and a dry blood spot (DBS) specimen for HIV testing were collected by trained interviewers after informed consent. The study was ethically approved by review boards in Canada and Pakistan. Results: About 14,900 female sex workers were estimated to be operating in Sindh. A total of 1158 of them were interviewed for the study. The average age of the sex workers was 27.4 ± 6.7 years, and the majority were married (787; 67.9%) and uneducated (764; 65.9%). Sindhi (26.4%) was the predominant ethnicity. The mean number of paid clients was 2.1 ± 1.2. Three workers, all from Karachi, were confirmed HIV positive (0.75%; 95% CI 0.2-2.2%). Condom use at last sexual act was highest (68%) among brothel-based workers in Karachi, and lowest in Sukkur, where only 1.3% of street-based workers reported using a condom at last sexual act. Overall use of illicit drugs through injections was negligible. Conclusion: HIV prevalence among female sex workers in Sindh, Pakistan is low but risky behaviours are present. Well organised service delivery programmes can help promote safer practices. (author)
Iglesias, Roberto Magno; Szklo, André Salem; Souza, Mirian Carvalho de; de Almeida, Liz Maria
2017-01-01
Brazil experienced a large decline in smoking prevalence between 2008 and 2013. Tax rate increases since 2007 and a new tobacco tax structure in 2012 may have played an important role in this decline. However, continuous tax rate increases pushed cigarette prices up faster than personal income growth, and therefore some consumers, especially lower income individuals, may have migrated to cheaper illicit cigarettes. Our objective was to use tobacco surveillance data to estimate the size of illicit tobacco consumption before and after the excise tax increases. We defined a threshold price and compared it with purchasing prices obtained from two representative surveys conducted in 2008 and 2013 to estimate the proportion of illicit cigarette use among daily smokers. A generalised linear model was specified to understand whether the absolute difference in proportions over time differed by sociodemographic group and consumption level. Our findings were validated using an alternative method. The total proportion of illicit daily consumption increased from 16.6% to 31.1% between 2008 and 2013. We observed a pattern of unadjusted absolute decreases in cigarette smoking prevalence and increases in the proportion of illicit consumption, irrespective of gender, age, educational level, area of residence and amount of cigarettes consumed. The strategy of raising taxes has increased government revenues and reduced smoking prevalence, but has also resulted in increased illicit trade. Surveillance data can be used to provide information on illicit tobacco trade to help in the implementation of WHO Framework Convention on Tobacco Control (FCTC) article 15 and the FCTC Protocol to Eliminate Illicit Trade in Tobacco Products. Published by the BMJ Publishing Group Limited.
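The threshold-price method reduces to counting purchases below a plausibility threshold; a trivial sketch with hypothetical prices (the threshold value and data are assumptions for illustration):

```python
def illicit_share(prices, threshold):
    """Proportion of reported purchase prices below a plausibility
    threshold, the proxy used to flag likely illicit cigarettes."""
    return sum(1 for p in prices if p < threshold) / len(prices)

prices = [2.0, 3.5, 4.0, 1.5, 5.0, 1.0]   # hypothetical reported prices
share = illicit_share(prices, 2.5)
```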
Directory of Open Access Journals (Sweden)
Adriana Bruscato Bortoluzzo
2011-01-01
Full Text Available The objective of this article is to estimate insurance claims from an auto dataset using the Tweedie and zero-adjusted inverse Gaussian (ZAIG methods. We identify factors that influence claim size and probability, and compare the results of these methods which both forecast outcomes accurately. Vehicle characteristics like territory, age, origin and type distinctly influence claim size and probability. This distinct impact is not always present in the Tweedie estimated model. Auto insurers should consider estimating total claim size using both the Tweedie and ZAIG methods. This allows for an estimation of confidence interval based on empirical quantiles using bootstrap simulation. Furthermore, the fitted models may be useful in developing a strategy to obtain premium pricing.
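The confidence interval based on empirical quantiles mentioned above can be sketched with a generic percentile bootstrap; the claim data and the choice of statistic here are illustrative assumptions, not the study's models:

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic, and take empirical quantiles of the replicates."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical claim sizes (zeros are policies with no claim).
claims = [0.0, 0.0, 0.0, 120.0, 0.0, 340.0, 0.0, 0.0, 80.0, 0.0]
lo, hi = bootstrap_ci(claims, statistics.mean)
```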
Di Biagio, C.; Formenti, P.; Caponi, L.; Cazaunau, M.; Pangui, E.; Journet, E.; Nowak, S.; Caquineau, S.; Andreae, M. O.; Kandler, K.; Saeed, T.; Piketh, S.; Seibert, D.; Williams, E.; Balkanski, Y.; Doussin, J. F.
2017-12-01
Mineral dust is one of the most abundant aerosol species in the atmosphere and strongly contributes to the global and regional direct radiative effect. Large uncertainties still persist on the magnitude and overall sign of the dust direct effect, and one of the main unknowns is how much mineral dust absorbs light in the shortwave (SW) spectral range. Aerosol absorption is represented either by the imaginary part (k) of the complex refractive index or by the single scattering albedo (SSA, i.e. the ratio of the scattering to the extinction coefficient). In this study we present a new dataset of SW complex refractive indices and SSA for mineral dust aerosols obtained from in situ measurements in the 4.2 m3 CESAM simulation chamber at LISA (Laboratoire Interuniversitaire des Systèmes Atmosphériques) in Créteil, France. The investigated dust aerosol samples originated from major desert sources worldwide, including the African Sahara and Sahel, Eastern Asia, the Middle East, Southern Africa, Australia, and the Americas, with differing iron oxide content. Results from the present study provide a regional mapping of SW absorption by dust and show that the imaginary part of the refractive index varies widely (by up to a factor of 6: 0.003-0.02 at 370 nm and 0.001-0.003 at 950 nm) across the different source areas due to changes in the particle iron oxide content. The SSA for dust varies between 0.75-0.90 at 370 nm and 0.95-0.99 at 950 nm, with the largest absorption observed for Sahelian and Australian dust aerosols. Our range of variability for k and SSA is well bracketed by already published literature estimates, but suggests that region-dependent values should be used in models. The possible relationship between k and the dust iron oxide content is investigated with the aim of providing a parameterization of the region-dependent dust absorption for inclusion in climate models.
Zeng, Chen; Rosengard, Sarah Z.; Burt, William; Peña, M. Angelica; Nemcek, Nina; Zeng, Tao; Arrigo, Kevin R.; Tortell, Philippe D.
2018-06-01
We evaluate several algorithms for the estimation of phytoplankton size class (PSC) and functional type (PFT) biomass from ship-based optical measurements in the Subarctic Northeast Pacific Ocean. Using underway measurements of particulate absorption and backscatter in surface waters, we derived estimates of PSC/PFT based on chlorophyll-a concentrations (Chl-a), particulate absorption spectra and the wavelength dependence of particulate backscatter. Optically-derived [Chl-a] and phytoplankton absorption measurements were validated against discrete calibration samples, while the derived PSC/PFT estimates were validated using size-fractionated Chl-a measurements and HPLC analysis of diagnostic photosynthetic pigments (DPA). Our results show that PSC/PFT algorithms based on [Chl-a] and particulate absorption spectra performed significantly better than the backscatter slope approach. These two more successful algorithms yielded estimates of phytoplankton size classes that agreed well with HPLC-derived DPA estimates (RMSE = 12.9%, and 16.6%, respectively) across a range of hydrographic and productivity regimes. Moreover, the [Chl-a] algorithm produced PSC estimates that agreed well with size-fractionated [Chl-a] measurements, and estimates of the biomass of specific phytoplankton groups that were consistent with values derived from HPLC. Based on these results, we suggest that simple [Chl-a] measurements should be more fully exploited to improve the classification of phytoplankton assemblages in the Northeast Pacific Ocean.
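Abundance-based PSC algorithms of the kind evaluated typically follow the three-component formulation of Brewin et al. (2010), in which chlorophyll in small cells saturates as total Chl-a rises; the parameter values below are illustrative assumptions, not this study's fit:

```python
import math

def size_class_fractions(chl, cm_pn=1.057, d_pn=0.851, cm_p=0.107, d_p=6.801):
    """Three-component PSC model sketch: small-cell Chl-a saturates with
    total Chl-a, so the micro-plankton fraction grows with biomass."""
    c_pn = cm_pn * (1.0 - math.exp(-d_pn * chl))   # pico + nano Chl-a
    c_p = cm_p * (1.0 - math.exp(-d_p * chl))      # pico Chl-a
    return {"pico": c_p / chl,
            "nano": (c_pn - c_p) / chl,
            "micro": (chl - c_pn) / chl}

low, high = size_class_fractions(0.1), size_class_fractions(5.0)
```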
Ferrare, R. A.; Melfi, S. H.; Whiteman, D. N.; Evans, K. D.; Poellot, M.; Kaufman, Y. J.
1998-01-01
Aerosol backscattering and extinction profiles measured by the NASA Goddard Space Flight Center Scanning Raman Lidar (SRL) during the remote cloud sensing (RCS) intensive operations period (IOP) at the Department of Energy Atmospheric Radiation Measurement (ARM) southern Great Plains (SGP) site during two nights in April 1994 are discussed. These profiles are shown to be consistent with the simultaneous aerosol size distribution measurements made by a PCASP (Passive Cavity Aerosol Spectrometer Probe) optical particle counter flown on the University of North Dakota Citation aircraft. We describe a technique which uses both lidar and PCASP measurements to derive the dependence of particle size on relative humidity and the aerosol real refractive index n, and to estimate the effective single-scattering albedo ω0. Values of n ranged between 1.4-1.5 (dry) and 1.37-1.47 (wet); ω0 varied between 0.7 and 1.0. The single-scattering albedo derived from this technique is sensitive to the manner in which absorbing particles are represented in the aerosol mixture; representing the absorbing particles as an internal mixture rather than the external mixture assumed here results in generally higher values of ω0. The lidar measurements indicate that the change in particle size with relative humidity as measured by the PCASP can be represented in the form discussed by Hänel with the exponent γ = 0.3 ± 0.05. The variations in aerosol optical and physical characteristics captured in the lidar and aircraft size distribution measurements are discussed in the context of the meteorological conditions observed during the experiment.
Directory of Open Access Journals (Sweden)
S. Singh
2016-11-01
Full Text Available Biomass burning (BB) aerosols have a significant effect on regional climate, and represent a significant uncertainty in our understanding of climate change. Using a combination of cavity ring-down spectroscopy and integrating nephelometry, the single scattering albedo (SSA) and Ångström absorption exponent (AAE) were measured for several North American biomass fuels. This was done for several particle diameters for the smoldering and flaming stages of white pine, red oak, and cedar combustion. Measurements were made over a wider wavelength range than in any previous direct measurement of BB particles. While the offline sampling system used in this work shows promise, some changes in particle size distribution were observed, and a thorough evaluation of this method is required. The uncertainty of SSA was 6%, with the truncation angle correction of the nephelometer being the largest contributor to error. While scattering and extinction did show wavelength dependence, SSA did not. SSA values ranged from 0.46 to 0.74, and were not uniformly greater for the smoldering stage than for the flaming stage. SSA values changed with particle size, and not systematically so, suggesting that the proportion of tar balls to fractal black carbon changes with fuel type/state and particle size. SSA differences of 0.15-0.4 or greater can be attributed to fuel type or fuel state for fresh soot. AAE values were quite high (1.59-5.57), despite SSA being lower than is typically observed in wildfires. The SSA and AAE values in this work do not fit well with current schemes that relate these factors to the modified combustion efficiency of a burn. Combustion stage, particle size, fuel type, and fuel condition were found to have the most significant effects on the intrinsic optical properties of fresh soot, though additional factors influence aged soot.
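The AAE is obtained from absorption coefficients at two wavelengths by assuming a power law b_abs ~ lambda^(-AAE); a minimal sketch (synthetic inputs, not the paper's measurements):

```python
import math

def absorption_angstrom_exponent(b_abs1, lam1, b_abs2, lam2):
    """AAE from absorption coefficients at two wavelengths, assuming a
    power-law spectral dependence b_abs ~ lambda^(-AAE)."""
    return -math.log(b_abs1 / b_abs2) / math.log(lam1 / lam2)

# Synthetic check: absorption built with an exponent of exactly 2.
aae = absorption_angstrom_exponent(370.0 ** -2.0, 370.0, 950.0 ** -2.0, 950.0)
```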
Estimation of wildfire size and risk changes due to fuels treatments
Cochrane, M.A.; Moran, C.J.; Wimberly, M.C.; Baer, A.D.; Finney, M.A.; Beckendorf, K.L.; Eidenshink, J.; Zhu, Z.
2012-01-01
Human land use practices, altered climates, and shifting forest and fire management policies have increased the frequency of large wildfires several-fold. Mitigation of potential fire behaviour and fire severity has increasingly been attempted through pre-fire alteration of wildland fuels using mechanical treatments and prescribed fires. Despite annual treatment of more than a million hectares of land, quantitative assessments of the effectiveness of existing fuel treatments at reducing the size of actual wildfires, or of how they might alter the risk of burning across landscapes, are currently lacking. Here, we present a method for estimating spatial probabilities of burning as a function of extant fuels treatments for any wildland fire-affected landscape. We examined the landscape effects of more than 72 000 ha of wildland fuel treatments involved in 14 large wildfires that burned 314 000 ha of forests in nine US states between 2002 and 2010. Fuels treatments altered the probability of fire occurrence both positively and negatively across landscapes, effectively redistributing fire risk by changing surface fire spread rates and reducing the likelihood of crowning behaviour. Trade-offs are created between the formation of large areas with low probabilities of increased burning and smaller, well-defined regions with reduced fire risk.
Relative estimates of TCA cycle pool size from 14CO2 production profiles
International Nuclear Information System (INIS)
Kelleher, J.K.; Cesta, M.L.; Holleran, A.L.
1986-01-01
In metabolic and isotopic steady state, the rate of 14CO2 production by TCA cycle intermediates labeled at different positions is linear. However, before the system reaches isotopic steady state, the rate of 14CO2 production is non-linear. The x-intercept extrapolated from the linear phase indicates the turnover rate of all metabolic pools the tracer must pass through. By exposing identical systems to 14C succinate labeled in different positions, the contribution of TCA cycle pools to the non-linear phase may be considered. Specifically, the extrapolated x-intercept for [2,3-14C]succinate will be greater than the x-intercept for [1,4-14C]succinate if the TCA cycle pools are a contributing factor to the non-linear phase. The authors have used this method to analyze pyruvate oxidation in AS 30D hepatoma cells. They found that the extrapolated x-intercepts for the two tracers were identical. This indicates that the non-linear phase resulted from equilibration of the tracer with pools prior to entering the TCA cycle, i.e. lactate. Using this technique, it may be possible to estimate the variations in TCA cycle pool sizes in vivo.
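The x-intercept extrapolation from the linear phase amounts to a least-squares line fit; a minimal sketch with synthetic data (the time points and slope are made up for illustration):

```python
def x_intercept(ts, ys):
    """Least-squares line through the linear phase of cumulative 14CO2
    production; its x-intercept estimates the lag due to pool turnover."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    intercept = my - slope * mt
    return -intercept / slope

# Synthetic linear phase: production rises with slope 3 starting at t = 5.
lag = x_intercept([6, 7, 8, 9, 10], [3.0, 6.0, 9.0, 12.0, 15.0])
```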
Estimated ventricle size using Evans index: reference values from a population-based sample.
Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C
2017-03-01
Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.
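Evans index itself is a simple ratio (maximal frontal horn width over maximal internal skull diameter on the same axial slice). A sketch of its computation and of the sample proportion above the conventional 0.3 cutoff, with hypothetical measurement values:

```python
def evans_index(frontal_horn_width_mm, max_internal_skull_diameter_mm):
    """Evans index: maximal frontal horn width divided by the maximal
    internal diameter of the skull, measured on axial CT."""
    return frontal_horn_width_mm / max_internal_skull_diameter_mm

def proportion_pathological(indices, cutoff=0.3):
    """Share of a sample exceeding the conventional cutoff (>0.3)."""
    return sum(1 for e in indices if e > cutoff) / len(indices)

# Hypothetical measurements (mm) for one subject:
print(round(evans_index(42.0, 130.0), 3))                 # → 0.323
# Hypothetical small sample of indices:
print(proportion_pathological([0.28, 0.31, 0.25, 0.36]))  # → 0.5
```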
The Effect of Childhood Family Size on Fertility in Adulthood: New Evidence From IV Estimation.
Cools, Sara; Kaldager Hart, Rannveig
2017-02-01
Although fertility is positively correlated across generations, the causal effect of children's experience with larger sibships on their own fertility in adulthood is poorly understood. With the sex composition of the two firstborn children as an instrumental variable, we estimate the effect of sibship size on adult fertility using high-quality data from Norwegian administrative registers. Our study sample is all firstborns or second-borns during the 1960s in Norwegian families with at least two children (approximately 110,000 men and 104,000 women). An additional sibling has a positive effect on male fertility, mainly causing them to have three children themselves, but has a negative effect on female fertility at the same margin. Investigation into mediators reveals that mothers of girls shift relatively less time from market to family work when an additional child is born. We speculate that this scarcity in parents' time makes girls aware of the strains of life in large families, leading them to limit their own number of children in adulthood.
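With a binary instrument such as the sex composition of the two firstborns, the instrumental-variable logic reduces in its simplest form to the Wald estimator: the difference in mean outcome between instrument groups divided by the difference in mean treatment. A sketch with toy data (not the Norwegian registry data, and ignoring covariates and standard errors):

```python
# Wald/IV estimator sketch: z is the binary instrument (e.g. 1 = same-sex
# first two children), d the treatment (e.g. 1 = third child born),
# y the outcome (e.g. own completed fertility).

def wald_iv(y, d, z):
    """Wald estimator with a binary instrument z in {0, 1}."""
    mean = lambda v: sum(v) / len(v)
    y1 = [yi for yi, zi in zip(y, z) if zi == 1]
    y0 = [yi for yi, zi in zip(y, z) if zi == 0]
    d1 = [di for di, zi in zip(d, z) if zi == 1]
    d0 = [di for di, zi in zip(d, z) if zi == 0]
    return (mean(y1) - mean(y0)) / (mean(d1) - mean(d0))

# Toy data, four individuals:
y = [3, 2, 2, 2]
d = [1, 0, 0, 0]
z = [1, 1, 0, 0]
print(wald_iv(y, d, z))  # → 1.0
```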
Size of the coming solar cycle 24 based on Ohl's Precursor Method, final estimate
Directory of Open Access Journals (Sweden)
R. P. Kane
2010-07-01
Full Text Available In Ohl's Precursor Method (Ohl, 1966, 1976), the geomagnetic activity during the declining phase of a sunspot cycle is shown to be well correlated with the size (maximum sunspot number Rz(max)) of the next cycle. For solar cycle 24, Kane (2007a) used aa(min) = 15.5 (12-month running mean), which occurred during March–May of 2006, and made a preliminary estimate Rz(max) = 124±26 (12-month running mean). However, in the next few months, the aa index first increased and then decreased to a new low value of 14.8 in July 2007. With this new low value, the prediction was Rz(max) = 117±26 (12-month running mean). However, even this proved a false signal. Since then, the aa values have decreased considerably and the last 12-monthly value is 8.7, centered at May 2009. For solar cycle 24, using aa(min) = 8.7, the latest prediction is Rz(max) = 58.0±25.0.
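Operationally, Ohl's method is a linear regression of the coming cycle's Rz(max) on the aa(min) precursor, calibrated on past cycles. A minimal sketch; the calibration pairs below are hypothetical placeholders, not the observed historical cycles used by Kane:

```python
# Sketch of the precursor regression: fit Rz(max) = m * aa(min) + b on
# past cycles, then evaluate at the new aa(min).

def linear_fit(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

def predict_rz_max(aa_min_history, rz_max_history, aa_min_new):
    m, b = linear_fit(aa_min_history, rz_max_history)
    return m * aa_min_new + b

# Hypothetical calibration pairs (aa(min), following cycle's Rz(max)):
aa_hist = [10.0, 12.0, 14.0, 16.0]
rz_hist = [70.0, 84.0, 98.0, 112.0]
print(round(predict_rz_max(aa_hist, rz_hist, 8.7), 1))  # → 60.9
```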
Fencl, Martin; Jörg, Rieckermann; Vojtěch, Bareš
2015-04-01
Commercial microwave links (MWL) are point-to-point radio systems used in the backhaul networks of cellular operators. For several years, they have been suggested as rainfall sensors complementary to rain gauges and weather radars because, first, they operate at frequencies where raindrops represent a significant source of attenuation and, second, cellular networks almost completely cover urban and rural areas. Usually, path-average rain rates along a MWL are retrieved from the rain-induced attenuation of received MWL signals with a simple model based on a power-law relationship. The model is often parameterized based on the characteristics of a particular MWL, such as frequency, polarization and the drop size distribution (DSD) along the MWL. As information on the DSD is usually not available in operational conditions, the model parameters are usually considered constant. Unfortunately, this introduces bias into rainfall estimates from MWL. In this investigation, we propose a generic method to eliminate this bias in MWL rainfall estimates. Specifically, we search for attenuation statistics that make it possible to classify rain events into distinct groups for which the same power-law parameters can be used. The theoretical attenuation used in the analysis is calculated from DSD data using the T-matrix method. We test the validity of our approach on observations from a dedicated field experiment in Dübendorf (CH) with a 1.85-km long commercial dual-polarized microwave link transmitting at a frequency of 38 GHz, an autonomous network of 5 optical disdrometers and 3 rain gauges distributed along the path of the MWL. The data are recorded at a high temporal resolution of up to 30 s. The approach is further tested on data from an experimental catchment in Prague (CZ), where 14 MWLs, operating at 26, 32 and 38 GHz frequencies, and reference rainfall from three RGs are recorded every minute. Our results suggest that, for our purpose, rain events can be nicely characterized based on
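The power-law retrieval step can be sketched as follows. The coefficients below are illustrative placeholders, not values fitted in the study; in practice they depend on frequency, polarization and the DSD (tabulated values exist, e.g. in ITU recommendations):

```python
# Sketch: invert the power law k = a * R**b, where k is the specific
# rain-induced attenuation (dB/km) along the link and R the path-average
# rain rate (mm/h). Coefficients a, b are illustrative placeholders.

def rain_rate_from_attenuation(total_attenuation_db, path_km, a=0.39, b=0.99):
    """Retrieve the path-average rain rate: R = (k / a) ** (1 / b)."""
    k = total_attenuation_db / path_km   # specific attenuation, dB/km
    return (k / a) ** (1.0 / b)

# 9.25 dB of rain-induced attenuation over a 1.85-km link:
rate = rain_rate_from_attenuation(9.25, 1.85)
```

A bias of the kind the paper targets appears when fixed (a, b) are applied to events whose DSD differs from the one the coefficients were derived for.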
How Big Is It Really? Assessing the Efficacy of Indirect Estimates of Body Size in Asian Elephants.
Directory of Open Access Journals (Sweden)
Simon N Chapman
Full Text Available Information on an organism's body size is pivotal in understanding its life history and fitness, as well as helping inform conservation measures. However, for many species, particularly large-bodied wild animals, taking accurate body size measurements can be a challenge. Various means to estimate body size have been employed, from more direct methods such as using photogrammetry to obtain height or length measurements, to indirect prediction of weight using other body morphometrics or even the size of dung boli. It is often unclear how accurate these measures are because they cannot be compared to objective measures. Here, we investigate how well existing estimation equations predict the actual body weight of Asian elephants Elephas maximus, using body measurements (height, chest girth, length, foot circumference and neck circumference) taken directly from a large population of semi-captive animals in Myanmar (n = 404). We then define new and better fitting formulas to predict body weight in Myanmar elephants from these readily available measures. We also investigate whether the important parameters height and chest girth can be estimated from photographs (n = 151). Our results show considerable variation in the ability of existing estimation equations to predict weight, and that the equations proposed in this paper predict weight better in almost all circumstances. We also find that measurements from standardised photographs reflect body height and chest girth after applying minor adjustments. Our results have implications for size estimation of large wild animals in the field, as well as for management in captive settings.
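Weight-prediction equations of this kind are typically power laws fitted by log-log regression. A sketch of the fitting step, using synthetic girth/weight pairs generated from an assumed power law (the coefficients 0.01 and 2.8 are illustrative, not the paper's fitted formulas):

```python
import math

def fit_power_law(x, y):
    """OLS fit of y = a * x**b via log-log regression; returns (a, b)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (w - my) for u, w in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic chest-girth (cm) / weight (kg) pairs lying exactly on
# W = 0.01 * G**2.8, so the fit should recover the exponent:
girth = [250, 300, 350, 400]
weight = [0.01 * g ** 2.8 for g in girth]
a, b = fit_power_law(girth, weight)
print(round(b, 3))  # → 2.8
```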
Xue, Ying; Ren, Yiping; Meng, Wenrong; Li, Long; Mao, Xia; Han, Dongyan; Ma, Qiuyun
2013-09-01
Cephalopods play key roles in global marine ecosystems as both predators and prey. Regression-based estimation of the original size and weight of a cephalopod from beak measurements is a powerful tool for investigating the feeding ecology of predators at higher trophic levels. In this study, regression relationships among beak measurements and body length and weight were determined for an octopus species (Octopus variabilis), an important endemic cephalopod species in the northwest Pacific Ocean. A total of 193 individuals (63 males and 130 females) were collected at monthly intervals from Jiaozhou Bay, China. Regression relationships among six beak measurements (upper hood length, UHL; upper crest length, UCL; lower hood length, LHL; lower crest length, LCL; and upper and lower beak weights, UBW and LBW) and mantle length (ML), total length (TL) and body weight (W) were determined. Results showed that the relationships between beak size and TL and between beak size and ML were linear, while those between beak size and W fitted a power function model. LHL and UCL were the most useful measurements for estimating the size and biomass of O. variabilis. The relationships among beak measurements and body length (either ML or TL) did not differ significantly between the sexes, while those among several beak measurements (UHL, LHL and LBW) and body weight (W) did. Since males of this species have a slightly greater body weight distribution than females, body weight was not an appropriate measurement for estimating size and biomass, especially when the sex of individuals in the stomachs of predators is unknown. These relationships provide essential information for future use in size and biomass estimation of O. variabilis, as well as in the estimation of predator/prey size ratios in the diets of top predators.
Harish, Varun; Raymond, Andrew P; Issler, Andrea C; Lajevardi, Sepehr S; Chang, Ling-Yun; Maitz, Peter K M; Kennedy, Peter
2015-02-01
The purpose of this study was to compare burn size estimation between referring centres and Burn Units in adult patients transferred to Burn Units in Sydney, Australia. A review of all adults transferred to Burn Units in Sydney, Australia between January 2009 and August 2013 was performed. The TBSA estimated by the referring institution was compared with the TBSA measured at the Burns Unit. There were 698 adults transferred to a Burns Unit. Equivalent TBSA estimation between the referring hospital and Burns Unit occurred in 30% of patients. Overestimation occurred at a ratio exceeding 3:1 with respect to underestimation, with the difference between the referring institutions and Burns Unit estimation being statistically significant (Pburn-injured patients as well as in patients transferred more than 48h after the burn (Pburn (Pburns (≥20% TBSA) were found to have more satisfactory burn size estimations compared with less severe injuries (burn size assessment by referring centres. The systemic tendency for overestimation occurs throughout the entire TBSA spectrum, and persists with increasing time after the burn. Underestimation occurs less frequently but rises with increasing time after the burn and with increasing TBSA. Severe burns (≥20% TBSA) are more accurately estimated by the referring hospital. The inaccuracies in burn size assessment have the potential to result in suboptimal treatment and inappropriate referral to specialised Burn Units. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
National Research Council Canada - National Science Library
Liu, C
2001-01-01
The objectives in this report are to: determine the inherent critical initial crack size in a particulate composite material, determine the statistical distribution function of the inherent critical crack size, normal distribution, two...
Testing the quantity–quality model of fertility: Estimation using unrestricted family size models
Mogstad, Magne; Wiswall, Matthew
2016-01-01
We examine the relationship between child quantity and quality. Motivated by the theoretical ambiguity regarding the sign of the marginal effects of additional siblings on children's outcomes, our empirical model allows for an unrestricted relationship between family size and child outcomes. We find that the conclusion in Black, Devereux, and Salvanes (2005) of no family size effect does not hold after relaxing their linear specification in family size. We find nonzero effects of family size ...
Esfahani, Milad Rabbani; Pallem, Vasanta L.; Stretz, Holly A.; Wells, Martha J. M.
2018-01-01
Knowledge of the interactions between gold nanoparticles (GNPs) and dissolved organic matter (DOM) is significant in the development of detection devices for environmental sensing, studies of environmental fate and transport, and advances in antifouling water treatment membranes. The specific objective of this research was to spectroscopically investigate the fundamental interactions between citrate-stabilized gold nanoparticles (CT-GNPs) and DOM. Studies indicated that 30 and 50 nm diameter GNPs promoted disaggregation of the DOM. This result (disaggregation of an environmentally important polyelectrolyte) will be quite useful regarding antifouling properties in water treatment and water-based sensing applications. Furthermore, resonance Rayleigh scattering results showed significant enhancement in the UV range, which can be useful to characterize DOM and can be exploited as an analytical tool to better sense and improve our comprehension of nanomaterial interactions with environmental systems. CT-GNPs having core size diameters of 5, 10, 30, and 50 nm were studied in the absence and presence of added DOM at 2 and 8 ppm at low ionic strength and near neutral pH (6.0-6.5) approximating surface water conditions. Interactions were monitored by cross-interpretation among ultraviolet (UV)-visible extinction spectroscopy, excitation-emission matrix (EEM) spectroscopy (emission and Rayleigh scattering), and dynamic light scattering (DLS). This comprehensive combination of spectroscopic analyses lends new insights into the antifouling behavior of GNPs. The CT-GNP-5 and -10 controls emitted light and aggregated. In contrast, the CT-GNP-30 and CT-GNP-50 controls scattered light intensely, but did not aggregate and did not emit light. The presence of any CT-GNP did not affect the extinction spectra of DOM, and the presence of DOM did not affect the extinction spectra of the CT-GNPs. The emission spectra (visible range) differed only slightly between calculated and actual
National Research Council Canada - National Science Library
Patterson, Phillip
2000-01-01
.... Army Research Laboratory (ARL), Aberdeen Proving Ground (APG), MD. The focus of the work centers on the instrument setup and operation for performing particle size determinations on a polydispersed, camouflage paint conforming to the U.S...
Pollack, J. B.; Cuzzi, J. N.
1980-01-01
A semiempirical theory is developed which is based on simple physical principles and comparisons with laboratory measurements. The ultimate utility of this approach rests on its ability to successfully reproduce the observed single-scattering phase function for a wide variety of particle shapes, sizes and refractive indices. This approximate theory is developed for evaluating the interaction of randomly oriented, nonspherical particles with the total intensity component of electromagnetic radiation. Mie theory is used when the particle size parameter x (the ratio of particle circumference to wavelength) is less than some upper bound x0 (about 5). For x greater than x0, the interaction is divided into three components: diffraction, external reflection and transmission. The application of the theory is illustrated by considering the influence of the shape of tropospheric aerosols on their contribution to the earth's global albedo.
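The regime switch at x0 can be sketched directly from the definition of the size parameter; the dispatcher below is an illustration of the decision rule, not an implementation of the scattering components themselves:

```python
import math

def size_parameter(diameter, wavelength):
    """x = particle circumference / wavelength = pi * d / lambda
    (diameter and wavelength in the same units)."""
    return math.pi * diameter / wavelength

def scattering_regime(diameter, wavelength, x0=5.0):
    """Per the semiempirical scheme: full Mie theory for x < x0, otherwise
    the diffraction / external-reflection / transmission decomposition."""
    x = size_parameter(diameter, wavelength)
    return "mie" if x < x0 else "geometric-components"

# A 0.5 um aerosol in visible light (0.55 um) falls in the Mie regime:
print(scattering_regime(0.5, 0.55))  # → mie
```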
Safarnejad, Ali; Groot, Wim; Pavlova, Milena
2018-01-30
Estimation of the size of populations at risk of HIV is a key activity in the surveillance of the HIV epidemic. The existing framework for considering future research needs may provide decision-makers with a basis for a fair process of deciding on the methods of the estimation of the size of key populations at risk of HIV. This study explores the extent to which stakeholders involved with population size estimation agree with this framework, and thus, the study updates the framework. We conducted 16 in-depth interviews with key informants from city and provincial governments, NGOs, research institutes, and the community of people at risk of HIV. Transcripts were analyzed and reviewed for significant statements pertaining to criteria. Variations and agreement around criteria were analyzed, and emerging criteria were validated against the existing framework. Eleven themes emerged which are relevant to the estimation of the size of populations at risk of HIV in Viet Nam. Findings on missing criteria, inclusive participation, community perspectives and conflicting weight and direction of criteria provide insights for an improved framework for the prioritization of population size estimation methods. The findings suggest that the exclusion of community members from decision-making on population size estimation methods in Viet Nam may affect the validity, use, and efficiency of the evidence generated. However, a wider group of decision-makers, including community members among others, may introduce diverse definitions, weight and direction of criteria. Although findings here may not apply to every country with a transitioning economy or to every emerging epidemic, the principles of fair decision-making, value of community participation in decision-making and the expected challenges faced, merit consideration in every situation.
Liu, Jingxia; Colditz, Graham A
2018-05-01
There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are often assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) approach proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect for equal to unequal cluster sizes. We discuss a commonly used working correlation structure in CRTs, the exchangeable structure, and derive simpler formulas for RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to compensate for the efficiency loss. Additionally, we propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
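Under an exchangeable working correlation, a cluster of size n contributes information proportional to n / (1 + (n - 1)ρ), the inverse of its design effect. A hedged sketch of the resulting relative efficiency (this is the standard exchangeable-correlation form, not necessarily the paper's exact derivation for all outcome types):

```python
def info(cluster_sizes, rho):
    """Information-style sum under exchangeable working correlation:
    each cluster of size n contributes n / (1 + (n - 1) * rho)."""
    return sum(n / (1 + (n - 1) * rho) for n in cluster_sizes)

def relative_efficiency(cluster_sizes, rho):
    """RE of unequal vs equal cluster sizes with the same number of
    clusters and the same total N; RE <= 1, with 1 for equal sizes."""
    k = len(cluster_sizes)
    nbar = sum(cluster_sizes) / k
    equal = k * nbar / (1 + (nbar - 1) * rho)
    return info(cluster_sizes, rho) / equal

# Two clusters of sizes 10 and 30 vs two clusters of 20, with rho = 0.1:
print(round(relative_efficiency([10, 30], 0.1), 3))  # → 0.939
```

The efficiency loss shown here is what motivates the paper's adjusted sample size.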
DEFF Research Database (Denmark)
Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.
2013-01-01
and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity…, thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium
Scattering of acoustic waves by small crustaceans
Andreeva, I. B.; Tarasov, L. L.
2003-03-01
Features of underwater sound scattering by small crustaceans are considered. The scattering data are obtained with the use of unique instrumentation that allows one to measure quantitative scattering characteristics (backscattering cross sections and angular scattering patterns) for crustaceans of different sizes, at different frequencies (20–200 kHz) and different insonification aspects. A computational model of crustaceans is considered with allowance for both the soft tissues of the main massive part of the animal's body and the stiff armour. The model proves to be advantageous for explaining some scattering features observed in the experiments. The scattering cross sections of crustaceans measured by other researchers are presented in a unified form appropriate for comparison. Based on such a quantitative comparison, relatively simple approximate empirical formulas are proposed for estimating the backscattering cross sections of small (within several centimeters) marine crustaceans in a broad frequency range.
Solanki, Rekha Garg; Rajaram, Poolla; Bajpai, P. K.
2018-05-01
This work presents the growth and characterization of CdS nanoparticles and the estimation of their lattice strain and crystallite size by X-ray peak profile analysis. The CdS nanoparticles were synthesized by a non-aqueous solvothermal method and were characterized by powder X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman and UV-visible spectroscopy. XRD confirms that the CdS nanoparticles have the hexagonal structure. The Williamson-Hall (W-H) method was used for the X-ray peak profile analysis. The strain-size plot (SSP) was used to separate the individual contributions of crystallite size and lattice strain to the X-ray peak broadening. Physical parameters such as strain, stress and energy density were calculated using various models, namely the isotropic strain model, the anisotropic strain model and the uniform deformation energy density model. The particle size was estimated from the TEM images to be in the range of 20-40 nm. The Raman spectrum shows the characteristic optical 1LO and 2LO vibrational modes of CdS. UV-visible absorption studies show that the band gap of the CdS nanoparticles is 2.48 eV. The results show that the crystallite size estimated from Scherrer's formula, the W-H plots and the SSP, and the particle size obtained from the TEM images, are in good agreement.
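The Williamson-Hall analysis fits β·cosθ = Kλ/D + 4ε·sinθ, so the intercept of a line through (4 sinθ, β cosθ) gives the crystallite size D and the slope gives the microstrain ε. A sketch on synthetic peaks generated from assumed values (D = 30 nm, ε = 0.002; Cu Kα wavelength and shape factor K = 0.9 assumed):

```python
import math

def williamson_hall(two_theta_deg, beta_rad, wavelength_nm=0.15406, K=0.9):
    """Fit beta*cos(theta) = K*lambda/D + 4*eps*sin(theta) by least squares.
    beta is the peak FWHM in radians; returns (crystallite_size_nm, strain)."""
    xs = [4 * math.sin(math.radians(t / 2)) for t in two_theta_deg]
    ys = [b * math.cos(math.radians(t / 2)) for t, b in zip(two_theta_deg, beta_rad)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return K * wavelength_nm / intercept, slope

# Synthetic peaks consistent with D = 30 nm and eps = 0.002:
D_true, eps_true = 30.0, 0.002
two_theta = [25.0, 44.0, 52.0]
beta = [(0.9 * 0.15406 / D_true + 4 * eps_true * math.sin(math.radians(t / 2)))
        / math.cos(math.radians(t / 2)) for t in two_theta]
D, eps = williamson_hall(two_theta, beta)
print(round(D, 1), round(eps, 4))  # → 30.0 0.002
```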
Angly, Florent E.; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A.; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A.; Barott, Katie; Cottrell, Matthew T.; Desnues, Christelle; Dinsdale, Elizabeth A.; Furlan, Mike; Haynes, Matthew; Henn, Matthew R.; Hu, Yongfei
2009-01-01
Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimate...
Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav
2018-04-01
Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (culture media. PM suspension of lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. Ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum as measured during 24 h.
Rossi, Carla
2013-06-01
The size of the illicit drug market is an important indicator for assessing the impact on society of a major part of the illegal economy and for evaluating drug policy and law enforcement interventions. The extent of illicit drug use and of the drug market can essentially only be estimated by indirect methods based on indirect measures and on data from various sources, such as administrative data sets and surveys. The combined use of several methodologies and data sets allows one to reduce the biases and inaccuracies of estimates obtained on the basis of each of them separately. This approach has been applied to Italian data. The estimation methods applied are capture-recapture methods with latent heterogeneity and multiplier methods. Several data sets have been used, both administrative and survey data. First, the retail dealer prevalence was estimated on the basis of administrative data, then the user prevalence by multiplier methods. Using information about the behaviour of dealers and consumers from survey data, the average amount of a substance used or sold and the average unit cost were estimated, allowing the size of the drug market to be estimated. The estimates were obtained using both a supply-side approach and a demand-side approach and were compared. These results are in turn used to estimate the interception rate for the different substances, in terms of the value of the substance seized relative to the total value of the substance to be sold at retail prices.
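The multiplier step scales a benchmark count (e.g. people in treatment) by the survey-estimated probability of appearing in that benchmark; the demand-side market size then multiplies prevalence by average quantity and unit price. A sketch with entirely hypothetical figures, not the Italian estimates:

```python
def multiplier_estimate(benchmark_count, multiplier):
    """Multiplier method: population size = benchmark / multiplier, where
    the multiplier is the probability that a population member appears in
    the benchmark data source (estimated from survey data)."""
    return benchmark_count / multiplier

def market_value(n_users, avg_grams_per_user_year, retail_price_per_gram):
    """Demand-side market size: users x average annual quantity x unit price."""
    return n_users * avg_grams_per_user_year * retail_price_per_gram

# Hypothetical: 25 000 users observed in treatment, 25% of all users
# enter treatment in a year:
users = multiplier_estimate(25_000, 0.25)
print(users)                           # → 100000.0
print(market_value(100_000, 50, 40))   # → 200000000
```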
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
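The non-robust method-of-moments estimator referred to above is Matheron's: for each distance bin, half the average squared difference over all point pairs in that bin. A minimal sketch on a toy 1D transect (real throughfall applications would use the robust variants and far more points):

```python
import math

def mom_variogram(points, values, bin_edges):
    """Matheron's method-of-moments estimator: gamma(h) = mean of
    0.5 * (z_i - z_j)**2 over pairs whose separation falls in each bin.
    Returns one estimate per bin (NaN for empty bins)."""
    nbins = len(bin_edges) - 1
    sums, counts = [0.0] * nbins, [0] * nbins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            for k in range(nbins):
                if bin_edges[k] <= d < bin_edges[k + 1]:
                    sums[k] += 0.5 * (values[i] - values[j]) ** 2
                    counts[k] += 1
                    break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

# Three collectors on a line with values 0, 1, 0; bins around lags 1 and 2:
print(mom_variogram([(0, 0), (1, 0), (2, 0)], [0, 1, 0], [0.5, 1.5, 2.5]))
# → [0.5, 0.0]
```

The estimator's sensitivity to outliers at small pair counts is exactly why the study compares it against robust and likelihood-based alternatives.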
Estimating age ratios and size of pacific walrus herds on coastal haulouts using video imaging.
Directory of Open Access Journals (Sweden)
Daniel H Monson
Full Text Available During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer, causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010-2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m2 (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03-0.06), and we documented ∼30,000 animals along ∼1 km of beach in 2011. Within the herds, dependent walruses (0-2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.
An Estimation of the Number and Size of Atoms in a Printed Period
Schaefer, Beth; Collett, Edward; Tabor-Morris, Anne; Croman, Joseph
2011-01-01
Elementary school students learn that atoms are very, very small. Students are also taught that atoms (and molecules) are the fundamental constituents of the material world. Numerical values of their size are often given, but, nevertheless, it is difficult to imagine their size relative to one's everyday surroundings. In order for students to…
Semi-empirical formula for large pore-size estimation from o-Ps annihilation lifetime
International Nuclear Information System (INIS)
Nguyen Duc Thanh; Tran Quoc Dung; Luu Anh Tuyen; Khuong Thanh Tuan
2007-01-01
The o-Ps annihilation rate in large pores was investigated by a semi-classical approach. A semi-empirical formula that simply correlates the pore size with the o-Ps lifetime is proposed. The calculated results agree well with experiment for pore sizes ranging from a few angstroms to several tens of nanometers. (author)
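For context, the classic Tao-Eldrup relation links the o-Ps pickoff lifetime to the radius of a spherical pore; it is valid for small pores, and the abstract's semi-empirical formula extends the correlation up to tens of nanometers. A sketch of the classic relation (not the paper's extended formula):

```python
import math

DELTA_R_NM = 0.166  # empirical electron-layer thickness, nm

def tao_eldrup_lifetime(radius_nm):
    """Classic Tao-Eldrup o-Ps pickoff lifetime (ns) for a spherical pore
    of radius R: tau = 0.5 / (1 - R/R0 + sin(2*pi*R/R0)/(2*pi)),
    with R0 = R + DELTA_R. Valid for sub-nanometer pores."""
    r0 = radius_nm + DELTA_R_NM
    x = radius_nm / r0
    return 0.5 / (1 - x + math.sin(2 * math.pi * x) / (2 * math.pi))

# A typical sub-nanometer free volume, R = 0.3 nm:
print(round(tao_eldrup_lifetime(0.3), 2))  # → 2.16
```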
Estimation of the PCR efficiency based on a size-dependent modelling of the amplification process
Lalam, N.; Jacob, C.; Jagers, P.
2005-01-01
We propose a stochastic modelling of the PCR amplification process by a size-dependent branching process starting as a supercritical Bienaymé–Galton–Watson transient phase and then having a saturation near-critical size-dependent phase. This model based on the concept of saturation allows one to
Magrath, Michael J. L.; Van Lieshout, Emile; Pen, Ido; Visser, G. Henk; Komdeur, Jan
2007-01-01
1. The parents of sexually size-dimorphic offspring are often assumed to invest more resources producing individuals of the larger sex. A range of different methods have been employed to estimate relative expenditure on the sexes, including quantifying sex-specific offspring growth, food intake,
Uijlenhoet, R.; Porrà, J.M.; Sempere Torres, D.; Creutin, J.D.
2006-01-01
A stochastic model of the microstructure of rainfall is used to derive explicit expressions for the magnitude of the sampling fluctuations in rainfall properties estimated from raindrop size measurements in stationary rainfall. The model is a marked point process, in which the points represent the
Directory of Open Access Journals (Sweden)
Azwarfarid Manca
2017-09-01
Full Text Available The dwindling numbers of the tri-spine horseshoe crab, Tachypleus tridentatus, have been reported globally, and its status in Malaysia is not well known. A study of dimorphism in adult body size and a population size estimation were conducted using the capture–mark–recapture method on adult T. tridentatus in Tawau, Sabah. The estimated population sizes of T. tridentatus ranged from 182 to 1095, with 95% confidence limits of 56–42,942 individuals (Schnabel formula). The multivariate discriminant Hotelling’s T2 test verified sexual size dimorphism among the adult T. tridentatus, with 97.7% separation between sexes (Hotelling’s T2 = 778.49, F = 152.85, p < 0.001), females being larger and heavier than males. The numbers estimated in this study are the first reported for T. tridentatus in Malaysia, and in Sabah in particular. Even though these numbers may slightly overestimate the actual population size in the area owing to the low number of recaptured individuals, they can serve for now as baseline data for horseshoe crab management purposes.
Optimizing cone beam CT scatter estimation in egs-cbct for a clinical and virtual chest phantom
International Nuclear Information System (INIS)
Thing, Rune Slot; Mainegra-Hing, Ernesto
2014-01-01
Purpose: Cone beam computed tomography (CBCT) image quality suffers from contamination from scattered photons in the projection images. Monte Carlo simulations are a powerful tool to investigate the properties of scattered photons. egs-cbct, a recent EGSnrc user code, provides the ability of performing fast scatter calculations in CBCT projection images. This paper investigates how optimization of user inputs can provide the most efficient scatter calculations. Methods: Two simulation geometries with two different x-ray sources were simulated, while the user input parameters for the efficiency improving techniques (EITs) implemented in egs-cbct were varied. Simulation efficiencies were compared to analog simulations performed without using any EITs. Resulting scatter distributions were confirmed unbiased against the analog simulations. Results: The optimal EIT parameter selection depends on the simulation geometry and x-ray source. Forced detection improved the scatter calculation efficiency by 80%. Delta transport improved calculation efficiency by a further 34%, while particle splitting combined with Russian roulette improved the efficiency by a factor of 45 or more. Combining these variance reduction techniques with a built-in denoising algorithm, efficiency improvements of 4 orders of magnitude were achieved. Conclusions: Using the built-in EITs in egs-cbct can improve scatter calculation efficiencies by more than 4 orders of magnitude. To achieve this, the user must optimize the input parameters to the specific simulation geometry. Realizing the full potential of the denoising algorithm requires keeping the statistical uncertainty below a threshold value above which the efficiency drops exponentially.
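The efficiency figures quoted above follow the usual Monte Carlo definition ε = 1/(s²T), where s² is the variance of the scored quantity and T the CPU time; a minimal sketch of the comparison (illustrative only, not code from egs-cbct):

```python
def mc_efficiency(variance, cpu_time_s):
    """Standard Monte Carlo efficiency metric: eps = 1 / (s^2 * T)."""
    return 1.0 / (variance * cpu_time_s)

def efficiency_gain(var_analog, t_analog, var_eit, t_eit):
    """Factor by which a run using EITs improves on the analog simulation."""
    return mc_efficiency(var_eit, t_eit) / mc_efficiency(var_analog, t_analog)
```

A variance reduction technique therefore pays off whenever it lowers s²T overall, even if each history becomes more expensive.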
Pollack, J. B.; Cuzzi, J. N.
1980-01-01
An approximate method is proposed for evaluating the interaction of randomly oriented, nonspherical particles with the total intensity component of electromagnetic radiation. When the particle size parameter, x, the ratio of particle circumference to wavelength, is less than some upper bound x(o) (about 5), Mie theory is used. For x greater than x(o), the interaction is divided into three components: diffraction, external reflection, and transmission. Physical optics theory is used to obtain the first of these components; geometrical optics theory is applied to the second; and a simple parameterization is employed for the third. The predictions of this theory are found to be in very good agreement with laboratory measurements for a wide variety of particle shapes, sizes, and refractive indexes. Limitations of the theory are also noted.
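The regime selection described above can be sketched as a simple dispatch on the size parameter; the cutoff x₀ ≈ 5 is taken from the abstract, and this sketch only selects the treatment rather than implementing Mie theory or the three-component decomposition.

```python
import math

def scattering_regime(radius, wavelength, x0=5.0):
    """Select the treatment for a randomly oriented nonspherical particle
    in the hybrid approach: full Mie theory below the size-parameter cutoff
    x0 (~5), and a diffraction / external-reflection / transmission
    decomposition above it."""
    x = 2.0 * math.pi * radius / wavelength  # circumference over wavelength
    if x < x0:
        return "mie"
    return "diffraction+reflection+transmission"
```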
Tanner-Smith, Emily E; Tipton, Elizabeth
2014-03-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.
Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys
Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik
2011-01-01
The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...
Estimating group size: effects of category membership, differential construal and selective exposure
Bosveld, W.; Koomen, W.; van der Pligt, J.
1996-01-01
Examined the role of category membership, differential construal, and selective exposure in consensus estimation concerning the social categorization of religion. 54 involved and less involved Christians and 40 non-believers were asked to estimate the percentage of Christians in the Netherlands
Sample Size for Estimation of G and Phi Coefficients in Generalizability Theory
Atilgan, Hakan
2013-01-01
Problem Statement: Reliability, which refers to the degree to which measurement results are free from measurement errors, as well as its estimation, is an important issue in psychometrics. Several methods for estimating reliability have been suggested by various theories in the field of psychometrics. One of these theories is the generalizability…
Numerical method for estimating the size of chaotic regions of phase space
International Nuclear Information System (INIS)
Henyey, F.S.; Pomphrey, N.
1987-10-01
A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs
Fukushima, Taku; Hasegawa, Hideyuki; Kanai, Hiroshi
2011-07-01
Red blood cell (RBC) aggregation, as one of the determinants of blood viscosity, plays an important role in blood rheology, including the condition of blood. RBC aggregation is induced by the adhesion of RBCs when the electrostatic repulsion between RBCs weakens owing to increases in protein and saturated fatty acid levels in blood; excessive RBC aggregation leads to various circulatory diseases. This study was conducted to establish a noninvasive quantitative method for the assessment of RBC aggregation. The power spectrum of ultrasonic RF echoes from nonaggregating RBCs, which shows the frequency property of scattering, exhibits Rayleigh behavior. On the other hand, ultrasonic RF echoes from aggregating RBCs contain reflection components, which have no frequency dependence. By dividing the measured power spectrum of echoes from RBCs in the lumen by that of echoes from a posterior wall of the vein in the dorsum manus, the attenuation property of the propagating medium and the frequency responses of the transmitting and receiving transducers are removed from the former spectrum. RBC aggregation was assessed by the diameter of a scatterer, estimated by minimizing the squared difference between the measured normalized power spectrum and the theoretical power spectrum. In basic experiments, spherical scatterers with diameters of 5, 11, 15, and 30 µm were measured. The estimated scatterer diameters were close to the actual diameters. Furthermore, the transient change in scatterer diameter was measured in an in vivo experiment on a 24-year-old healthy male during avascularization using a cuff. The estimated diameters (12-22 µm) of RBCs during avascularization were larger than the diameters (4-8 µm) at rest and after recirculation. These results show the potential of the proposed method for noninvasive assessment of RBC aggregation.
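The diameter-estimation step can be illustrated with a grid search that minimizes the squared spectral difference. The Rayleigh–Gans sphere form factor below is a stand-in assumption, since the paper's exact theoretical spectrum is not given in the abstract; the sound speed of 1540 m/s is likewise a generic soft-tissue value.

```python
import math

def sphere_form_factor(q, radius):
    """Rayleigh-Gans form factor of a homogeneous sphere (stand-in model)."""
    qr = q * radius
    if qr < 1e-9:
        return 1.0
    return 3.0 * (math.sin(qr) - qr * math.cos(qr)) / qr ** 3

def theoretical_spectrum(freqs_hz, diameter_m, c=1540.0):
    """Backscatter power model: Rayleigh f^4 rise shaped by the form factor,
    normalized to unit maximum like the measured normalized spectrum."""
    out = []
    for f in freqs_hz:
        q = 4.0 * math.pi * f / c  # backscatter momentum transfer
        out.append((f ** 4) * sphere_form_factor(q, diameter_m / 2.0) ** 2)
    peak = max(out)
    return [v / peak for v in out]

def estimate_diameter(freqs_hz, measured, candidates_m):
    """Grid search minimizing the squared difference between measured and
    theoretical normalized power spectra."""
    def sse(d):
        model = theoretical_spectrum(freqs_hz, d)
        return sum((m - t) ** 2 for m, t in zip(measured, model))
    return min(candidates_m, key=sse)
```

With a synthetic spectrum generated from the same model, the search recovers the true diameter exactly; with real data, the residual at the minimum reflects model mismatch.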
Directory of Open Access Journals (Sweden)
István Makra
2015-01-01
• The concentration of virus nanoparticles can be calculated based on the two measured scattered light intensities by knowing the refractive index of the dispersing solution, of the polymer and virus nanoparticles as well as their relative sphere equivalent diameters.
Estimation of the target stem-cell population size in chronic myeloid leukemogenesis
International Nuclear Information System (INIS)
Radivoyevitch, T.; Ramsey, M.J.; Tucker, J.D.
1999-01-01
Estimation of the number of hematopoietic stem cells capable of causing chronic myeloid leukemia (CML) is relevant to the development of biologically based risk models of radiation-induced CML. Through a comparison of the age structure of CML incidence data from the Surveillance, Epidemiology, and End Results (SEER) Program and the age structure of chromosomal translocations found in healthy subjects, the number of CML target stem cells is estimated for individuals above 20 years of age. The estimation involves three steps. First, CML incidence among adults is fit to an exponentially increasing function of age. Next, assuming a relatively short waiting time distribution between BCR-ABL induction and the appearance of CML, an exponential age function with rate constants fixed to the values found for CML is fitted to the translocation data. Finally, assuming that translocations are equally likely to occur between any two points in the genome, the parameter estimates found in the first two steps are used to estimate the number of target stem cells for CML. The population-averaged estimates of this number are found to be 1.86 × 10⁸ for men and 1.21 × 10⁸ for women; the 95% confidence intervals of these estimates are (1.34 × 10⁸, 2.50 × 10⁸) and (0.84 × 10⁸, 1.83 × 10⁸), respectively. (orig.)
DEFF Research Database (Denmark)
Mailund, Thomas; Dutheil, Julien; Hobolth, Asger
2011-01-01
event has occurred to split them apart. The size of these segments of constant divergence depends on the recombination rate, but also on the speciation time, the effective population size of the ancestral population, as well as demographic effects and selection. Thus, inference of these parameters may......, and the ancestral effective population size. The model is efficient enough to allow inference on whole-genome data sets. We first investigate the power and consistency of the model with coalescent simulations and then apply it to the whole-genome sequences of the two orangutan sub-species, Bornean (P. p. pygmaeus......) and Sumatran (P. p. abelii) orangutans from the Orangutan Genome Project. We estimate the speciation time between the two sub-species to be thousand years ago and the effective population size of the ancestral orangutan species to be , consistent with recent results based on smaller data sets. We also report...
Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc
2012-11-01
Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
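For reference, the incidence-based first-order jackknife (Jack1) and the abundance-based Chao1 named above have simple closed forms. This is a sketch of the standard formulas, not code from the study itself.

```python
def chao1(abundances):
    """Classic Chao1 richness estimate from per-species abundance counts."""
    s_obs = sum(1 for n in abundances if n > 0)
    f1 = sum(1 for n in abundances if n == 1)  # singletons
    f2 = sum(1 for n in abundances if n == 2)  # doubletons
    if f2 == 0:
        # bias-corrected variant, used when no doubletons are present
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

def jackknife1(incidence, m):
    """First-order jackknife from incidence counts (number of plots in
    which each species occurs) over m plots."""
    s_obs = sum(1 for q in incidence if q > 0)
    q1 = sum(1 for q in incidence if q == 1)  # uniques
    return s_obs + q1 * (m - 1) / float(m)
```

Both estimators add a correction driven by the rarest classes (singletons or uniques), which is why they keep rising until sampling effort has captured most rare species.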
Estimation of optimal size of plots for experiments with radiometer in ...
African Journals Online (AJOL)
Aghomotsegin
2015-07-29
Jul 29, 2015 ... 2362 Afr. J. Biotechnol. damage ... methods for its estimation based on different principles ... and shapes are simulated through the sum of contiguous ... coefficient c and the linear coefficient d, both from the line yr, are fixed.
International Nuclear Information System (INIS)
Bundo-Morita, K.; Gibson, S.; Lenard, J.
1987-01-01
The target sizes associated with fusion and hemolysis carried out by Sendai virus envelope glycoproteins were determined by radiation inactivation analysis. The target size for influenza virus mediated fusion with erythrocyte ghosts at pH 5.0 was also determined for comparison. Sendai-mediated fusion with erythrocyte ghosts at pH 7.0 was likewise inactivated exponentially with increasing radiation dose, yielding a target size of 60 +/- 6 kDa, a value consistent with the molecular weight of a single F-protein molecule. The inactivation curve for Sendai-mediated fusion with cardiolipin liposomes at pH 7.0, however, was more complex. Assuming a multiple target-single hit model, the target consisted of 2-3 units of ca. 60 kDa each. A similar target was seen if the liposome contained 10% gangliosides or if the reaction was measured at pH 5.0, suggesting that fusion occurred by the same mechanism at high and low pH. A target size of 261 +/- 48 kDa was found for Sendai-induced hemolysis, in contrast with influenza, which had a more complex target size for this activity. Sendai virus fusion thus occurs by different mechanisms depending upon the nature of the target membrane, since it is mediated by different functional units. Hemolysis is mediated by a functional unit different from that associated with erythrocyte ghost fusion or with cardiolipin liposome fusion
International Nuclear Information System (INIS)
Yu, Lingda; Wang, Guangfu; Zhang, Renjiang
2013-01-01
Full text: During 2008-2012, size-segregated aerosol samples were collected using an eight-stage cascade impactor at the Beijing Normal University (BNU) site, China. These samples were analyzed using particle induced X-ray emission (PIXE) analysis for concentrations of 21 elements: Mg, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Br, Ba and Pb. The size-resolved data sets were then analyzed using the positive matrix factorization (PMF) technique in order to identify possible sources and estimate their contribution to particulate matter mass. Nine sources were resolved in eight size ranges (0.25-16 μm): secondary sulphur, motor vehicles, coal combustion, oil combustion, road dust, biomass burning, soil dust, diesel vehicles and metal processing. PMF analysis of size-resolved source contributions showed that natural sources, represented by soil dust and road dust, contributed about 57% of the predicted primary particulate matter (PM) mass in the coarse size range (>2 μm). On the other hand, anthropogenic sources such as secondary sulphur, coal and oil combustion, biomass burning and motor vehicles contributed about 73% in the fine size range (<2 μm). The diesel vehicle and secondary sulphur sources contributed the most in the ultra-fine size range (<0.25 μm) and were responsible for about 52% of the primary PM mass. (author)
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing the recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution before initiating such a mark-recapture effort, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral......, the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections...... geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation to be positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator...
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
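The stratified design can be illustrated with proportional allocation of the total sample of 4,000 across the three strata (metropolitan, urban, rural). The stratum population counts below are hypothetical placeholders chosen to sum to the 1,340,362 women quoted in the abstract; only that total comes from the source.

```python
def proportional_allocation(stratum_sizes, total_n):
    """Allocate a total sample proportionally to stratum population sizes,
    using largest-remainder rounding so the allocations sum to total_n."""
    pop = sum(stratum_sizes)
    raw = [total_n * s / pop for s in stratum_sizes]
    alloc = [int(r) for r in raw]  # floor of each exact share
    # hand out the remaining units to the largest fractional remainders
    remainders = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in remainders[: total_n - sum(alloc)]:
        alloc[i] += 1
    return alloc
```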
Determination of subcellular compartment sizes for estimating dose variations in radiotherapy
International Nuclear Information System (INIS)
Poole, Christopher M.; Ahnesjo, Anders; Enger, Shirin A.
2015-01-01
The variation in specific energy absorbed to different cell compartments caused by variations in size and chemical composition is poorly investigated in radiotherapy. The aim of this study was to develop an algorithm to derive cell and cell nuclei size distributions from 2D histology samples, and build 3D cellular geometries to provide Monte Carlo (MC)-based dose calculation engines with a morphologically relevant input geometry. Stained and unstained regions of the histology samples are segmented using a Gaussian mixture model, and individual cell nuclei are identified via thresholding. Delaunay triangulation is applied to determine the distribution of distances between the centroids of nearest neighbour cells. A pouring simulation is used to build a 3D virtual tissue sample, with cell radii randomised according to the cell size distribution determined from the histology samples. A slice with the same thickness as the histology sample is cut through the 3D data and characterised in the same way as the measured histology. The comparison between this virtual slice and the measured histology is used to adjust the initial cell size distribution into the pouring simulation. This iterative approach of a pouring simulation with adjustments guided by comparison is continued until an input cell size distribution is found that yields a distribution in the sliced geometry that agrees with the measured histology samples. The thus obtained morphologically realistic 3D cellular geometry can be used as input to MC-based dose calculation programs for studies of dose response due to variations in morphology and size of tumour/healthy tissue cells/nuclei, and extracellular material. (authors)
A New, Simple Method for Estimating Pleural Effusion Size on CT Scans
Moy, Matthew P.; Berko, Netanel S.; Godelman, Alla; Jain, Vineet R.; Haramati, Linda B.
2013-01-01
Background: There is no standardized system to grade pleural effusion size on CT scans. A validated, systematic grading system would improve communication of findings and may help determine the need for imaging guidance for thoracentesis. Methods: CT scans of 34 patients demonstrating a wide range of pleural effusion sizes were measured with a volume segmentation tool and reviewed for qualitative and simple quantitative features related to size. A classification rule was developed using the features that best predicted size and distinguished among small, moderate, and large effusions. Inter-reader agreement for effusion size was assessed on the CT scans for three groups of physicians (radiology residents, pulmonologists, and cardiothoracic radiologists) before and after implementation of the classification rule. Results: The CT imaging features found to best classify effusions as small, moderate, or large were anteroposterior (AP) quartile and maximum AP depth measured at the midclavicular line. According to the decision rule, first AP-quartile effusions are small, second AP-quartile effusions are moderate, and third or fourth AP-quartile effusions are large. In borderline cases, AP depth is measured with 3-cm and 10-cm thresholds for the upper limit of small and moderate, respectively. Use of the rule improved interobserver agreement from κ = 0.56 to 0.79 for all physicians, 0.59 to 0.73 for radiology residents, 0.54 to 0.76 for pulmonologists, and 0.74 to 0.85 for cardiothoracic radiologists. Conclusions: A simple, two-step decision rule for sizing pleural effusions on CT scans improves interobserver agreement from moderate to substantial levels. PMID:23632863
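The two-step rule lends itself to a small decision function. How exactly a case is judged "borderline" is not fully specified in the abstract, so in this sketch a supplied AP depth simply takes precedence over the quartile; the 3-cm and 10-cm thresholds and the quartile mapping come from the rule as described.

```python
def effusion_size(ap_quartile, ap_depth_cm=None):
    """Two-step CT pleural-effusion sizing rule (sketch).

    ap_quartile: highest anteroposterior quartile reached by the effusion (1-4).
    ap_depth_cm: maximum AP depth at the midclavicular line, used for
    borderline cases; 3 cm is the upper limit for small, 10 cm for moderate.
    """
    if ap_depth_cm is not None:  # borderline case: fall back on measured depth
        if ap_depth_cm <= 3.0:
            return "small"
        if ap_depth_cm <= 10.0:
            return "moderate"
        return "large"
    if ap_quartile <= 1:
        return "small"
    if ap_quartile == 2:
        return "moderate"
    return "large"
```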
Re-estimating sample size in cluster randomized trials with active recruitment within clusters
van Schie, Sander; Moerbeek, Mirjam
2014-01-01
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster
Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)
Steyn, H. S., Jr.; Ellis, S. M.
2009-01-01
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
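The univariate effect size mentioned above has a simple closed form, η² = SS_between / SS_total. A minimal sketch for a one-way layout follows; the multivariate generalizations discussed in the paper are not shown.

```python
def eta_squared(groups):
    """Proportion of total variation in the dependent variable accounted
    for by group membership: eta^2 = SS_between / SS_total."""
    all_values = [x for g in groups for x in g]
    grand = sum(all_values) / len(all_values)
    ss_total = sum((x - grand) ** 2 for x in all_values)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    return ss_between / ss_total
```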
Improving Accuracy of Portion-Size Estimations through a Stimulus Equivalence Paradigm
Hausman, Nicole L.; Borrero, John C.; Fisher, Alyssa; Kahng, SungWoo
2014-01-01
The prevalence of obesity continues to increase in the United States (Gordon-Larsen, The, & Adair, 2010). Obesity can be attributed, in part, to overconsumption of energy-dense foods. Given that overeating plays a role in the development of obesity, interventions that teach individuals to identify and consume appropriate portion sizes are…
Energy Technology Data Exchange (ETDEWEB)
Carvalho, Pedro, E-mail: pedrocarv@coc.ufrj.br [Computational Modelling in Engineering and Geophysics Laboratory (LAMEMO), Department of Civil Engineering, COPPE, Federal University of Rio de Janeiro, Av. Pedro Calmon - Ilha do Fundão, 21941-596 Rio de Janeiro (Brazil); Center for Urban and Regional Systems (CESUR), CERIS, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisbon (Portugal); Marques, Rui Cunha, E-mail: pedro.c.carvalho@tecnico.ulisboa.pt [Center for Urban and Regional Systems (CESUR), CERIS, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisbon (Portugal)
2016-02-15
This study aims to search for economies of size and scope in the Portuguese water sector applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). This study proves the usefulness and advantages of the application of Bayesian statistics for making inference in SFA over traditional SFA which just uses classical statistics. The resulting Bayesian methods allow overcoming some problems that arise in the application of the traditional SFA, such as the bias in small samples and skewness of residuals. In the present case study of the water sector in Portugal, these Bayesian methods provide more plausible and acceptable results. Based on the results obtained we found that there are important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing out to the huge advantages in undertaking mergers by joining the retail and wholesale components and by joining the drinking water and wastewater services. - Highlights: • This study aims to search for economies of size and scope in the water sector; • The usefulness of the application of Bayesian methods is highlighted; • Important economies of output density, economies of size, economies of vertical integration and economies of scope are found.
Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography
Energy Technology Data Exchange (ETDEWEB)
Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)
2017-07-15
The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient dose for different human body sizes because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based and geometry-based methods were compared with those of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
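The two size metrics compared above are commonly defined in the AAPM style: a geometric effective diameter √(AP × LAT), and an attenuation-based water-equivalent diameter that scales a region of interest by its mean CT number relative to water. The sketch below uses those conventional definitions, which may differ in detail from the paper's own implementation.

```python
import math

def effective_diameter_cm(ap_cm, lat_cm):
    """Geometry-based effective diameter: diameter of the circle with the
    same area as an AP x LAT ellipse, i.e. sqrt(AP * LAT)."""
    return math.sqrt(ap_cm * lat_cm)

def water_equivalent_diameter_cm(mean_hu, roi_area_cm2):
    """Attenuation-based water-equivalent diameter in the style of AAPM
    Report 220: scale the ROI area by mean CT number relative to water."""
    a_w = (mean_hu / 1000.0 + 1.0) * roi_area_cm2  # water-equivalent area
    return 2.0 * math.sqrt(a_w / math.pi)
```

For a water-density ROI (mean HU of 0) the two notions coincide with the plain area-equivalent diameter, as the second assertion below illustrates.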
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...
Functional size of vacuolar H+ pumps: Estimates from radiation inactivation studies
International Nuclear Information System (INIS)
Sarafian, V.; Poole, R.J.
1991-01-01
The PPase and the ATPase from red beet (Beta vulgaris) vacuolar membranes were subjected to radiation inactivation by a ⁶⁰Co source in both the native tonoplast and detergent-solubilized states, in order to determine their target molecular sizes. Analysis of the residual phosphohydrolytic and proton transport activities, after exposure to varying doses of radiation, yielded exponential relationships between the activities and radiation doses. The deduced target molecular sizes for PPase activity in native and solubilized membranes were 125 kD and 259 kD respectively, and 327 kD for H⁺ transport. This suggests that the minimum number of 67-kD subunits for PPi hydrolysis is two in the native state and four after Triton X-100 solubilization. At least four subunits would be required for H⁺ translocation. Analysis of the ATPase inactivation patterns revealed target sizes of 384 kD and 495 kD for ATP hydrolysis in native and solubilized tonoplast respectively, and 430 kD for H⁺ transport. These results suggest that the minimum size for hydrolytic or transport functions is relatively constant for the ATPase.
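The target-size analysis described above rests on classical target theory: surviving activity decays as A(D) = A₀·exp(−D/D₃₇), and the dose D₃₇ that leaves 37% activity is inversely proportional to the target mass. A minimal sketch follows, using hypothetical dose-response data and the commonly quoted empirical constant M ≈ 6.4×10¹¹/D₃₇ (D₃₇ in rads), which is approximate and temperature dependent.

```python
import numpy as np

# Hypothetical dose-response data: doses in Mrad, activities relative to control,
# generated from A(D) = exp(-D / D37) with D37 = 2.0 Mrad
doses = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
activity = np.exp(-doses / 2.0)

# Target theory predicts ln(A) is linear in dose; the slope gives -1/D37
slope = np.polyfit(doses, np.log(activity), 1)[0]
d37_mrad = -1.0 / slope

# Empirical target-size relation: molecular mass in daltons from D37 in rads
target_mass_da = 6.4e11 / (d37_mrad * 1e6)
```

With these assumed inputs the recovered target mass is 320 kD, illustrating how a single-exponential inactivation curve maps a radiation dose to a functional molecular size.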
How much data resides in a web collection: how to estimate size of a web collection
Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice
2013-01-01
With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing these data has gained more attention. In the algorithms applied for this purpose, it is knowledge of a data source's size that enables the algorithms to make accurate decisions in
The use of 65Zn for estimating group size of brown hyaenas Hyaena ...
African Journals Online (AJOL)
MacDonald 1983), is influenced by the quality of resources within a territory (Mills 1982). Group size is, however, difficult to determine accurately using routine methods (i.e., direct counts and mark recapture techniques) owing to the shy, elusive and nocturnal habits of brown hyaenas and the physiognomic characteristics of ...
Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill
2017-01-01
Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...
Using LiDAR derivatives to estimate sediment grain size on beaches in False Bay
CSIR Research Space (South Africa)
Burns
2017-05-01
Full Text Available of these parameters (beach slope, grain size, wave energy) can therefore theoretically be used as a proxy to predict the other factors. This information would be of particular interest for coastal protection and disaster risk management. Field assessments and surveys...
Estimates of zooplankton abundance and size distribution with the Optical Plankton Counter (OPC)
DEFF Research Database (Denmark)
Wieland, Kai; Petersen, D.; Schnack, D.
1997-01-01
The capability of the Optical Plankton Counter (OPC) to examine the abundance and size distribution of zooplankton was tested in Storfjorden, Norway, in June 1993. Selected material obtained from net sampling was measured with a laboratory version of the OPC and compared with microscope analysis...
A Comparison of Uniform DIF Effect Size Estimators under the MIMIC and Rasch Models
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D.
2013-01-01
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Estimating average tree crown size using high-resolution airborne data
Czech Academy of Sciences Publication Activity Database
Brovkina, Olga; Latypov, I.; Cienciala, E.
2015-01-01
Roč. 9, may 13 (2015), 096053-1-096053-13 ISSN 1931-3195 R&D Projects: GA MŠk(CZ) LO1415; GA MŠk OC09001 Institutional support: RVO:67179843 Keywords : crown size * airborne data * spruce * granulometry Subject RIV: GK - Forestry Impact factor: 0.937, year: 2015
Estimating sample size for a small-quadrat method of botanical ...
African Journals Online (AJOL)
Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.
The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...
International Nuclear Information System (INIS)
Carvalho, Pedro; Marques, Rui Cunha
2016-01-01
This study aims to search for economies of size and scope in the Portuguese water sector, applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). This study demonstrates the usefulness and advantages of applying Bayesian statistics for making inference in SFA over traditional SFA, which uses only classical statistics. The resulting Bayesian methods make it possible to overcome some problems that arise in the application of traditional SFA, such as bias in small samples and skewness of residuals. In the present case study of the water sector in Portugal, these Bayesian methods provide more plausible and acceptable results. Based on the results obtained, we find that there are important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to the substantial advantages of undertaking mergers by joining the retail and wholesale components and by joining the drinking water and wastewater services. - Highlights: • This study aims to search for economies of size and scope in the water sector; • The usefulness of the application of Bayesian methods is highlighted; • Important economies of output density, economies of size, economies of vertical integration and economies of scope are found.
Gomes, Zahra; Jarvis, Matt J.; Almosallam, Ibrahim A.; Roberts, Stephen J.
2018-03-01
The next generation of large-scale imaging surveys (such as those conducted with the Large Synoptic Survey Telescope and Euclid) will require accurate photometric redshifts in order to optimally extract cosmological information. Gaussian Process for photometric redshift estimation (GPZ) is a promising new method that has been proven to provide efficient, accurate photometric redshift estimations with reliable variance predictions. In this paper, we investigate a number of methods for improving the photometric redshift estimations obtained using GPZ (but which are also applicable to others). We use spectroscopy from the Galaxy and Mass Assembly Data Release 2 with a limiting magnitude of r Program Data Release 1 and find that it produces significant improvements in accuracy, similar to the effect of including additional features.
Estimation of (co)variances for genomic regions of flexible sizes
DEFF Research Database (Denmark)
Sørensen, Lars P; Janss, Luc; Madsen, Per
2012-01-01
BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome) for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related... with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level... was used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise genomic (co)variances...
Preoperative estimation of tibial nail length--because size does matter.
LENUS (Irish Health Repository)
Galbraith, J G
2012-11-01
Selecting the correct tibial nail length is essential for satisfactory outcomes. Nails that are inserted and are found to be of inappropriate length should be removed. Accurate preoperative nail estimation has the potential to reduce intra-operative errors, operative time and radiation exposure.
Estimating the Size and Cost of the STD Prevention Services Safety Net.
Gift, Thomas L; Haderxhanaj, Laura T; Torrone, Elizabeth A; Behl, Ajay S; Romaguera, Raul A; Leichliter, Jami S
2015-01-01
The Patient Protection and Affordable Care Act is expected to reduce the number of uninsured people in the United States during the next eight years, but more than 10% are expected to remain uninsured. Uninsured people are one of the main populations using publicly funded safety net sexually transmitted disease (STD) prevention services. Estimating the proportion of the uninsured population expected to need STD services could help identify the potential demand for safety net STD services and improve program planning. In 2013, an estimated 8.27 million people met the criteria for being in need of STD services. In 2023, 4.70 million uninsured people are expected to meet the criteria for being in need of STD services. As an example, the cost in 2014 U.S. dollars of providing chlamydia screening to these people was an estimated $271.1 million in 2013 and is estimated to be $153.8 million in 2023. A substantial need will continue to exist for safety net STD prevention services in coming years.
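The reported 2013 and 2023 screening costs are consistent with a simple per-capita scaling, which can serve as a sanity check on the figures; the small discrepancy from the reported $153.8 million is attributable to rounding in the abstract's numbers.

```python
# Consistency check on the reported figures (2014 USD)
need_2013 = 8.27e6        # people in need of STD services, 2013
cost_2013 = 271.1e6       # chlamydia screening cost, 2013
need_2023 = 4.70e6        # projected uninsured people in need, 2023

unit_cost = cost_2013 / need_2013          # implied cost per person screened
projected_2023 = unit_cost * need_2023     # scales to roughly the reported figure
```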
Age estimation, growth rate and size at sexual maturity of tigerfish ...
African Journals Online (AJOL)
A total of 206 tigerfish Hydrocynus vittatus, collected by angling in August 2005, 2006 and 2007, was assessed for sexual maturity and relative ages were estimated from 135 of these, using scales and whole and sectioned otoliths. Sectioned otoliths were the most appropriate method for ageing H. vittatus of up to 20 years ...
Directory of Open Access Journals (Sweden)
John A Sved
Full Text Available There is a substantial literature on the use of linkage disequilibrium (LD) to estimate effective population size using unlinked loci. The Ne estimates are extremely sensitive to the sampling process, and there is currently no theory to cope with the possible biases. We derive formulae for the analysis of idealised populations mating at random with multi-allelic (microsatellite) loci. The 'Burrows composite index' is introduced in a novel way with a 'composite haplotype table'. We show that in a sample of diploid size S, the mean value of x² or r² from the composite haplotype table is biased by a factor of 1 − 1/(2S − 1)², rather than the usual factor 1 + 1/(2S − 1) for a conventional haplotype table. But analysis of population data using these formulae leads to Ne estimates that are unrealistically low. We provide theory and simulation to show that this bias towards low Ne estimates is due to null alleles, and introduce a randomised permutation correction to compensate for the bias. We also consider the effect of introducing a within-locus disequilibrium factor to r², and find that this factor leads to a bias in the Ne estimate. However this bias can be overcome using the same randomised permutation correction, to yield an altered r² with lower variance than the original r², and one that is also insensitive to null alleles. The resulting formulae are used to provide Ne estimates on 40 samples of the Queensland fruit fly, Bactrocera tryoni, from populations with widely divergent Ne expectations. Linkage relationships are known for most of the microsatellite loci in this species. We find that there is little difference in the estimated Ne values from using known unlinked loci as compared to using all loci, which is important for conservation studies where linkage relationships are unknown.
Sved, John A; Cameron, Emilie C; Gilchrist, A Stuart
2013-01-01
There is a substantial literature on the use of linkage disequilibrium (LD) to estimate effective population size using unlinked loci. The Ne estimates are extremely sensitive to the sampling process, and there is currently no theory to cope with the possible biases. We derive formulae for the analysis of idealised populations mating at random with multi-allelic (microsatellite) loci. The 'Burrows composite index' is introduced in a novel way with a 'composite haplotype table'. We show that in a sample of diploid size S, the mean value of x² or r² from the composite haplotype table is biased by a factor of 1 − 1/(2S − 1)², rather than the usual factor 1 + 1/(2S − 1) for a conventional haplotype table. But analysis of population data using these formulae leads to Ne estimates that are unrealistically low. We provide theory and simulation to show that this bias towards low Ne estimates is due to null alleles, and introduce a randomised permutation correction to compensate for the bias. We also consider the effect of introducing a within-locus disequilibrium factor to r², and find that this factor leads to a bias in the Ne estimate. However this bias can be overcome using the same randomised permutation correction, to yield an altered r² with lower variance than the original r², and one that is also insensitive to null alleles. The resulting formulae are used to provide Ne estimates on 40 samples of the Queensland fruit fly, Bactrocera tryoni, from populations with widely divergent Ne expectations. Linkage relationships are known for most of the microsatellite loci in this species. We find that there is little difference in the estimated Ne values from using known unlinked loci as compared to using all loci, which is important for conservation studies where linkage relationships are unknown.
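The basic logic of LD-based Ne estimation can be sketched with the common first-order approximation E[r²] ≈ 1/(3·Ne) + 1/S for unlinked loci in a sample of S diploids. Note this is the textbook form, not the corrected composite-haplotype formulae derived in the paper above; the input values are illustrative.

```python
def ne_from_r2(mean_r2, sample_size):
    """First-order LD estimate of effective population size.

    Assumes E[r2] ~ 1/(3*Ne) + 1/S for unlinked loci in a sample of
    S diploid individuals (textbook approximation).
    """
    drift_r2 = mean_r2 - 1.0 / sample_size  # remove the sampling contribution
    if drift_r2 <= 0:
        return float("inf")  # sampling noise swamps the drift signal
    return 1.0 / (3.0 * drift_r2)

# Illustrative numbers: mean r2 of 0.015 across loci, 100 diploids sampled
ne_hat = ne_from_r2(mean_r2=0.015, sample_size=100)
```

The subtraction of 1/S is what makes the method so sensitive to sampling: when the observed mean r² is close to the sampling expectation, the drift signal (and hence Ne) becomes unstable, which is the bias problem the paper's permutation correction addresses.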
Limits to the reliability of size-based fishing status estimation for data-poor stocks
DEFF Research Database (Denmark)
Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders
2015-01-01
For stocks that are considered "data-poor", no knowledge exists about growth, mortality or recruitment. The only available information is from catches. Here we examine the ability to assess the level of exploitation of a data-poor stock based only on information about the size of individuals in catches....... The model is a formulation of the classic Beverton–Holt theory in terms of size, where stock parameters describing growth, natural mortality, recruitment, etc. are determined from life-history invariants. A simulation study was used to compare the reliability of assessments performed under different...... to a considerable improvement in the assessment. Overall, the simulation study demonstrates that it may be possible to classify a data-poor stock as undergoing over- or under-fishing, while the exact status, i.e., how much the fishing mortality is above or below Fmsy, can only be assessed with a substantial...
Directory of Open Access Journals (Sweden)
Stefanović Milena
2013-01-01
Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of the new representative sample on the basis of the variability of chemical content of the initial sample on the example of a whitebark pine population. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivates of the initial sample of 10 elements (trees. It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents a basic set with a probability higher than 95 %. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Projekat Ministarstva nauke Republike Srbije, br. OI-173011, br. TR-37002 i br. III-43007
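The determination of a representative sample size for a given confidence level is commonly done with the normal-approximation formula n = (z·s/E)², where s is the sample standard deviation and E the allowable error of the mean. The paper's exact computation is not given in the abstract, so the following is a generic sketch with illustrative inputs.

```python
import math

def required_sample_size(std_dev, allowable_error, z=1.96):
    """Smallest n for which the ~95% CI half-width of the mean is
    below allowable_error, by the normal approximation n = (z*s/E)^2."""
    return math.ceil((z * std_dev / allowable_error) ** 2)

# Illustrative: pilot sample standard deviation 10, tolerated error 2 units
n_needed = required_sample_size(std_dev=10.0, allowable_error=2.0)
```

In practice one would iterate with Student's t quantile for small n, but the normal z-value already shows how the required n grows with the square of the variability estimated from the initial sample.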
Preliminary Estimation of Local Bypass Flow Gap Sizes for a Prismatic VHTR Core
International Nuclear Information System (INIS)
Kim, Min Hwan; Jo, Chang Keun; Lee, Won Jae
2009-01-01
The Very High Temperature Reactor (VHTR) has been selected for the Nuclear Hydrogen Development and Demonstration (NHDD) project. In the VHTR design, core bypass flow has been one of the key issues for core thermal margins and the target core outlet temperature. The core bypass flow in the prismatic VHTR varies over the core life due to the irradiation shrinkage/swelling and thermal expansion of the graphite blocks, and can be a significant proportion of the total core flow. Thus, accurate prediction of the bypass flow is of major importance in assuring the core thermal margin. To predict the bypass flow, local gap sizes between graphite blocks in the core must first be determined. The objectives of this work are to develop a methodology for determining the gap sizes and to perform a preliminary evaluation for a reference reactor.
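The gap-size dependence on thermal expansion and irradiation shrinkage mentioned above can be illustrated with a one-dimensional linear-strain sketch. All numbers below are hypothetical illustrative values, not the reference reactor's actual geometry or material data.

```python
# Hypothetical block geometry and material response (illustrative values only)
pitch_mm = 360.0          # lattice pitch between block centerlines
width_cold_mm = 359.0     # as-fabricated block width (1 mm nominal gap)
alpha_per_k = 4.5e-6      # assumed graphite thermal expansion coefficient
delta_t_k = 600.0         # temperature rise from cold to operating state
irr_strain = -0.001       # assumed net irradiation-induced shrinkage strain

# Hot, irradiated block width and the resulting inter-block gap
width_hot_mm = width_cold_mm * (1.0 + alpha_per_k * delta_t_k + irr_strain)
gap_mm = pitch_mm - width_hot_mm
```

Even with these rough numbers, the competing signs of thermal expansion (gap closing) and irradiation shrinkage (gap opening) show why the bypass flow path varies with core life.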
How do we estimate the relative size of human figures when seen on a photography
Czech Academy of Sciences Publication Activity Database
Šimeček, Michal; Šikl, Radovan
2011-01-01
Roč. 40, Suppl. (2011), s. 119-119 ISSN 0301-0066. [European Conference on Visual Perception /34./. 28.08.2011-01.09.2011, Toulouse] R&D Projects: GA ČR GPP407/10/P566 Institutional research plan: CEZ:AV0Z70250504 Keywords : visual space perception * size constancy * subjective horizon Subject RIV: AN - Psychology http://www.perceptionweb.com/abstract.cgi?id=v110465
Recursive estimation of the claim rates and sizes in an insurance model
Directory of Open Access Journals (Sweden)
Lakhdar Aggoun
2004-01-01
Full Text Available It is a common fact that for most classes of general insurance, many possible sources of heterogeneity of risk exist. Premium rates based on information from a heterogeneous portfolio might be quite inadequate. One way of reducing this danger is by grouping policies according to the different levels of the various risk factors involved. Using measure change techniques, we derive recursive filters and predictors for the claim rates and claim sizes for the different groups.
Czech Academy of Sciences Publication Activity Database
Řehoř, Ivan; Cígler, Petr
2014-01-01
Roč. 46, Jun (2014), s. 21-24 ISSN 0925-9635 R&D Projects: GA ČR GAP108/12/0640; GA MŠk(CZ) LH11027 Grant - others:OPPK(CZ) CZ.2.16/3.1.00/24016 Institutional support: RVO:61388963 Keywords : TEM * nanoparticles * nanodiamonds * size distribution * high-pressure high-temperature * image analysis Subject RIV: CC - Organic Chemistry Impact factor: 1.919, year: 2014
Kinematic vorticity number – a tool for estimating vortex sizes and circulations
Directory of Open Access Journals (Sweden)
Lisa Schielicke
2016-02-01
Full Text Available The influence of extratropical vortices on a global scale is mainly characterised by their size and by the magnitude of their circulation. However, the determination of these properties is still a great challenge, since a vortex has no clear delimitation but is part of the flow field itself. In this work, we introduce a kinematic vortex-size determination method, based on the kinematic vorticity number Wk, to atmospheric flows. Wk relates the local rate-of-rotation to the local rate-of-deformation at every point in the field, and a vortex core is identified as a simply connected region where the rotation prevails over the deformation. Additionally, considering the sign of vorticity in the extended Wk-method allows us to identify highs and lows in different vertical layers of the atmosphere and to study vertical as well as horizontal vortex interactions. We test the Wk-method in idealised 2-D flow situations (superposition of two lows; low and jet) and in a real 3-D flow situation (a winter storm affecting Europe), and compare the results with traditional methods based on the pressure and the vorticity fields. In comparison to these traditional methods, the Wk-method is able to extract vortex core sizes even in shear-dominated regions that occur frequently in the upper troposphere. Furthermore, statistics of the size and circulation distributions of cyclones are given. Since the Wk-method identifies vortex cores, the identified radii are subsynoptic, with a broad peak around 300–500 km at the 1000 hPa level. However, the total circulating area is not restricted to the core. In general, circulations are of the order of 10⁷ m²/s, with only a few cyclones of the order of 10⁸ m²/s.
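For 2-D incompressible flow, the kinematic vorticity number described above reduces to Wk = |ω| / √((uₓ − v_y)² + (u_y + vₓ)²), the ratio of vorticity to total deformation; a vortex core is where Wk > 1. The sketch below evaluates this on a Lamb-Oseen vortex as a hedged illustration of the principle, not a reproduction of the paper's atmospheric implementation.

```python
import numpy as np

# Lamb-Oseen vortex sampled on a 2-D grid (core radius 1, Gamma/(2*pi) = 1)
xs = np.linspace(-5.0, 5.0, 200)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
R = np.hypot(X, Y) + 1e-12
v_theta = (1.0 / R) * (1.0 - np.exp(-R**2))   # azimuthal speed
u = -v_theta * Y / R
v = v_theta * X / R

# Velocity gradients; np.gradient returns the axis-0 (y) derivative first
u_y, u_x = np.gradient(u, dx)
v_y, v_x = np.gradient(v, dx)

# 2-D kinematic vorticity number: rotation rate over deformation rate
vorticity = v_x - u_y
deformation = np.sqrt((u_x - v_y) ** 2 + (u_y + v_x) ** 2)
wk = np.abs(vorticity) / (deformation + 1e-12)

core = wk[100, 100]    # near the vortex centre: rotation dominates (Wk > 1)
far = wk[0, 0]         # far field ~ point vortex: deformation dominates (Wk < 1)
```

This captures the paper's key advantage: far from the core the flow looks like a point vortex (vorticity near zero, deformation finite), so the Wk criterion cleanly separates the core from the shear-dominated surroundings.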
Optimizing cone beam CT scatter estimation in egs_cbct for a clinical and virtual chest phantom
DEFF Research Database (Denmark)
Slot Thing, Rune; Mainegra-Hing, Ernesto
2014-01-01
improving techniques (EITs) implemented in egs_cbct were varied. Simulation efficiencies were compared to analog simulations performed without using any EITs. Resulting scatter distributions were confirmed unbiased against the analog simulations. RESULTS: The optimal EIT parameter selection depends...... reduction techniques with a built-in denoising algorithm, efficiency improvements of 4 orders of magnitude were achieved. CONCLUSIONS: Using the built-in EITs in egs_cbct can improve scatter calculation efficiencies by more than 4 orders of magnitude. To achieve this, the user must optimize the input...
Accuracy of repeated kidney size estimation by ultrasonography and urography in children
International Nuclear Information System (INIS)
Hederstroem, E.; Forsberg, L.
1985-01-01
The accuracy of repeated sonographic and urographic kidney length measurements in kidney size evaluation was investigated in 80 children 0 to 14 years of age, mean age 4.5 years. At sonography, 250 kidney lengths were compared. A difference of 0 to 1.0 cm in repeated length measurement was considered to be good accuracy, and 94 per cent of right and 96 per cent of left kidney lengths were found within this interval - a better result than for urography, with 76 per cent of repeated right and 79 per cent of left kidney lengths within the same interval (94 lengths). Both methods display a variation of kidney lengths which may lead to under- and overestimation of kidney size and growth. The investigation thus indicates good accuracy for repeated sonographic kidney size assessment, which should be repeated often enough to establish a growth chart displaying the trend rather than relying too much on single measurements. Sonography can be highly recommended as a convenient and harmless alternative to urography. (orig.)
[Estimates of the size of inhibitory areas in crowding effects in periphery].
Bondarko, V M; Danilova, M V; Solnushkin, S D; Chikhman, V N
2014-01-01
In psychophysical experiments we studied how surround influences recognition of test objects. The tests were low-contrast Landolt rings of sizes 1.1, 1.5 and 2.3 deg. Their centers were located at 13.2 deg from the fixation point. The additional objects were similar Landolt rings or rings without gaps. The distance between the centers of the test and the additional objects varied from 2.2 to 13.2 deg. In one experiment, the task of the observer was to identify both the test objects and the surrounding objects. In the second experiment the stimulus layout was the same, but identification of only the test stimulus was required. In both experiments, deterioration of performance was found at all distances between the test objects and the surround, but the deterioration was more significant when the observer carried out the dual task. The data showed that the size of the inhibitory areas in our case does not comply with the Bouma law, which states that the size of the interaction area is equal to half of the eccentricity at which the test is presented. Further deterioration of performance in the dual task reveals the contribution of attention to peripheral crowding effects.
The Importance of Particle Size in Estimating Downwind Contamination from an RDD
International Nuclear Information System (INIS)
Bauer, T.
2007-01-01
There is general agreement that realistic quantities of radiological material released from a radiological dispersal device (RDD) will not travel more than a few hundred meters at toxic levels. Of greater concern in the case of such an incident is the size of the area contaminated with radiological particles. Remediation of contaminated areas will require either removal of the deposited particles or disposal of the contaminated materials. Contours of expected contaminated areas have been presented which extend more than 10 miles downwind of the release location. It would be impossible to remediate such a large area, so the likely response will be to permanently seal most of it off from further use. Not only are these radiation contours below levels of concern, but the particle size assumed is unreasonably low, especially when the density of radioactive materials is considered. Using appropriate RDD characterization and a realistic range of particle sizes, this presentation will show that expected contamination areas should be small enough to make remediation feasible. (author)
Double hard scattering without double counting
Energy Technology Data Exchange (ETDEWEB)
Diehl, Markus [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gaunt, Jonathan R. [VU Univ. Amsterdam (Netherlands). NIKHEF Theory Group; Schoenwald, Kay [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2017-02-15
Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.
Double hard scattering without double counting
International Nuclear Information System (INIS)
Diehl, Markus; Gaunt, Jonathan R.
2017-02-01
Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as a comprehensive guide for performing meta-analysis in different
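The sample-size-aware estimator for the first scenario (minimum, median, maximum and n reported) can be sketched as follows: the mean is estimated as (a + 2m + b)/4, and the standard deviation as the range divided by its expected value in SD units, 2·Φ⁻¹((n − 0.375)/(n + 0.25)). This follows the published Wan et al. formulas for approximately normal data; the simulation parameters below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def wan_estimates(a, m, b, n):
    """Mean and SD from min (a), median (m), max (b) and sample size n,
    assuming approximately normal data (Wan et al. 2014, scenario C1)."""
    mean = (a + 2.0 * m + b) / 4.0
    xi = 2.0 * norm.ppf((n - 0.375) / (n + 0.25))  # expected range in SD units
    sd = (b - a) / xi
    return mean, sd

# Simulation check: average the estimates over many N(50, 10) samples of n = 100
rng = np.random.default_rng(0)
means, sds = [], []
for _ in range(1000):
    x = rng.normal(50.0, 10.0, size=100)
    mu, sigma = wan_estimates(x.min(), np.median(x), x.max(), 100)
    means.append(mu)
    sds.append(sigma)
mean_avg, sd_avg = float(np.mean(means)), float(np.mean(sds))
```

The division by a sample-size-dependent factor is the key improvement over range/4-style rules: the expected range of a normal sample grows with n, so a fixed divisor is biased for both small and large trials.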
Estimation of Body Weight from Body Size Measurements and Body Condition Scores in Dairy Cows
DEFF Research Database (Denmark)
Enevoldsen, Carsten; Kristensen, T.
1997-01-01
The objective of this study was to evaluate the use of hip height and width, body condition score, and relevant demographic information to predict body weight (BW) of dairy cows. Seven regression models were developed from data from 972 observations of 554 cows. Parity, hip height, hip width, and body condition score were consistently associated with BW. The coefficients of multiple determination varied from 80 to 89%. The number of significant terms and the parameter estimates of the models differed markedly among groups of cows. Apparently, these differences were due to breed and feeding regimen. Results from this study indicate that a reliable model for estimating BW of very different dairy cows maintained in a wide range of environments can be developed using body condition score, demographic information, and measurements of hip height and hip width. However, for management purposes...
Non-working nurses in Japan: estimated size and its age-cohort characteristics.
Nakata, Yoshifumi; Miyazaki, Satoru
2008-12-01
This paper aims to forecast the total number of non-working nursing staff in Japan, both overall and by age group, for assistant nurses and fully qualified nurses, and examines the policy implications of those forecasts. Although a figure of around 550,000 non-working nursing staff has been announced, the actual number is unclear enough that policies to meet nurse workforce demand and supply in Japan could be misdirected. Estimates were made by integrating various data on the quantitative characteristics of non-working nursing staff. Considering the length and type of education or training for the four nursing positions concerned (registered nurses, assistant nurses, public health nurses and midwives), we first estimated the number of students who completed a full course. Then, multiplying by the ratios for gender and age classifications at the time of entry into courses, the number of those who obtained licenses was estimated. The number of non-working nurses was estimated at 100,000 higher than the government's 2005 figure. Looking at age groups, it is also possible to see a strong reflection of an employment pattern that follows the life cycle of female workers. Further analysis of life-cycle and cohort effects confirmed the effect of life cycles even after subtracting the differences between the working behaviours of different generations. Our findings strongly suggest the urgent need for policies that create workplace conditions in which a balance between work and family is achievable. Moreover, to empower clinical activity, we also believe there is an urgent need to re-examine the overall career vision for assistant nurses, including compensation. Relevance to clinical practice: our findings strongly suggest that consideration for the work-life balance of nursing staff, particularly female staff, is all the more important to providing stable, quality care.
Estimating the size of the homeless adolescent population across seven cities in Cambodia
Directory of Open Access Journals (Sweden)
Lindsay Stark
2017-01-01
Abstract Background The Government of Cambodia has committed to supporting family care for vulnerable children, including homeless populations. Collecting baseline data on the numbers and characteristics of homeless adolescents was prioritized to illuminate the scope of the issue, mobilize resources and direct the response. Methods Administrative zones across seven cities were purposively selected to cover the main urban areas known to have homeless populations in Cambodia. A complete enumeration of homeless individuals between the ages of 13 and 17 was attempted in the selected areas. In addition, a second independent count was conducted to enable a statistical estimation of completeness based on the overlap between counts, a technique known as capture-recapture. Adolescents were also interviewed about their schooling, health and other circumstances. Results After adjustment by the capture-recapture corrective multipliers (range: 3.53–27.08), the study yielded an estimate of 2,697 homeless adolescents aged 13–17 across the seven cities. Significantly more homeless boys than girls were counted, especially at older ages. Conclusions To the authors' knowledge, this is the first time capture-recapture methods have been applied to a homeless population estimation of this scale in a resource-limited setting. Findings suggest the number of homeless adolescents in Cambodia is much greater than single count data alone would indicate, and that this population faces many hardships.
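The overlap-based adjustment has a simple closed form in the two-count case: the Lincoln-Petersen estimator, here in Chapman's bias-corrected variant. The counts below are invented for illustration, and the study's actual adjustment procedure (expert iteration over zone-level counts) is richer than this sketch.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen capture-recapture estimator.

    n1: individuals found in the first count
    n2: individuals found in the second, independent count
    m:  individuals seen in both counts (the overlap)
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts for one administrative zone (not the study's data):
n1, n2, m = 100, 80, 20
n_hat = chapman_estimate(n1, n2, m)
multiplier = n_hat / n1   # corrective multiplier applied to the raw count
print(f"estimated population: {n_hat:.1f}, multiplier: {multiplier:.2f}")
```

A small overlap relative to the two counts signals many missed individuals, which is exactly what drives the large corrective multipliers (up to 27) reported above.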
Thomas, Kevin V; Amador, Arturo; Baz-Lomba, Jose Antonio; Reid, Malcolm
2017-10-03
Wastewater-based epidemiology is an established approach for quantifying community drug use and has recently been applied to estimate population exposure to contaminants such as pesticides and phthalate plasticizers. A major source of uncertainty in the population weighted biomarker loads generated is related to estimating the number of people present in a sewer catchment at the time of sample collection. Here, the population quantified from mobile device-based population activity patterns was used to provide dynamic population normalized loads of illicit drugs and pharmaceuticals during a known period of high net fluctuation in the catchment population. Mobile device-based population activity patterns have for the first time quantified the high degree of intraday, intraweek and intramonth variability within a specific sewer catchment. Dynamic population normalization showed that per capita pharmaceutical use remained unchanged during the period when static normalization would have indicated an average reduction of up to 31%. Per capita illicit drug use increased significantly during the monitoring period, an observation that was only possible to measure using dynamic population normalization. The study quantitatively confirms previous assessments that population estimates can account for uncertainties of up to 55% in static normalized data. Mobile device-based population activity patterns allow for dynamic normalization that yields much improved temporal and spatial trend analysis.
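The normalization logic can be sketched as follows. All loads and population figures below are invented; the only point illustrated is the choice of divisor, a fixed census population (static) versus a time-varying mobile-device count (dynamic).

```python
# Per capita biomarker loads under static vs dynamic population normalization.
# Daily influent loads (mg/day) and daily population counts are hypothetical.
daily_load = [500.0, 480.0, 250.0, 240.0]             # biomarker mass in influent
daily_population = [100_000, 96_000, 50_000, 48_000]  # e.g. a holiday exodus
static_population = 100_000                           # census-based figure

# Loads per 1000 inhabitants per day:
static_norm = [load / static_population * 1000 for load in daily_load]
dynamic_norm = [load / pop * 1000 for load, pop in zip(daily_load, daily_population)]

print(static_norm)   # static divisor makes use appear to halve mid-series
print(dynamic_norm)  # dynamic divisor shows per capita use is actually stable
```

This mirrors the paper's core finding: when half the catchment population leaves, static normalization misreads a stable per capita use as a large reduction.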
Ndayongeje, Joel; Msami, Amani; Laurent, Yovin Ivo; Mwankemwa, Syangu; Makumbuli, Moza; Ngonyani, Alois M; Tiberio, Jenny; Welty, Susie; Said, Christen; Morris, Meghan D; McFarland, Willi
2018-02-12
We mapped hot spots and estimated the numbers of people who use drugs (PWUD) and who inject drugs (PWID) in 12 regions of Tanzania. Primary (ie, current and past PWUD) and secondary (eg, police, service providers) key informants identified potential hot spots, which we visited to verify and count the number of PWUD and PWID present. Adjustments to counts and extrapolation to regional estimates were done by local experts through iterative rounds of discussion. Drug use, specifically cocaine and heroin, occurred in all regions. Tanga had the largest numbers of PWUD and PWID (5190 and 540, respectively), followed by Mwanza (3300 and 300, respectively). Findings highlight the need to strengthen awareness of drug use and develop prevention and harm reduction programs with broader reach in Tanzania. This exercise provides a foundation for understanding the extent and locations of drug use, a baseline for future size estimations, and a sampling frame for future research.
Remote estimation of crown size and tree density in snowy areas
Kishi, R.; Ito, A.; Kamada, K.; Fukita, T.; Lahrita, L.; Kawase, Y.; Murahashi, K.; Kawamata, H.; Naruse, N.; Takahashi, Y.
2017-12-01
Precise estimation of tree density in forests helps quantify the amount of carbon dioxide fixed by plants. Aerial photographs have been used to count trees; aircraft campaigns, however, are expensive (roughly $50,000 per flight), and the area a drone can survey is limited. In addition, previous studies estimating tree density from aerial photographs were performed in summer, where overlapping leaves introduced estimation errors of about 15%. Here, we propose a method to accurately estimate the number of forest trees from satellite images of snow-covered deciduous forest, using the ratio of branches to snow. The advantages of our method are as follows: 1) snow-covered areas are easily excluded owing to their high reflectance, and 2) branches overlap far less than leaves. Although the method applies only to regions with snowfall, the snow-covered area of the world exceeds 12,800,000 km2, so our approach should play an important role in discussions of global warming. As a test area, we chose the forest near Mt. Amano in Iwate prefecture, Japan. First, we defined a new index, (Band1 − Band5)/(Band1 + Band5), suited to distinguishing snow from tree trunks using the corresponding spectral reflectance data. Next, we tabulated the index values obtained by varying the snow/trunk ratio in 1% increments. From the satellite image analysis at four points, the ratio of snow to tree trunk was I: 61%, II: 65%, III: 66% and IV: 65%. To check the estimation, we used aerial photographs from Google Earth, which gave I: 42.05%, II: 48.89%, III: 50.64% and IV: 49.05%, respectively. The two sets of values are correlated but differ; we discuss this point in detail, focusing on the effect of shadows.
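The index is a normalized difference of two bands, in the same family as NDSI-style snow indices. A minimal sketch on a toy scene; the reflectance values and the 0.6 threshold below are invented, not the paper's calibration.

```python
import numpy as np

# Normalized-difference index (Band1 - Band5)/(Band1 + Band5), applied to a
# toy 2x2 reflectance scene; all values are illustrative.
band1 = np.array([[0.9, 0.9], [0.3, 0.2]])   # visible: snow is very bright
band5 = np.array([[0.1, 0.1], [0.2, 0.2]])   # SWIR: snow absorbs strongly

index = (band1 - band5) / (band1 + band5)
snow_mask = index > 0.6           # threshold is an assumption, not the paper's
snow_fraction = snow_mask.mean()  # fraction of pixels classified as snow
print(index)
print(f"snow fraction: {snow_fraction:.0%}")
```

High index values flag snow pixels; the remaining pixels are attributed to branches and trunks, giving the branch-to-snow ratio the tree-count estimate rests on.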
Energy Technology Data Exchange (ETDEWEB)
Schutte, R.; Thompson, G.R.; Donkor, K.K. [New Caledonia College, Prince George, BC (Canada). Dept. of Chemistry; Duke, M.J.M. [Alberta Univ., Edmonton, AB (Canada). SLOWPOKE Nuclear Reactor Facility; Cowles, R. [Syncrude Canada, Edmonton, AB (Canada); Li, X.P.; Kratochvil, B. [Alberta Univ., Edmonton, AB (Canada). Dept. of Chemistry
1999-10-01
Knowledge concerning the particle size distribution (PSD) of oil sands is necessary for optimal extraction of bitumen from the sand, and it indicates ore quality, gives a measure of process performance during bitumen extraction, and yields information useful for tailings management. Oil sands with mainly coarse particulates are usually bitumen rich and easy to process in the conventional hot water extraction process. These ores do not require the addition of sodium hydroxide as a process aid, and tailings volumes are minimal in contrast to high fines oil sands. Compared to the methods currently in use for determining the PSD in the oil sand industry, a method is described that is rapid, simple to carry out, and does not involve the use of organic solvents with attendant disposal problems. The principle behind the method is the development of a set of correlations by applying regression analysis to a large set of PSD and elemental analysis data. Predicted PSDs compare favorably with results obtained by existing methods. Each of the three PSD methods currently in use could be simulated by the INAA method. The INAA-based model that predicts hydrometer equivalent data was only applicable above certain lower limits for the amount of the fine size fractions present because of the limited sensitivity of the hydrometer method for PSD determination of fine fractions. For all six particle sizes studied, the INAA model had lower overall uncertainty than the corresponding Microtrac and Coulter instrument methods; the instrument repeatability of the INAA fell between those of Microtrac and Coulter. For Athabasca oil sands, the INAA-based method for PSD determination at and below 44 microm afforded results comparable to current Microtrac and Coulter methods. 13 refs., 9 tabs., 2 figs.
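The correlation approach can be sketched as an ordinary least-squares calibration of a size fraction against elemental concentrations. The elements, concentrations, and fines percentages below are invented placeholders, not the paper's INAA calibration data or regression model.

```python
import numpy as np

# Least-squares calibration of a size fraction against elemental
# concentrations, in the spirit of the correlation approach described.
# Rows: calibration samples; columns: e.g. Al, K concentrations from INAA.
elements = np.array([[6.1, 1.9], [8.4, 2.6], [4.0, 1.2], [9.8, 3.1]])
fines_pct = np.array([12.0, 17.5, 7.4, 20.9])   # wt% below a size cutoff

# Fit fines% ~ b0 + b1*Al + b2*K
X = np.column_stack([np.ones(len(elements)), elements])
coef, *_ = np.linalg.lstsq(X, fines_pct, rcond=None)

new_sample = np.array([1.0, 7.0, 2.2])   # intercept term + concentrations
pred = new_sample @ coef
print(f"predicted fines: {pred:.1f} wt%")
```

The lower sensitivity limits noted in the abstract correspond to the region where such a calibration extrapolates poorly: predictions are only trustworthy within the concentration range spanned by the calibration set.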
Carvalho, Pedro; Marques, Rui Cunha
2016-02-15
This study searches for economies of size and scope in the Portuguese water sector, applying both Bayesian and classical statistics to inference in stochastic frontier analysis (SFA). The study demonstrates the usefulness and advantages of Bayesian inference in SFA over traditional SFA, which relies solely on classical statistics. The Bayesian methods overcome some problems that arise in traditional SFA, such as bias in small samples and skewness of residuals, and in the present case study of the Portuguese water sector they provide more plausible and acceptable results. Based on the results obtained, we find important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to substantial advantages in undertaking mergers that join the retail and wholesale components and the drinking water and wastewater services. Copyright © 2015 Elsevier B.V. All rights reserved.
Estimating required information size by quantifying diversity in random-effects model meta-analyses
DEFF Research Database (Denmark)
Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper
2009-01-01
an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and the sampling error, and is estimated and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 >= I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 for considering model variation in any random-effects model meta-analysis.
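In the standard notation of random-effects meta-analysis, the quantities described can be written as follows (the symbols V, Q and k are assumptions of this sketch, since the abstract is truncated):

```latex
% Diversity: relative variance reduction when switching from the
% random-effects (RE) to the fixed-effect (FE) pooled estimate.
D^2 \;=\; \frac{V_{RE} - V_{FE}}{V_{RE}} \;=\; 1 - \frac{V_{FE}}{V_{RE}}

% The adjusting factor for the required information size (RIS) then
% scales the fixed-effect information size IS_{FE}:
A_D \;=\; \frac{1}{1 - D^2}, \qquad RIS \;=\; A_D \cdot IS_{FE}

% Compare inconsistency (Q is Cochran's statistic over k trials):
I^2 \;=\; \frac{Q - (k-1)}{Q}, \qquad D^2 \;\ge\; I^2
```

The factor A_D grows without bound as D2 approaches 1, which is why high between-trial variability inflates the required information size so sharply.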
Exact, time-independent estimation of clone size distributions in normal and mutated cells.
Roshan, A; Jones, P H; Greenman, C D
2014-10-06
Biological tools such as genetic lineage tracing, three-dimensional confocal microscopy and next-generation DNA sequencing are providing new ways to quantify the distribution of clones of normal and mutated cells. Understanding population-wide clone size distributions in vivo is complicated by multiple cell types within observed tissues, and overlapping birth and death processes. This has led to the increased need for mathematically informed models to understand their biological significance. Standard approaches usually require knowledge of clonal age. We show that modelling on clone size independent of time is an alternative method that offers certain analytical advantages; it can help parametrize these models, and obtain distributions for counts of mutated or proliferating cells, for example. When applied to a general birth-death process common in epithelial progenitors, this takes the form of a gambler's ruin problem, the solution of which relates to counting Motzkin lattice paths. Applying this approach to mutational processes, alternative, exact, formulations of classic Luria-Delbrück-type problems emerge. This approach can be extended beyond neutral models of mutant clonal evolution. Applications of these approaches are twofold. First, we resolve the probability of progenitor cells generating proliferating or differentiating progeny in clonal lineage tracing experiments in vivo or cell culture assays where clone age is not known. Second, we model mutation frequency distributions that deep sequencing of subclonal samples produce.
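The gambler's-ruin solution is said to relate to counting Motzkin lattice paths. A minimal sketch of that combinatorial ingredient, via the standard Motzkin recurrence; this illustrates only the path-counting component, not the authors' full birth-death model.

```python
def motzkin(n_max):
    """Motzkin numbers M_0..M_{n_max} via the standard recurrence
    (n + 3) * M_{n+1} = (2n + 3) * M_n + 3n * M_{n-1}.

    M_n counts lattice paths from (0,0) to (n,0) using steps
    up (+1), down (-1) and flat (0) that never dip below the axis --
    the natural encoding of birth, death and no-event in a clone history.
    """
    m = [1, 1]
    for n in range(1, n_max):
        m.append(((2 * n + 3) * m[n] + 3 * n * m[n - 1]) // (n + 3))
    return m[: n_max + 1]

print(motzkin(5))  # [1, 1, 2, 4, 9, 21]
```

The integer division is exact at every step, so arbitrarily large terms can be computed without floating-point error.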
Mölbert, Simone Claire; Klein, Lukas; Thaler, Anne; Mohler, Betty J; Brozzo, Chiara; Martus, Peter; Karnath, Hans-Otto; Zipfel, Stephan; Giel, Katrin Elisabeth
2017-11-01
A distorted representation of one's own body is a diagnostic criterion and core psychopathology of both anorexia nervosa (AN) and bulimia nervosa (BN). Despite recent technical advances in research, it is still unknown whether this body image disturbance is characterized by body dissatisfaction and a low ideal weight and/or includes a distorted perception or processing of body size. In this article, we provide an update and meta-analysis of 42 articles summarizing measures and results for body size estimation (BSE) from 926 individuals with AN, 536 individuals with BN and 1920 controls. We replicate findings that individuals with AN and BN overestimate their body size as compared to controls (ES=0.63). Our meta-regression shows that metric methods (BSE by direct or indirect spatial measures) yield larger effect sizes than depictive methods (BSE by evaluating distorted pictures), and that effect sizes are larger for patients with BN than for patients with AN. To interpret these results, we suggest a revised theoretical framework for BSE that accounts for differences between depictive and metric BSE methods regarding the underlying body representations (conceptual vs. perceptual, implicit vs. explicit). We also discuss clinical implications and argue for the importance of multimethod approaches to investigate body image disturbance. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chlamydia sequelae cost estimates used in current economic evaluations: does one-size-fit-all?
Ong, Koh Jun; Soldan, Kate; Jit, Mark; Dunbar, J Kevin; Woodhall, Sarah C
2017-02-01
Current evidence suggests that chlamydia screening programmes can be cost-effective, conditional on assumptions within mathematical models. We explored differences in cost estimates used in published economic evaluations of chlamydia screening from seven countries (four papers each from UK and the Netherlands, two each from Sweden and Australia, and one each from Ireland, Canada and Denmark). From these studies, we extracted management cost estimates for seven major chlamydia sequelae. In order to compare the influence of different sequelae considered in each paper and their corresponding management costs on the total cost per case of untreated chlamydia, we applied reported unit sequelae management costs considered in each paper to a set of untreated infection to sequela progression probabilities. All costs were adjusted to 2013/2014 Great British Pound (GBP) values. Sequelae management costs ranged from £171 to £3635 (pelvic inflammatory disease); £953 to £3615 (ectopic pregnancy); £546 to £6752 (tubal factor infertility); £159 to £3341 (chronic pelvic pain); £22 to £1008 (epididymitis); £11 to £1459 (neonatal conjunctivitis) and £433 to £3992 (neonatal pneumonia). Total cost of sequelae per case of untreated chlamydia ranged from £37 to £412. There was substantial variation in cost per case of chlamydia sequelae used in published chlamydia screening economic evaluations, which likely arose from different assumptions about disease management pathways and the country perspectives taken. In light of this, when interpreting these studies, the reader should be satisfied that the cost estimates used sufficiently reflect the perspective taken and current disease management for their respective context. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
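The "total cost per case of untreated chlamydia" combines each sequela's management cost with a progression probability and sums. A sketch with placeholder numbers; the probabilities and costs below are invented, NOT values taken from the reviewed studies.

```python
# Expected sequelae cost per untreated chlamydia case: sum over sequelae of
# progression probability x unit management cost. All figures illustrative.
sequelae = {
    # name: (progression probability, unit management cost in GBP)
    "pelvic inflammatory disease": (0.16, 1000.0),
    "ectopic pregnancy":           (0.01, 2000.0),
    "tubal factor infertility":    (0.02, 3000.0),
}

expected_cost = sum(p * c for p, c in sequelae.values())
print(f"expected cost per untreated case: £{expected_cost:.2f}")
```

Because the total is linear in the unit costs, the order-of-magnitude spreads in sequelae costs reported above translate directly into the £37 to £412 spread in cost per untreated case.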
Pai, Shantaram S.; Hoge, Peter A.; Patel, B. M.; Nagpal, Vinod K.
2009-01-01
The primary structure of the Ares I-X Upper Stage Simulator (USS) launch vehicle is constructed of welded mild steel plates, and there is some concern over the possibility of structural failure due to welding flaws. It was considered critical to quantify the impact of uncertainties in residual stress, material porosity, applied loads, and material and crack growth properties on the reliability of the welds during pre-flight and flight. A criterion was established to estimate the reliability of the welds: any existing crack at the weld toe must be smaller than the maximum allowable flaw size. A spectrum of maximum allowable flaw sizes was developed for different combinations of the variables listed above by performing probabilistic crack growth analyses using the ANSYS finite element analysis code in conjunction with the NASGRO crack growth code. Two alternative methods were used to account for residual stresses: (1) the mean residual stress was assumed to be 41 ksi and a limit was set on the net section flow stress during crack propagation; the critical flaw size was determined by parametrically increasing the initial flaw size and detecting whether this limit was exceeded during four complete flight cycles; and (2) the mean residual stress was assumed to be 49.6 ksi (the parent material's yield strength) and the net section flow stress limit was ignored; the critical flaw size was determined by parametrically increasing the initial flaw size and detecting whether catastrophic crack growth occurred during four complete flight cycles. Both surface-crack and through-crack models were utilized to characterize cracks at the weld toe.
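The parametric search itself is generic and can be sketched independently of the physics. The crack-growth function, critical size and step size below are invented stand-ins; the actual analysis grows the crack with NASGRO under the stress states described above.

```python
# Parametric search for the largest initial flaw that survives a set number
# of flight cycles, with a placeholder growth model standing in for the
# NASGRO analysis (growth factor and limits here are invented).
def survives(a0, cycles=4, a_crit=0.5, growth=1.8):
    """Grow a crack of initial size a0 (inches) through 'cycles' flights;
    fail if it ever reaches the critical size a_crit."""
    a = a0
    for _ in range(cycles):
        a *= growth                 # stand-in for one flight's crack growth
        if a >= a_crit:
            return False
    return True

# Increase the initial flaw size in small increments until failure appears.
step, a0 = 0.001, 0.001
while survives(a0 + step):
    a0 += step
print(f"maximum allowable initial flaw: {a0:.3f} in")
```

Repeating this search over sampled residual stresses, loads and material properties is what produces the "spectrum" of maximum allowable flaw sizes in the probabilistic analysis.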
Trask, Amanda E; Bignal, Eric M; McCracken, Davy I; Piertney, Stuart B; Reid, Jane M
2017-09-01
A population's effective size (N e ) is a key parameter that shapes rates of inbreeding and loss of genetic diversity, thereby influencing evolutionary processes and population viability. However, estimating N e , and identifying key demographic mechanisms that underlie the N e to census population size (N) ratio, remains challenging, especially for small populations with overlapping generations and substantial environmental and demographic stochasticity and hence dynamic age-structure. A sophisticated demographic method of estimating N e /N, which uses Fisher's reproductive value to account for dynamic age-structure, has been formulated. However, this method requires detailed individual- and population-level data on sex- and age-specific reproduction and survival, and has rarely been implemented. Here, we use the reproductive value method and detailed demographic data to estimate N e /N for a small and apparently isolated red-billed chough (Pyrrhocorax pyrrhocorax) population of high conservation concern. We additionally calculated two single-sample molecular genetic estimates of N e to corroborate the demographic estimate and examine evidence for unobserved immigration and gene flow. The demographic estimate of N e /N was 0.21, reflecting a high total demographic variance (σ2dg) of 0.71. Females and males made similar overall contributions to σ2dg. However, contributions varied among sex-age classes, with greater contributions from 3 year-old females than males, but greater contributions from ≥5 year-old males than females. The demographic estimate of N e was ~30, suggesting that rates of increase of inbreeding and loss of genetic variation per generation will be relatively high. Molecular genetic estimates of N e computed from linkage disequilibrium and approximate Bayesian computation were approximately 50 and 30, respectively, providing no evidence of substantial unobserved immigration which could bias demographic estimates of N e . Our analyses identify
A study on the position estimation and recovery of a small-sized mobile robot
International Nuclear Information System (INIS)
Kim, Jae Hwan
1994-02-01
Position estimation capability is important for an autonomous mobile robot, both for correct path tracking and for complete navigation of a given environment. This paper describes a system with which the robot can estimate its current position and orientation without sensing its outer environment or processing vision images, which would require a heavy computational load. The designed system is new and simple: it detects wheel slippage, the main cause of navigational error, and makes it possible to recover from the strayed position. The system is composed of an encoder on a non-driven castor, an encoded compass disc serving as an absolute reference frame, two laser-diode units with photosensors, and the pertinent data-processing hardware and software. The encoded compass disc has a two-track code along its outer perimeter, which provides both the amount and the direction of rotation when slip occurs, and gives the robot its exact turning angles. The experimental results show that the designed system detects wheel slippage and recovers the robot from its strayed position very well
Directory of Open Access Journals (Sweden)
Orlando N. Grillo
2011-03-01
Missing data are a common problem in paleontology. They make it difficult to reconstruct extinct taxa accurately and constrain the inclusion of some taxa in comparative and biomechanical studies. In particular, estimating the position of vertebrae in incomplete series is often non-empirical and does not allow precise estimation of missing parts. In this work we present a method for calculating the position of preserved middle sequences of caudal vertebrae in the saurischian dinosaur Staurikosaurus pricei, based on the length and height of preserved anterior and posterior caudal vertebral centra. Regression equations were used to estimate these dimensions for middle vertebrae and, consequently, to assess the position of the preserved middle sequences; they also allowed estimating the dimensions of non-preserved vertebrae. Results indicate that the preserved caudal vertebrae of Staurikosaurus may correspond to positions 1-3, 5, 7, 14-19/15-20, 24-25/25-26, and 29-47, and that at least 25 vertebrae had transverse processes. Total length of the tail was estimated at 134 cm and total body length at 220-225 cm.
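The regression-and-interpolation step can be sketched as follows. The positions and centrum lengths below are invented for illustration (they are not Staurikosaurus measurements), and a simple linear trend stands in for whatever regression form the authors fitted.

```python
import numpy as np

# Regression-based interpolation of centrum length along the caudal series,
# from preserved anterior and posterior vertebrae; data are hypothetical.
positions = np.array([1, 2, 3, 5, 7, 40, 43, 47])    # preserved vertebrae
lengths   = np.array([3.2, 3.1, 3.0, 2.9, 2.8, 1.2, 1.0, 0.8])  # cm

slope, intercept = np.polyfit(positions, lengths, 1)  # linear trend

# Predict the length of a non-preserved middle vertebra, e.g. position 20:
pred = slope * 20 + intercept
print(f"predicted centrum length at position 20: {pred:.2f} cm")

# Estimated tail length = sum of predicted lengths over all 47 positions:
total = sum(slope * p + intercept for p in range(1, 48))
print(f"estimated tail length: {total:.1f} cm")
```

Matching a preserved middle sequence then amounts to sliding it along the series and choosing the positions whose predicted dimensions best fit the measured centra.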
Molecular size estimation of plasma membrane β-glucan synthase from red beet root
International Nuclear Information System (INIS)
Sloan, M.E.; Eiberger, L.L.; Wasserman, B.P.
1986-01-01
Cellulose and cell wall β-D-glucans in higher plants are thought to be synthesized by the plasma membrane enzyme, β-glucan synthase. This enzyme has never been purified to homogeneity, hence its subunit composition is unknown. Partial purification of red beet root glucan synthase by glycerol density gradient centrifugation followed by SDS-PAGE yielded a highly enriched subunit of 68 kDa. Radiation inactivation of plasma membranes gave a molecular size of 450 kDa for the holoenzyme complex. This suggests that glucan synthase consists of 6 to 7 subunits and confirms electron microscope studies showing that glucan synthases exist as multi-subunit complexes embedded within the membrane
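Radiation inactivation target analysis conventionally converts the inactivating dose into a target mass with the classical empirical relation below; the abstract does not state the exact calibration used, so this is an assumption of the sketch.

```latex
% Classical radiation-inactivation target-size relation:
% D_{37} is the dose (in rad) that leaves 37\% of enzyme activity.
M_r \;\approx\; \frac{6.4 \times 10^{11}}{D_{37}}

% Consistency of the reported sizes: a 450 kDa holoenzyme built from
% 68 kDa subunits implies roughly
\frac{450\ \mathrm{kDa}}{68\ \mathrm{kDa}} \;\approx\; 6.6
\quad\text{i.e. 6 to 7 subunits.}
```

The subunit count quoted in the abstract follows directly from this ratio.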
Characterization of Diesel Soot Aggregates by Scattering and Extinction Methods
Kamimoto, Takeyuki
2006-07-01
Characteristics of diesel soot particles sampled from the exhaust of a common-rail turbo-charged diesel engine are quantified by scattering and extinction diagnostics using two newly built laser-based instruments. The radius of gyration, representing the aggregate size, is measured from the angular distribution of scattering intensity, while the soot mass concentration is measured by a two-wavelength extinction method. An approach is proposed to estimate the refractive index of diesel soot through analysis of the extinction and scattering data using an aggregate scattering theory.
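Extracting a radius of gyration from angular scattering is, in its simplest form, a Guinier analysis. The sketch below uses synthetic single-form data; real soot aggregates deviate from the Guinier form, which is why the paper uses an aggregate scattering theory instead.

```python
import numpy as np

# Guinier analysis: at small angles the scattered intensity follows
# I(q) ~ I0 * exp(-q^2 * Rg^2 / 3), so ln I vs q^2 is linear with
# slope -Rg^2 / 3. Rg and the q range below are hypothetical.
rg_true = 150.0                          # nm, assumed radius of gyration
q = np.linspace(1e-4, 5e-3, 20)          # scattering vector, 1/nm
intensity = 2.5 * np.exp(-(q * rg_true) ** 2 / 3)

slope, _ = np.polyfit(q ** 2, np.log(intensity), 1)
rg_est = np.sqrt(-3 * slope)
print(f"recovered Rg: {rg_est:.1f} nm")
```

The fit is only valid in the small-angle regime (q·Rg well below ~1), which constrains the usable angular range of the instrument.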
Directory of Open Access Journals (Sweden)
Deborah P. Shutt
2017-12-01
As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015/2016 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges.
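The Approximate Bayesian Computation machinery can be sketched in its simplest rejection form. The toy inference below (a reporting rate estimated from an invented surveillance sample) only illustrates the ABC mechanics; the paper couples ABC to full outbreak dynamics.

```python
import random

random.seed(1)

# Rejection ABC for a reporting rate: of a sample of 200 known infections,
# 36 appear in the case reports (numbers invented).
n, observed = 200, 36

accepted = []
for _ in range(20_000):
    p = random.uniform(0.0, 1.0)              # flat prior on the reporting rate
    simulated = sum(random.random() < p for _ in range(n))  # Binomial(n, p)
    if abs(simulated - observed) <= 2:        # tolerance on the summary stat
        accepted.append(p)

accepted.sort()
median = accepted[len(accepted) // 2]
print(f"posterior median reporting rate: {median:.2f} "
      f"({len(accepted)} draws accepted)")
```

Tightening the tolerance sharpens the approximate posterior at the cost of more rejected draws; the accepted sample as a whole provides the uncertainty quantification the abstract refers to.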
Kamath, Pauline L; Haroldson, Mark A; Luikart, Gordon; Paetkau, David; Whitman, Craig; van Manen, Frank T
2015-11-01
Effective population size (N(e)) is a key parameter for monitoring the genetic health of threatened populations because it reflects a population's evolutionary potential and risk of extinction due to genetic stochasticity. However, its application to wildlife monitoring has been limited because it is difficult to measure in natural populations. The isolated and well-studied population of grizzly bears (Ursus arctos) in the Greater Yellowstone Ecosystem provides a rare opportunity to examine the usefulness of different N(e) estimators for monitoring. We genotyped 729 Yellowstone grizzly bears using 20 microsatellites and applied three single-sample estimators to examine contemporary trends in generation interval (GI), effective number of breeders (N(b)) and N(e) during 1982-2007. We also used multisample methods to estimate variance (N(eV)) and inbreeding N(e) (N(eI)). Single-sample estimates revealed positive trajectories, with over a fourfold increase in N(e) (≈100 to 450) and near doubling of the GI (≈8 to 14) from the 1980s to 2000s. N(eV) (240-319) and N(eI) (256) were comparable with the harmonic mean single-sample N(e) (213) over the time period. Reanalysing historical data, we found N(eV) increased from ≈80 in the 1910s-1960s to ≈280 in the contemporary population. The estimated ratio of effective to total census size (N(e) /N(c)) was stable and high (0.42-0.66) compared to previous brown bear studies. These results support independent demographic evidence for Yellowstone grizzly bear population growth since the 1980s. They further demonstrate how genetic monitoring of N(e) can complement demographic-based monitoring of N(c) and vital rates, providing a valuable tool for wildlife managers. © 2015 John Wiley & Sons Ltd.
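Period-by-period effective-size estimates are conventionally combined as a harmonic mean, since inbreeding and drift accumulate in proportion to 1/Ne per generation, so low-Ne periods dominate. A sketch with invented yearly values (not the study's estimates, which summarize to ~213):

```python
# Harmonic mean of per-period effective-size estimates; values invented.
ne_by_period = [100.0, 150.0, 250.0, 450.0]

harmonic_mean = len(ne_by_period) / sum(1.0 / ne for ne in ne_by_period)
print(f"harmonic mean Ne: {harmonic_mean:.0f}")
```

Note the result sits well below the arithmetic mean (237.5 here): a single bottleneck period drags the long-term effective size down disproportionately.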
Fleet size estimation for spreading operation considering road geometry, weather and traffic
Directory of Open Access Journals (Sweden)
Steven I-Jy Chien
2014-02-01
Full Text Available Extreme weather conditions (i.e., snow storms in winter) have caused significant travel disruptions and increased delay and traffic accidents. Snow plowing and salt spreading are the most common countermeasures for making our roads safer for motorists. To assist highway maintenance authorities with better planning and allocation of winter maintenance resources, this study introduces an analytical model to estimate the required number of trucks for a spreading operation subject to pre-specified service time constraints, considering road geometry, weather and traffic. The complexity of the research problem lies in dealing with the heterogeneous road geometry of road sections, truck capacities, spreading patterns, and traffic speeds under different weather conditions and time periods of an event. The proposed model is applied to two maintenance yards with seven road sections in New Jersey (USA), which demonstrates that it is practical to implement under diverse operational conditions.
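The kind of calculation the abstract describes can be sketched as follows. This is not the paper's model, only a minimal illustration of the idea: each road section contributes spreading time (lane-miles over truck speed) plus reload trips implied by the truck's salt capacity, and the fleet must cover the total workload within the service-time window. All section parameters below are assumptions for illustration.

```python
import math

# Hedged sketch (not the paper's model): minimal fleet sizing for a salt-
# spreading operation. A section is (lane_miles, speed_mph, tons_per_mile,
# reload_hours); all numbers are illustrative.

def trucks_required(sections, capacity_tons, service_time_h):
    total_time = 0.0
    for lane_miles, speed_mph, rate_tons_per_mile, reload_h in sections:
        spread_h = lane_miles / speed_mph                     # time spent spreading
        salt_needed = lane_miles * rate_tons_per_mile
        reloads = max(math.ceil(salt_needed / capacity_tons) - 1, 0)
        total_time += spread_h + reloads * reload_h           # reload trips add time
    # Fleet size = total truck-hours of work divided by the service window
    return math.ceil(total_time / service_time_h)

sections = [(40, 20, 0.25, 0.5), (60, 15, 0.25, 0.5)]
print(trucks_required(sections, capacity_tons=8, service_time_h=2.0))
```

The real model additionally differentiates spreading patterns and traffic speeds by weather condition and time period, which this sketch collapses into a single speed per section.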
Gates, W. R.
1983-02-01
Estimated future energy cost savings associated with the development of cost-competitive solar thermal technologies (STT) are discussed. Analysis is restricted to STT in electric applications for 16 high-insolation/high-energy-price states. Several fuel price scenarios and three 1990 STT system costs are considered, reflecting uncertainty over future fuel prices and STT cost projections. STT R&D is found to be unacceptably risky for private industry in the absence of federal support. Energy cost savings were projected to range from $0 to $10 billion (1990 values in 1981 dollars), depending on the system cost and fuel price scenario. Normal R&D investment risks are accentuated because the Organization of Petroleum Exporting Countries (OPEC) cartel can artificially manipulate oil prices and undercut growth of alternative energy sources. Federal participation in STT R&D to help capture the potential benefits of developing cost-competitive STT was found to be in the national interest.
Effect of source depth correction on the estimation of earthquake size
International Nuclear Information System (INIS)
Romanelli, F.; Panza, G.
1995-03-01
The relationship between surface wave magnitude, Ms, and seismic moment, Mo, of earthquakes is essential for the estimation of seismic risk in any region. Under the hypothesis of constant stress drop, theoretical models predict that log Mo and Ms are related by a linear law. The slope most commonly found in the literature is around 1.5. Here we show that applying the necessary focal-depth correction to the Ms values gives a general increase of the correlation coefficient, and that a slope around 1.0 is consistent with the global data, while for regionalized data it can vary from about 1.0 to 2.0. (author). 14 refs, 3 tabs
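The quantity at issue in this abstract is the slope of a least-squares line of log10(Mo) against Ms. A minimal sketch of that fit follows; the synthetic data are generated on an exact line with slope 1.0 (the depth-corrected value the abstract reports) purely for illustration.

```python
# Hedged sketch: ordinary least-squares slope of log10(seismic moment)
# against surface-wave magnitude Ms. Data below are synthetic, built on an
# exact line with slope 1.0 for illustration only.

def fit_slope(xs, ys):
    """Slope of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

ms = [5.0, 5.5, 6.0, 6.5, 7.0]
log_m0 = [17.0 + 1.0 * m for m in ms]   # exact line, slope 1.0
print(fit_slope(ms, log_m0))
```

With real catalog data the slope is what discriminates the classical value near 1.5 from the depth-corrected value near 1.0 discussed in the abstract.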
Effect of source depth correction on the estimation of earthquake size
Energy Technology Data Exchange (ETDEWEB)
Romanelli, F [Universita degli Studi di Trieste, Trieste (Italy). Istituto di Geodesia e Geofisica; Panza, G
1995-03-01
The relationship between surface wave magnitude, Ms, and seismic moment, Mo, of earthquakes is essential for the estimation of seismic risk in any region. Under the hypothesis of constant stress drop, theoretical models predict that log Mo and Ms are related by a linear law. The slope most commonly found in the literature is around 1.5. Here we show that applying the necessary focal-depth correction to the Ms values gives a general increase of the correlation coefficient, and that a slope around 1.0 is consistent with the global data, while for regionalized data it can vary from about 1.0 to 2.0. (author). 14 refs, 3 tabs.
Estimating the Population Size and Genetic Diversity of Amur Tigers in Northeast China.
Directory of Open Access Journals (Sweden)
Hailong Dou
Full Text Available Over the past century, the endangered Amur tiger (Panthera tigris altaica) has experienced a severe contraction in demography and geographic range because of habitat loss, poaching, and prey depletion. In its historical home in Northeast China, there appears to be a single tiger population that includes tigers in Southwest Primorye and Northeast China; however, the current demographic status of this population is uncertain. Information on the abundance, distribution and genetic diversity of this population, needed for assessing the efficacy of conservation interventions, is scarce. We used noninvasive genetic detection data from scats, capture-recapture models and an accumulation curve method to estimate the abundance of Amur tigers in Northeast China. We identified 11 individual tigers (6 females and 5 males) using 10 microsatellite loci in three nature reserves between April 2013 and May 2015. These tigers are confined primarily to Hunchun Nature Reserve along the border with Russia, with an estimated population abundance of 9-11 tigers during the winter of 2014-2015. They showed a low level of genetic diversity. The mean number of alleles per locus was 2.60 and expected and observed heterozygosity were 0.42 and 0.49, respectively. We also documented long-distance dispersal (~270 km) of a male Amur tiger to Huangnihe Nature Reserve from the border, suggesting that the expansion of neighboring Russian populations may eventually help sustain Chinese populations. However, the small and isolated population recorded by this study demonstrates that there is an urgent need for more intensive regional management to create a tiger-permeable landscape and increased genetic connectivity with other populations.
Estimating the Population Size and Genetic Diversity of Amur Tigers in Northeast China.
Dou, Hailong; Yang, Haitao; Feng, Limin; Mou, Pu; Wang, Tianming; Ge, Jianping
2016-01-01
Over the past century, the endangered Amur tiger (Panthera tigris altaica) has experienced a severe contraction in demography and geographic range because of habitat loss, poaching, and prey depletion. In its historical home in Northeast China, there appears to be a single tiger population that includes tigers in Southwest Primorye and Northeast China; however, the current demographic status of this population is uncertain. Information on the abundance, distribution and genetic diversity of this population, needed for assessing the efficacy of conservation interventions, is scarce. We used noninvasive genetic detection data from scats, capture-recapture models and an accumulation curve method to estimate the abundance of Amur tigers in Northeast China. We identified 11 individual tigers (6 females and 5 males) using 10 microsatellite loci in three nature reserves between April 2013 and May 2015. These tigers are confined primarily to Hunchun Nature Reserve along the border with Russia, with an estimated population abundance of 9-11 tigers during the winter of 2014-2015. They showed a low level of genetic diversity. The mean number of alleles per locus was 2.60 and expected and observed heterozygosity were 0.42 and 0.49, respectively. We also documented long-distance dispersal (~270 km) of a male Amur tiger to Huangnihe Nature Reserve from the border, suggesting that the expansion of neighboring Russian populations may eventually help sustain Chinese populations. However, the small and isolated population recorded by this study demonstrates that there is an urgent need for more intensive regional management to create a tiger-permeable landscape and increased genetic connectivity with other populations.
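The capture-recapture family of models the abstract cites can be illustrated with its simplest member. The sketch below is the bias-corrected Chapman form of the Lincoln-Petersen estimator, not the study's actual (genetic, multi-session) model; the sample sizes are invented at the scale of the study (11 genotyped individuals).

```python
# Hedged sketch: Chapman's bias-corrected Lincoln-Petersen estimator, the
# simplest capture-recapture abundance estimate. n1 = individuals detected in
# the first sampling session, n2 = second session, m = detected in both.
# Numbers are illustrative, not the study's data.

def chapman_estimate(n1, n2, m):
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Two hypothetical scat-survey sessions with heavy overlap
print(round(chapman_estimate(n1=8, n2=7, m=6), 1))
```

Genetic capture-recapture works the same way conceptually, with multilocus genotypes from scats playing the role of physical marks.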
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2015-04-01
Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error, and only 30% of the precipitation observed by rain gauges was estimated by the radar. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general, weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the
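The Z-R conversion at the heart of the second error group can be sketched in a few lines. This is a generic power-law inversion, not the paper's DSD-derived parameterization; the Marshall-Palmer coefficients (a = 200, b = 1.6) are used only as an illustrative default.

```python
# Hedged sketch: converting radar reflectivity (in dBZ) to rain rate R (mm/h)
# through a power law Z = a * R**b. The coefficients below are the classical
# Marshall-Palmer defaults, used for illustration; the paper derives a and b
# from the normalized drop size distribution instead.

def rain_rate(dbz, a=200.0, b=1.6):
    z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)    # invert Z = a * R**b

for dbz in (20.0, 35.0, 50.0):
    print(round(rain_rate(dbz), 2))
```

Because R depends on the coefficients through a power, modest DSD-driven changes in a and b translate into large rainfall errors at high reflectivity, which is why the paper ties both the Z-R and Z-k relations back to the measured DSD.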
International Nuclear Information System (INIS)
Yu, G.
2008-12-01
Thermonuclear fusion of light atoms is considered since decades as an unlimited, safe and reliable source of energy that could eventually replace classical sources based on fossil fuel or nuclear fuel. Fusion reactor technology and materials studies are important parts of the fusion energy development program. For the time being, the most promising materials for structural applications in the future fusion power reactors are the Reduced Activation Ferritic/Martensitic (RAFM) steels, for which the greatest technology maturity has been achieved, i.e., qualified fabrication routes, welding technology and general industrial experience are largely available. The most important issues concerning the future use of RAFM steels in fusion power reactors are derived from their irradiation by 14 MeV neutrons that are the product, together with 3.5 MeV helium ions, of the envisaged fusion reactions between deuterium and tritium nuclei. Indeed, exposure of metallic materials to intense fluxes of 14 MeV neutrons will result in the formation of severe displacement damage (about 20-30 dpa per year) and high amounts of helium, which are at the origin of significant changes in the physical and mechanical properties of materials, such as hardening and embrittlement effects. This PhD Thesis work was aimed at investigating how far the Small Angle Neutron Scattering (SANS) technique could be used for detecting and characterizing nano-sized irradiation-induced defects in RAFM steels. Indeed, the resolution limit of Transmission Electron Microscopy (TEM) is about 1 nm in weak beam TEM imaging, and it is usually thought that a large number of irradiation-induced defects have a size below 1 nm in RAFM steels and that these very small defects actually contribute to the irradiation-induced hardening and embrittlement of RAFM steels occurring at irradiation temperatures below about 400 °C. The aim of this work was achieved by combining SANS experiments on unirradiated and irradiated specimens
Petersen, Dick; Howard, Carl; Prime, Zebb
2015-02-01
This paper presents an analytical formulation of the load distribution and varying effective stiffness of a ball bearing assembly with a raceway defect of varying size, subjected to static loading in the radial, axial and rotational degrees of freedom. The analytical formulation is used to study the effect of the size of the defect on the load distribution and varying stiffness of the bearing assembly. The study considers a square-shaped outer raceway defect centered in the load zone and the bearing is loaded in the radial and axial directions while the moment loads are zero. Analysis of the load distributions shows that as the defect size increases, defect-free raceway sections are subjected to increased static loading when one or more balls completely or partly destress when positioned in the defect zone. The stiffness variations that occur when balls pass through the defect zone are significantly larger and change more rapidly at the defect entrance and exit than the stiffness variations that occur for the defect-free bearing case. These larger, more rapid stiffness variations generate parametric excitations which produce the low frequency defect entrance and exit events typically observed in the vibration response of a bearing with a square-shaped raceway defect. Analysis of the stiffness variations further shows that as the defect size increases, the mean radial stiffness decreases in the loaded radial and axial directions and increases in the unloaded radial direction. The effects of such stiffness changes on the low frequency entrance and exit events in the vibration response are simulated with a multi-body nonlinear dynamic model. Previous work used the time difference between the low frequency entrance event and the high frequency exit event to estimate the size of the defect. However, these previous defect size estimation techniques cannot distinguish between defects that differ in size by an integer number of the ball angular spacing, and a third feature
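The defect size estimation from event timing that the abstract's last sentences refer to can be illustrated with the standard kinematic relation. This sketch is a generic textbook formulation, not the paper's method: for a stationary outer race, the ball set sweeps the defect at the cage (fundamental train) frequency, so the time between the entrance and exit events maps to an arc length on the raceway. All bearing dimensions below are assumptions.

```python
import math

# Hedged sketch: classical estimate of an outer-raceway defect's
# circumferential extent from the entrance-to-exit time difference dt.
# Assumes a stationary outer race; all dimensions (mm) are illustrative.

def defect_size_mm(dt_s, shaft_hz, ball_d, pitch_d, contact_deg=0.0):
    # Cage speed (fundamental train frequency) for a stationary outer race
    ftf = 0.5 * shaft_hz * (1 - (ball_d / pitch_d) * math.cos(math.radians(contact_deg)))
    # Angle swept by the ball set during dt, converted to outer-raceway arc length
    outer_r = (pitch_d + ball_d) / 2.0
    return 2 * math.pi * ftf * dt_s * outer_r

print(round(defect_size_mm(dt_s=0.001, shaft_hz=25.0, ball_d=7.0, pitch_d=39.0), 3))
```

As the abstract notes, this timing-based estimate is ambiguous for defects differing by an integer number of ball spacings, which motivates the additional vibration feature the paper pursues.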
Automated estimation of abdominal effective diameter for body size normalization of CT dose.
Cheng, Phillip M
2013-06-01
Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range 1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
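The underlying geometric definition the paper's heuristic targets is simple: the effective diameter is the diameter of a circle with the same area as the patient's cross-section. The sketch below computes it from a binary body mask; the mask, pixel spacing, and segmentation step are all assumed for illustration (the paper's contribution is the automated, scanner-robust segmentation itself).

```python
# Hedged sketch of the underlying formula only (not the paper's heuristic):
# effective diameter = diameter of the circle whose area equals the segmented
# body cross-section. The toy rectangular "mask" and pixel spacing are
# illustrative assumptions.

def effective_diameter_cm(mask, pixel_mm=1.0):
    area_mm2 = sum(sum(row) for row in mask) * pixel_mm ** 2
    # Equal-area circle: A = pi * (d/2)**2  =>  d = 2 * sqrt(A / pi)
    return 2.0 * (area_mm2 / 3.141592653589793) ** 0.5 / 10.0  # mm -> cm

# Toy body mask: 200 x 150 pixel region at 0.5 mm pixel spacing
mask = [[1] * 200 for _ in range(150)]
print(round(effective_diameter_cm(mask, pixel_mm=0.5), 2))
```

Once the effective diameter is known, a size-dependent conversion factor can be applied to CTDIvol to obtain a size-normalized dose, which is the aggregation use case the abstract describes.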
Estimation of lattice strain in nanocrystalline RuO2 by Williamson-Hall and size-strain plot methods
Sivakami, R.; Dhanuskodi, S.; Karvembu, R.
2016-01-01
RuO2 nanoparticles (RuO2 NPs) have been successfully synthesized by the hydrothermal method. Structure and the particle size have been determined by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM) and transmission electron microscopy (TEM). UV-Vis spectra reveal that the optical band gap of RuO2 nanoparticles is red shifted from 3.95 to 3.55 eV. BET measurements show a high specific surface area (SSA) of 118-133 m2/g and pore diameter (10-25 nm) has been estimated by Barret-Joyner-Halenda (BJH) method. The crystallite size and lattice strain in the samples have been investigated by Williamson-Hall (W-H) analysis assuming uniform deformation, deformation stress and deformation energy density, and the size-strain plot method. All other relevant physical parameters including stress, strain and energy density have been calculated. The average crystallite size and the lattice strain evaluated from XRD measurements are in good agreement with the results of TEM.
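The uniform-deformation Williamson-Hall analysis mentioned in the abstract is a straight-line fit: plotting beta·cos(theta) against 4·sin(theta) gives an intercept of K·lambda/D (crystallite size D) and a slope equal to the strain. The sketch below is a generic illustration on synthetic peak data built from a known size and strain, not the paper's dataset; Cu K-alpha wavelength and K = 0.9 are assumed.

```python
import math

# Hedged sketch: uniform-deformation Williamson-Hall fit,
#   beta * cos(theta) = K * lam / D + strain * 4 * sin(theta).
# Synthetic peaks built from D = 20 nm and strain = 0.002 for illustration.

K, LAM = 0.9, 0.15406  # Scherrer shape factor and Cu K-alpha wavelength (nm)

def wh_fit(two_thetas_deg, betas_rad):
    xs = [4 * math.sin(math.radians(t / 2)) for t in two_thetas_deg]
    ys = [b * math.cos(math.radians(t / 2)) for t, b in zip(two_thetas_deg, betas_rad)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return K * LAM / intercept, slope  # (crystallite size in nm, strain)

tts = [28.0, 35.1, 40.0, 54.3]  # synthetic 2-theta positions (degrees)
betas = [(K * LAM / 20.0 + 0.002 * 4 * math.sin(math.radians(t / 2)))
         / math.cos(math.radians(t / 2)) for t in tts]
size, strain = wh_fit(tts, betas)
print(round(size, 1), round(strain, 4))
```

The paper's further variants (uniform stress and uniform energy density) change only the x-axis weighting of the same linear fit.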
A NEW METHOD FOR ESTIMATING THE 3D SIZE-DISTRIBUTION CURVE OF FRAGMENTED ROCKS OUT OF 2D IMAGES
Directory of Open Access Journals (Sweden)
Souhaïl Outal
2011-05-01
Full Text Available Image analysis of rock fragmentation is used in mines and quarries to control the quality of blasting. The information obtained is the particle-size-distribution curve relating volume proportions to the sizes of fragments. Calculation of this particle-size distribution by image analysis is carried out in several steps, and each step has its inherent limitations. We focus in this paper on one of the most crucial steps: reconstructing the volumes (3D). For the 3D step, we have noticed that, due to the current acquisition method, there is no correlation between the average grey level of the surfaces of the fragments and their third dimension. Consequently, the volumes (3D) as well as the sizes (1D) have to be calculated indirectly from the extracted projected areas of the visible fragments in the images. For this purpose, we built in the laboratory a set of images of fragmented rocks resulting from blasting. Moreover, several tests based on comparisons between image analysis and screening measurements were carried out. A new stereological method, based on the comparison of the probability densities (histograms) of the same measurements (with very weak covering and overlapping), was elaborated. It allows us to estimate correctly, for a given type of rock, two intrinsic laws weighting the projected-area distribution in order to predict the volumic distribution.
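One sub-step of the pipeline described above can be sketched directly: converting the projected areas of the visible fragments into equivalent-disc diameters and binning them into a size histogram. This is only the generic 2D-measurement step; the paper's two rock-specific weighting laws that turn this into a volumic distribution are not reproduced here, and all numbers are illustrative.

```python
import math

# Hedged sketch of one sub-step only: equivalent-disc diameters from the
# projected areas of visible fragments, binned into a size histogram. The
# stereological weighting laws of the paper are rock-specific and omitted.

def size_histogram(areas_cm2, bin_edges_cm):
    # Equivalent-disc diameter: the disc with the same projected area
    diam = [2.0 * math.sqrt(a / math.pi) for a in areas_cm2]
    counts = [0] * (len(bin_edges_cm) - 1)
    for d in diam:
        for i in range(len(counts)):
            if bin_edges_cm[i] <= d < bin_edges_cm[i + 1]:
                counts[i] += 1
                break
    return counts

areas = [0.8, 3.1, 7.0, 12.5, 28.0]          # projected areas, cm^2
print(size_histogram(areas, [0, 2, 4, 8]))   # bin edges in cm
```

The resulting number-weighted histogram is exactly the quantity that must then be re-weighted toward a volume-weighted distribution, which is where the paper's stereological comparison with screening data comes in.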
DEFF Research Database (Denmark)
Kærn, Martin Ryhl; Modi, Anish; Jensen, Jonas Kjær
2015-01-01
Transport properties of fluids are indispensable for heat exchanger design. The methods for estimating the transport properties of ammonia–water mixtures are not well established in the literature. The few existent methods are developed from none or limited, sometimes inconsistent experimental...... of ammonia–water mixtures. Firstly, the different methods are introduced and compared at various temperatures and pressures. Secondly, their individual influence on the required heat exchanger size (surface area) is investigated. For this purpose, two case studies related to the use of the Kalina cycle...... the interpolative methods in contrast to the corresponding state methods. Nevertheless, all possible mixture transport property combinations used herein resulted in a heat exchanger size within 4.3 % difference for the flue-gas heat recovery boiler, and within 12.3 % difference for the oil-based boiler....
Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion
Blomquist,