WorldWideScience

Sample records for accurately estimate excess

  1. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities of 33-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. The six tests were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). …
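The validation step described above (regressing each in vitro bioaccessibility measure on in vivo bioavailability and judging tests by slope and coefficient of determination) amounts to an ordinary least-squares fit. A minimal sketch with hypothetical numbers, not the study's data:

```python
import numpy as np

def fit_bioaccessibility(bioavailability, bioaccessibility):
    """Ordinary least-squares fit of in vitro bioaccessibility (%)
    against in vivo relative bioavailability (%).
    Returns (slope, intercept, r_squared)."""
    x = np.asarray(bioavailability, dtype=float)
    y = np.asarray(bioaccessibility, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical values for five soils (illustrative only):
rba = [33, 41, 50, 57, 63]      # in vivo relative bioavailability, %
in_vitro = [30, 44, 52, 60, 68] # in vitro bioaccessibility, %
slope, intercept, r2 = fit_bioaccessibility(rba, in_vitro)
```

A test that tracks bioavailability well would show a positive slope and a coefficient of determination near 1, which is the criterion the abstract applies to rank the six in vitro procedures.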

  2. Rosiglitazone: can meta-analysis accurately estimate excess cardiovascular risk given the available data? Re-analysis of randomized trials using various methodologic approaches

    Directory of Open Access Journals (Sweden)

    Friedrich Jan O

    2009-01-01

    , although far from statistically significant. Conclusion We have shown that alternative reasonable methodological approaches to the rosiglitazone meta-analysis can yield increased or decreased risks that are either statistically significant or not significant at the p = 0.05 level for both myocardial infarction and cardiovascular death. Completion of ongoing trials may help to generate more accurate estimates of rosiglitazone's effect on cardiovascular outcomes. However, given that almost all point estimates suggest harm rather than benefit and the availability of alternative agents, the use of rosiglitazone may greatly decline prior to more definitive safety data being generated.
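One fixed-effect approach commonly contrasted in such re-analyses of rare-event trials is the Peto one-step odds ratio. The sketch below uses made-up trial counts, not the rosiglitazone data, to show how per-trial observed-minus-expected counts and hypergeometric variances are pooled:

```python
import math

def peto_or(trials, z=1.96):
    """Peto one-step fixed-effect odds ratio.
    trials: list of (events_trt, n_trt, events_ctl, n_ctl) tuples."""
    sum_oe, sum_v = 0.0, 0.0
    for a, n1, c, n2 in trials:
        n = n1 + n2
        m1 = a + c                                        # total events
        e = n1 * m1 / n                                   # expected treatment events
        v = n1 * n2 * m1 * (n - m1) / (n ** 2 * (n - 1))  # hypergeometric variance
        sum_oe += a - e
        sum_v += v
    ln_or = sum_oe / sum_v
    se = 1.0 / math.sqrt(sum_v)
    return math.exp(ln_or), (math.exp(ln_or - z * se), math.exp(ln_or + z * se))

# Hypothetical small trials with rare events (illustrative numbers only):
or_hat, ci = peto_or([(2, 100, 1, 100), (3, 150, 1, 150)])
```

With sparse events the choice of pooling method (Peto, Mantel-Haenszel with or without continuity corrections, exact methods) can move the estimate across the significance boundary, which is the instability the abstract describes.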

  3. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihoods---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows specifying temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
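The basic idea of mining historical traces keyed by route and time-of-day can be sketched as follows. The class and names here are hypothetical illustrations, not the InTraTime implementation:

```python
from collections import defaultdict
from statistics import median

class TravelTimeTable:
    """Minimal sketch: store observed travel durations per
    (origin, destination, hour-of-day) bucket and answer queries
    with the median, falling back to all hours when a bucket is empty."""
    def __init__(self):
        self._obs = defaultdict(list)  # (origin, dest, hour) -> durations [s]

    def record(self, origin, dest, hour, seconds):
        self._obs[(origin, dest, hour)].append(seconds)

    def estimate(self, origin, dest, hour):
        durations = self._obs.get((origin, dest, hour))
        if not durations:  # fall back to observations from all hours
            durations = [s for (o, d, _), xs in self._obs.items()
                         if (o, d) == (origin, dest) for s in xs]
        return median(durations) if durations else None

tt = TravelTimeTable()
for s in (95, 100, 110):
    tt.record("ward-A", "lab-3", hour=9, seconds=s)
tt.record("ward-A", "lab-3", hour=14, seconds=140)
```

A query for a well-populated bucket returns its median; a query for an unseen hour falls back to the route's overall distribution, a crude stand-in for the likelihood-weighted sub-route reasoning the abstract describes.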

  4. Accurate pose estimation for forensic identification

    Science.gov (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  5. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.

  6. Binomial Distribution Sample Confidence Intervals Estimation 6. Excess Risk

    Directory of Open Access Journals (Sweden)

    Sorana BOLBOACĂ

    2004-02-01

    Full Text Available We present the problem of confidence interval estimation for the excess risk (the Y/n-X/m fraction), a parameter which allows evaluation of the specificity of an association between predisposing or causal factors and disease in medical studies. The parameter is computed from a 2x2 contingency table of qualitative variables. The aim of this paper is to introduce five new methods of computing confidence intervals for the excess risk, called DAC, DAs, DAsC, DBinomial, and DBinomialC, and to compare their performance with the asymptotic method, called here DWald. In order to assess the methods, a program was created in the PHP programming language. The performance of each method for different sample sizes and different values of the binomial variables was assessed using a set of criteria. First, the upper and lower boundaries for a given X, Y and a specified sample size were computed for the chosen methods. Second, the average and standard deviation of the experimental errors, and the deviation relative to the imposed significance level α = 5%, were assessed. The methods were assessed on random values of the binomial variables and on sample sizes from 4 to 1000. The experiments show that the DAC method performs well in confidence interval estimation for the excess risk.
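The asymptotic (DWald) interval that the new methods are compared against has a simple closed form: the excess risk d = Y/n - X/m with standard error combining the two binomial variances. A sketch with illustrative counts:

```python
import math

def excess_risk_wald(x, m, y, n, z=1.96):
    """Asymptotic (Wald) confidence interval for the excess risk Y/n - X/m,
    based on a 2x2 contingency table: X successes of m, Y successes of n."""
    p1, p2 = x / m, y / n
    d = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / m + p2 * (1 - p2) / n)
    return d, (d - z * se, d + z * se)

# Illustrative counts (not from the paper): 10/100 exposed vs 25/100 diseased
d, (lo, hi) = excess_risk_wald(x=10, m=100, y=25, n=100)
```

The Wald interval is known to misbehave for small samples and extreme proportions, which is exactly the regime (sample sizes down to 4) where the paper's alternative methods are evaluated.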

  7. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  8. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  9. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computation efforts, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
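For context, the metric-induced robustness value such attack strategies produce can be computed directly on small graphs: remove nodes in order of the chosen metric (here, degree, computed once upfront) and average the largest-component fraction over the removal sequence. This is a simplified stand-in for illustration, not the authors' sub-quadratic algorithm:

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def degree_attack_robustness(adj):
    """R = (1/N) * sum over removals of the largest-component fraction,
    removing nodes in decreasing-degree order (degrees fixed upfront)."""
    n = len(adj)
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    removed, total = set(), 0.0
    for u in order:
        removed.add(u)
        total += largest_component(adj, removed) / n
    return total / n

# Hypothetical toy graph: a 5-node star (hub 0) plus an extra edge 1-2
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0}, 4: {0}}
R = degree_attack_robustness(adj)
```

Each largest-component pass is linear in edges, but repeating it per removal makes the whole simulation quadratic; avoiding that cost is the motivation for the sub-quadratic estimator the abstract proposes.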

  10. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
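The αβ-transformation step mentioned above maps the three phase samples to an orthogonal pair. A sketch of the amplitude-invariant Clarke transform on a balanced sample, shown for illustration only (the paper's NLS estimator then operates on these signals):

```python
import math

def alpha_beta(va, vb, vc):
    """Amplitude-invariant Clarke (alpha-beta) transform: maps three-phase
    samples to an orthogonal pair. For a balanced system of amplitude A
    at angle theta, it returns (A*cos(theta), A*sin(theta))."""
    alpha = (2 * va - vb - vc) / 3
    beta = (vb - vc) / math.sqrt(3)
    return alpha, beta

# Balanced three-phase sample at an arbitrary angle (unit amplitude):
theta = 0.7
va = math.cos(theta)
vb = math.cos(theta - 2 * math.pi / 3)
vc = math.cos(theta + 2 * math.pi / 3)
alpha, beta = alpha_beta(va, vb, vc)
```

Under imbalance the (alpha, beta) pair is no longer a clean rotating phasor, which is why the paper fits frequency, phase, and voltage parameters by nonlinear least squares instead of reading them off directly.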

  11. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Full Text Available Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants, and often result in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using the Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the data-sets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance for Bacteroides species for human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based) even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
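The core of a mixture-model abundance estimator of this kind is an EM loop over read-to-genome responsibilities. A deliberately simplified sketch with hypothetical toy data (the actual GRAMMy framework additionally models read distributions along genomes and other effects):

```python
import numpy as np

def em_abundance(likelihood, genome_lengths, n_iter=200):
    """EM sketch for a read-assignment mixture model (simplified):
    likelihood[i, j] = P(read i | genome j). Returns relative abundances
    with a genome-length correction applied to the read shares."""
    n_reads, n_genomes = likelihood.shape
    a = np.full(n_genomes, 1.0 / n_genomes)   # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibility of genome j for read i
        w = likelihood * a
        w /= w.sum(axis=1, keepdims=True)
        # M-step: update mixing proportions from mean responsibility
        a = w.mean(axis=0)
    # convert read share to genome relative abundance (length correction)
    rel = a / np.asarray(genome_lengths, dtype=float)
    return rel / rel.sum()

# Toy example: reads 0-2 map only to genome 0, read 3 only to genome 1,
# and read 4 is ambiguous between the two.
like = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
abund = em_abundance(like, genome_lengths=[1e6, 1e6])
```

The ambiguous read is split in proportion to the current abundance estimates rather than double-counted, which is how this family of methods avoids the bias of directly summarizing alignment hits.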

  12. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Full Text Available Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  13. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  14. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km(2) , and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. 
We therefore call for a unified framework to assess lion numbers in key populations to improve management and …

  15. Towards SI-traceable radio occultation excess phase processing with integrated uncertainty estimation for climate applications

    Science.gov (United States)

    Innerkofler, Josef; Pock, Christian; Kirchengast, Gottfried; Schwaerz, Marc; Jaeggi, Adrian; Schwarz, Jakob

    2016-04-01

    The GNSS Radio Occultation (RO) measurement technique is highly valuable for climate monitoring of the atmosphere as it provides accurate and precise measurements in the troposphere and stratosphere regions with global coverage, long-term stability, and virtually all-weather capability. The novel Reference Occultation Processing System (rOPS), currently under development at the WEGC at University of Graz aims to process raw RO measurements into essential climate variables, such as temperature, pressure, and tropospheric water vapor, in a way which is SI-traceable to the universal time standard and which includes rigorous uncertainty propagation. As part of this rOPS climate-quality processing system, accurate atmospheric excess phase profiles with new approaches integrating uncertainty propagation are derived from the raw occultation tracking data and orbit data. Regarding the latter, highly accurate orbit positions and velocities of the GNSS transmitter satellites and the RO receiver satellites in low Earth orbit (LEO) need to be determined, in order to enable high accuracy of the excess phase profiles. Using several representative test days of GPS orbit data from the CODE and IGS archives, which are available at accuracies of about 3 cm (position) / 0.03 mm/s (velocity), and employing Bernese 5.2 and Napeos 3.3.1 software packages for the LEO orbit determination of the CHAMP, GRACE, and MetOp RO satellites, we achieved robust SI-traced LEO orbit uncertainty estimates of about 5 cm (position) / 0.05 mm/s (velocity) for the daily orbits, including estimates of systematic uncertainty bounds and of propagated random uncertainties. For COSMIC RO satellites, we found decreased accuracy estimates near 10-15 cm (position) / 0.1-0.15 mm/s (velocity), since the characteristics of the small COSMIC satellite platforms and antennas provide somewhat less favorable orbit determination conditions. 
We present the setup of how we (I) used the Bernese and Napeos package in mutual …

  16. Accurate estimation of the boundaries of a structured light pattern.

    Science.gov (United States)

    Lee, Sukhan; Bui, Lam Quang

    2011-06-01

    Depth recovery based on structured light using stripe patterns, especially for a region-based codec, demands accurate estimation of the true boundary of a light pattern captured on a camera image. This is because the accuracy of the estimated boundary has a direct impact on the accuracy of the depth recovery. However, recovering the true boundary of a light pattern is considered difficult due to the deformation incurred primarily by the texture-induced variation of the light reflectance at surface locales. Especially for heavily textured surfaces, the deformation of pattern boundaries becomes rather severe. We present here a novel (to the best of our knowledge) method to estimate the true boundaries of a light pattern that are severely deformed due to the heavy textures involved. First, a general formula that models the deformation of the projected light pattern at the imaging end is presented, taking into account not only the light reflectance variation but also the blurring along the optical passages. The local reflectance indices are then estimated by applying the model to two specially chosen reference projections, all-bright and all-dark. The estimated reflectance indices are then used to transform the edge-deformed, captured pattern signal into the edge-corrected, canonical pattern signal. A canonical pattern implies the virtual pattern that would have resulted if there were neither the reflectance variation nor the blurring in imaging optics. Finally, we estimate the boundaries of a light pattern by intersecting the canonical form of a light pattern with that of its inverse pattern. The experimental results show that the proposed method results in significant improvements in the accuracy of the estimated boundaries under various adverse conditions.
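In the simplest blur-free case, the reference-projection step described above reduces to a per-pixel normalization between the all-dark and all-bright images. A sketch of that idea (the paper's full model additionally accounts for optical blurring, which this omits):

```python
import numpy as np

def canonical_pattern(captured, all_dark, all_bright, eps=1e-6):
    """Transform a captured stripe-pattern signal into a canonical,
    reflectance- and offset-corrected signal using the two reference
    projections (all-dark and all-bright), ignoring blur."""
    return (captured - all_dark) / np.maximum(all_bright - all_dark, eps)

# Hypothetical 1-D scanline: a binary pattern distorted by per-pixel albedo
true_pattern = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
albedo = np.array([0.9, 0.2, 0.8, 0.3, 0.6, 0.5])  # texture-induced variation
ambient = 0.05
bright = albedo * 1.0 + ambient    # response to the all-bright projection
dark = np.full_like(albedo, ambient)  # response to the all-dark projection
captured = albedo * true_pattern + ambient
canon = canonical_pattern(captured, dark, bright)
```

After normalization the texture-induced amplitude variation divides out, so pattern boundaries can be located by intersecting the canonical pattern with its inverse, as the abstract describes.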

  17. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  18. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of saving data and requires accurate estimation of the target location under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on a prediction and adaptive algorithm that reduces the number of sensor nodes needed through accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network while using fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  19. Accurate estimators of correlation functions in Fourier space

    Science.gov (United States)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud-In-Cell algorithm results in a significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
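Interlacing can be demonstrated in one dimension: sample a single mode lying above the Nyquist frequency on a regular grid and on a half-cell-shifted copy, then combine the two sets of Fourier coefficients with the compensating phase factor. The odd (n = 1) image then cancels exactly. A sketch with illustrative parameters:

```python
import numpy as np

N, H = 16, 1.0                # grid size and spacing
L = N * H
p = 10                        # true mode index, above Nyquist (N/2 = 8)
k0 = 2 * np.pi * p / L

x1 = np.arange(N) * H         # regular grid sample points
x2 = x1 + H / 2               # interlaced grid, shifted by half a cell
d1 = np.fft.fft(np.exp(1j * k0 * x1)) / N
d2 = np.fft.fft(np.exp(1j * k0 * x2)) / N

# Combine with the phase compensating the half-cell shift of grid 2.
k_m = 2 * np.pi * np.fft.fftfreq(N, d=H)   # physical bin wavenumbers
combined = 0.5 * (d1 + np.exp(-1j * k_m * H / 2) * d2)

alias_bin = p % N                          # mode 10 aliases into bin 10 (= -6)
single_alias = abs(d1[alias_bin])          # full spurious amplitude on one grid
inter_alias = abs(combined[alias_bin])     # cancelled by interlacing (odd image)
```

Each image p = m + nN picks up a relative phase (-1)^n between the two grids after the shift compensation, so odd images cancel in the average while even images (and the true mode) survive, matching the mechanism described in the abstract.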

  20. Guidelines for accurate EC50/IC50 estimation.

    Science.gov (United States)

    Sebaugh, J L

    2011-01-01

    This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the concentration corresponding to a response at the 50% control level (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.
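The distinction between the two definitions is easy to state with the 4-parameter logistic model itself. A sketch with hypothetical plateau values (when the fitted plateaus coincide with the 0% and 100% controls, the two definitions agree):

```python
def four_pl(x, a, b, c, d):
    """4-parameter logistic: a = response at zero concentration,
    d = response at infinite concentration, c = relative EC50/IC50,
    b = slope factor."""
    return d + (a - d) / (1 + (x / c) ** b)

def inverse_4pl(y, a, b, c, d):
    """Concentration producing response y under the 4PL model."""
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

# Hypothetical assay: fitted plateaus at 95 and 8, controls at 100 and 0
a, b, c, d = 95.0, 1.2, 10.0, 8.0
relative_ec50 = c                              # midway between the plateaus
absolute_ec50 = inverse_4pl(50.0, a, b, c, d)  # response = 50% of control
```

Because the plateaus (95 and 8) differ from the controls (100 and 0), the absolute EC50 here lands slightly above the relative EC50, illustrating why the guidelines require a stable, accurate 100% control before the absolute definition is used.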

  1. Efficient floating diffuse functions for accurate characterization of the surface-bound excess electrons in water cluster anions.

    Science.gov (United States)

    Zhang, Changzhe; Bu, Yuxiang

    2017-01-25

    In this work, the effect of diffuse function types (atom-centered diffuse functions versus floating functions and s-type versus p-type diffuse functions) on the structures and properties of three representative water cluster anions featuring a surface-bound excess electron is studied and we find that an effective combination of such two kinds of diffuse functions can not only reduce the computational cost but also, most importantly, considerably improve the accuracy of results and even avoid incorrect predictions of spectra and the excess electron (EE) shape. Our results indicate that (a) simple augmentation of atom-centered diffuse functions is beneficial for the vertical detachment energy convergence, but it leads to very poor descriptions for the singly occupied molecular orbital (SOMO) and lowest unoccupied molecular orbital (LUMO) distributions of the water cluster anions featuring a surface-bound excess electron and thus a significant ultraviolet spectrum redshift; (b) the ghost-atom-based floating diffuse functions can not only contribute to accurate electronic calculations of the ground state but also avoid poor and even incorrect descriptions of the SOMO and the LUMO induced by excessive augmentation of atom-centered diffuse functions; (c) the floating functions can be realized by ghost atoms and their positions could be determined through an optimization routine along the dipole moment vector direction. In addition, both the s- and p-type floating functions are necessary supplements to the basis set, as they are responsible for the ground (s-type character) and excited (p-type character) states of the surface-bound excess electron, respectively. The exponents of the diffuse functions should also be determined to make the diffuse functions cover the main region of the excess electron distribution. Note that excessive augmentation of such diffuse functions is redundant and even can lead to unreasonable LUMO characteristics.

  2. Accurate estimation of third-order moments from turbulence measurements

    Directory of Open Access Journals (Sweden)

    J. J. Podesta

    2009-02-01

    Full Text Available Politano and Pouquet's law, a generalization of Kolmogorov's four-fifths law to incompressible MHD, makes it possible to measure the energy cascade rate in incompressible MHD turbulence by means of third-order moments. In hydrodynamics, accurate measurement of third-order moments requires large amounts of data because the probability distributions of velocity-differences are nearly symmetric and the third-order moments are relatively small. Measurements of the energy cascade rate in solar wind turbulence have recently been performed for the first time, but without careful consideration of the accuracy or statistical uncertainty of the required third-order moments. This paper investigates the statistical convergence of third-order moments as a function of the sample size N. It is shown that the accuracy of the third moment ⟨(δv∥)³⟩ depends on the number of correlation lengths spanned by the data set, and a method of estimating the statistical uncertainty of the third moment is developed. The technique is illustrated using both wind tunnel data and solar wind data.
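The convergence issue can be illustrated by attaching a standard error to the sample third moment that counts only effectively independent samples, one per correlation length. A sketch with synthetic, nearly symmetric increments and an assumed correlation length of 10 samples (illustrative, not the paper's estimator):

```python
import numpy as np

def third_moment_with_error(increments, corr_len_samples):
    """Estimate <(dv)^3> and a standard error that counts only
    N / corr_len_samples effectively independent samples."""
    x = np.asarray(increments, dtype=float)
    m3 = np.mean(x ** 3)
    n_eff = max(x.size / corr_len_samples, 1.0)
    se = np.std(x ** 3, ddof=1) / np.sqrt(n_eff)
    return m3, se

# Symmetric (Gaussian) increments: the true third moment is zero, so the
# estimate should sit within a few standard errors of zero.
rng = np.random.default_rng(0)
dv = rng.standard_normal(100_000)
m3, se = third_moment_with_error(dv, corr_len_samples=10)
```

Because the cubed increments have a large spread relative to the small signed mean, the relative error of the third moment shrinks only with the number of correlation lengths spanned, which is the convergence behavior the paper quantifies.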

  3. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Directory of Open Access Journals (Sweden)

    Gagandeep Singh

    2013-05-01

    Full Text Available This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in a direction opposite to the gradient direction, and the weightage of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in better iris detection rates than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763

  4. Using inpainting to construct accurate cut-sky CMB estimators

    CERN Document Server

    Gruetjen, H F; Liguori, M; Shellard, E P S

    2015-01-01

    The direct evaluation of manifestly optimal, cut-sky CMB power spectrum and bispectrum estimators is numerically very costly, due to the presence of inverse-covariance filtering operations. This justifies the investigation of alternative approaches. In this work, we mostly focus on an inpainting algorithm that was introduced in recent CMB analyses to cure cut-sky suboptimalities of bispectrum estimators. First, we show that inpainting can equally be applied to the problem of unbiased estimation of power spectra. We then compare the performance of a novel inpainted CMB temperature power spectrum estimator to the popular apodised pseudo-$C_l$ (PCL) method and demonstrate, both numerically and with analytic arguments, that inpainted power spectrum estimates significantly outperform PCL estimates. Finally, we study the case of cut-sky bispectrum estimators, comparing the performance of three different approaches: inpainting, apodisation and a novel low-l leaning scheme. Providing an analytic argument why the loca...

  5. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  6. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  7. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  8. Simulation model accurately estimates total dietary iodine intake

    NARCIS (Netherlands)

    Verkaik-Kloosterman, J.; Veer, van 't P.; Ocke, M.C.

    2009-01-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and p

  9. Accurate estimation of solvation free energy using polynomial fitting techniques.

    Science.gov (United States)

    Shyu, Conrad; Ytreberg, F Marty

    2011-01-15

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem, 2009, 30, 2297). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and nonequidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with use of nonequidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. Copyright © 2010 Wiley Periodicals, Inc.
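
The core idea of the report above, fitting a polynomial to thermodynamic-integration slopes at nonequidistant λ values and then integrating the fit analytically, can be sketched as follows. The λ grid, the slope data, and the quadratic degree are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical TI data: slopes <dU/dlambda> at nonequidistant lambda values,
# generated here from a known polynomial g(l) = 3l^2 - 2l + 1 so that the
# exact free energy difference over [0, 1] is 1.0
lam = np.array([0.0, 0.1, 0.3, 0.6, 0.85, 1.0])
slopes = 3*lam**2 - 2*lam + 1

# Polynomial regression fit, then analytic integration of the fitted polynomial
coeffs = np.polyfit(lam, slopes, deg=2)
antider = np.polyint(coeffs)
dF = np.polyval(antider, 1.0) - np.polyval(antider, 0.0)
print(round(dF, 6))   # 1.0 for this exactly quadratic example
```

An interpolation variant (e.g. a cubic spline fitted through the same points and integrated over [0, 1]) drops in the same way when exact interpolation rather than regression is wanted.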

  10. Accurate photometric redshift probability density estimation - method comparison and application

    CERN Document Server

    Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-01-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, that can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitudes less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular Neural Network code (ANNz). In our use case, this improvemen...

  11. Accurate estimates of solutions of second order recursions

    NARCIS (Netherlands)

    Mattheij, R.M.M.

    1975-01-01

    Two important types of two dimensional matrix-vector and second order scalar recursions are studied. Both types possess two kinds of solutions (to be called forward and backward dominant solutions). For the directions of these solutions sharp estimates are derived, from which the solutions themselve
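
To make the notions of forward and backward dominant solutions concrete, here is a hedged numerical illustration using the classical three-term Bessel recurrence y[n+1] = (2n/x) y[n] - y[n-1] at x = 1. This is not the paper's general setting, but it exhibits the same phenomenon: forward recursion of the minimal solution is swamped by the dominant one, while backward (Miller-style) recursion recovers it:

```python
# Forward recursion for Bessel J_n(1): y[n+1] = 2*n*y[n] - y[n-1] (x = 1).
# J_n(1) is the minimal solution: it decays rapidly, so rounding errors
# excite the dominant solution Y_n(1) and the forward recursion explodes.
j0, j1 = 0.7651976865579666, 0.4400505857449335   # J_0(1), J_1(1)
y = [j0, j1]
for n in range(1, 24):
    y.append(2*n*y[n] - y[n-1])
print(abs(y[24]))   # huge, although the true J_24(1) is ~1e-31

# Miller-style backward recursion: start from an arbitrary tail and recur
# downward, the direction in which the minimal solution dominates, then rescale.
m = 40
b = [0.0]*(m + 2)
b[m] = 1e-30                      # arbitrary seed value for the tail
for n in range(m, 0, -1):
    b[n-1] = 2*n*b[n] - b[n+1]
scale = j0 / b[0]                 # normalise against the known J_0(1)
print(b[10]*scale)                # ~2.6306e-10, i.e. J_10(1)
```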

  12. How accurate are the time delay estimates in gravitational lensing?

    CERN Document Server

    Cuevas-Tello, J C; Tino, P; Cuevas-Tello, Juan C.; Raychaudhury, Somak; Tino, Peter

    2006-01-01

    We present a novel approach to estimate the time delay between light curves of multiple images in a gravitationally lensed system, based on Kernel methods in the context of machine learning. We perform various experiments with artificially generated irregularly-sampled data sets to study the effect of the various levels of noise and the presence of gaps of various size in the monitoring data. We compare the performance of our method with various other popular methods of estimating the time delay and conclude, from experiments with artificial data, that our method is least vulnerable to missing data and irregular sampling, within reasonable bounds of Gaussian noise. Thereafter, we use our method to determine the time delays between the two images of quasar Q0957+561 from radio monitoring data at 4 cm and 6 cm, and conclude that if only the observations at epochs common to both wavelengths are used, the time delay gives consistent estimates, which can be combined to yield 408 ± 12 days. The full 6 cm dataset, ...
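
As a baseline for the delay-estimation task described above, a simple cross-correlation estimator on evenly sampled synthetic light curves looks like this. The even sampling is an assumption for the sketch; the paper's kernel method is designed precisely for irregular sampling and gaps:

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.cumsum(rng.normal(size=600))   # smooth-ish red-noise "quasar" signal
true_delay = 37
a = base[100:500] + rng.normal(scale=0.1, size=400)                        # image A
b = base[100+true_delay:500+true_delay] + rng.normal(scale=0.1, size=400)  # image B, shifted copy

def xcorr_delay(a, b, max_lag=100):
    """Return the lag (in samples) maximising the correlation of a and b."""
    best, best_r = 0, -np.inf
    n = len(a)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:n-lag]
        else:
            x, y = a[:n+lag], b[-lag:]
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best

print(xcorr_delay(a, b))   # recovers the 37-sample shift
```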

  13. Accurate Estimators of Correlation Functions in Fourier Space

    CERN Document Server

    Sefusatti, Emiliano; Scoccimarro, Roman; Couchman, Hugh

    2015-01-01

    Efficient estimators of Fourier-space statistics for large number of objects rely on Fast Fourier Transforms (FFTs), which are affected by aliasing from unresolved small scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for e.g. the power spectrum when desired systematic biases are well under per-cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud in Cell a...

  14. Simulation model accurately estimates total dietary iodine intake.

    Science.gov (United States)

    Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C

    2009-07-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, but some young children had intakes that were too low. With lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods will contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.
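
The combination of deterministic and probabilistic techniques described above can be illustrated with a minimal Monte Carlo sketch. All numbers below (population share using iodized salt, intake distributions in µg/day) are invented for illustration and are not from the Dutch survey data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                  # simulated individuals

# Deterministic-ish part: iodine from (industrially processed) foods,
# hypothetical values in ug/day
food_iodine = rng.normal(120.0, 25.0, size=n).clip(min=0.0)

# Uncertain discretionary part: whether iodized kitchen salt is used at all,
# and how much iodine it contributes (both assumptions for illustration)
uses_salt = rng.random(n) < 0.65
salt_iodine = rng.uniform(20.0, 60.0, size=n)

total = food_iodine + uses_salt * salt_iodine
print(round(total.mean(), 1))   # ~146 ug/day: 120 + 0.65 * 40
```

The simulated distribution of `total` can then be compared against an intake requirement to estimate the fraction of the population with inadequate intake under each legislative scenario.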

  15. Accurate determination of phase arrival times using autoregressive likelihood estimation

    Directory of Open Access Journals (Sweden)

    G. Kvaerna

    1994-06-01

    Full Text Available We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single-component version and a three-component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three-component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P phases and 0.15-0.20 s for S phases. These accuracies are as good as for analyst picks, and are considerably better than the accuracies of the current onset procedure used for processing of regional array data at NORSAR. In another application, we have developed a generic procedure to reestimate the onsets of all types of first arriving P phases. By again applying the autoregressive likelihood technique, we have obtained automatic onset times of a quality such that 70% of the automatic picks are within 0.1 s of the best manual pick. For the onset time procedure currently used at NORSAR, the corresponding number is 28%. Clearly, automatic reestimation of first arriving P onsets using the autoregressive likelihood technique has the potential of significantly reducing the retiming efforts of the analyst.
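
For intuition, a much-simplified relative of such likelihood-based onset pickers is the variance-AIC picker, which for white-noise models reduces to minimising AIC(k) = k·log(var(x[:k])) + (N−k)·log(var(x[k:])) over candidate onsets k. This is a standard simplification, not the full autoregressive likelihood of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic trace: 300 samples of background noise, then a higher-energy arrival
noise = rng.normal(0.0, 1.0, 300)
signal = rng.normal(0.0, 6.0, 200)
x = np.concatenate([noise, signal])

# Variance-AIC onset picker: split the trace at every candidate sample k and
# score how well the two segments are explained by two separate variances
N = len(x)
aic = np.array([k*np.log(np.var(x[:k])) + (N - k)*np.log(np.var(x[k:]))
                for k in range(10, N - 10)])
onset = 10 + int(np.argmin(aic))
print(onset)   # close to the true onset at sample 300
```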

  16. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  17. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    CERN Document Server

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

    In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues addressed is tracking the location of a moving object in sensor networks, which carries the overhead of saving data, while requiring an accurate estimation of the target location under an energy constraint. There is no mechanism to control and maintain the data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm, which reduces the number of sensor nodes used through an accurate estimation of the target location. We use a minimum of three sensor nodes to obtain an accurate position, and this can be extended to four or five to find a more accurate location ...

  18. Revised estimates of influenza-associated excess mortality, United States, 1995 through 2005.

    Science.gov (United States)

    Foppa, Ivo M; Hossain, Md Monir

    2008-12-30

    Excess mortality due to seasonal influenza is thought to be substantial. However, influenza may often not be recognized as cause of death. Imputation methods are therefore required to assess the public health impact of influenza. The purpose of this study was to obtain estimates of monthly excess mortality due to influenza that are based on an epidemiologically meaningful model. U.S. monthly all-cause mortality, 1995 through 2005, was hierarchically modeled as a Poisson variable with a mean that depends linearly both on seasonal covariates and on influenza-certified mortality. It also allowed for overdispersion to account for extra variation that is not captured by the Poisson error. The coefficient associated with influenza-certified mortality was interpreted as the ratio of total influenza mortality to influenza-certified mortality. Separate models were fitted for four age categories (< 18, 18-49, 50-64, 65+). Bayesian parameter estimation was performed using Markov chain Monte Carlo methods. For the eleven-year study period, a total of 260,814 (95% CI: 201,011-290,556) deaths was attributed to influenza, corresponding to an annual average of 23,710, or 0.91% of all deaths. Annual estimates for influenza mortality were highly variable from year to year, but they were systematically lower than previously published estimates. The excellent fit of our model with the data suggests the validity of our estimates.
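
The regression idea behind this imputation, modelling all-cause deaths as a seasonal baseline plus a multiple of influenza-certified deaths, can be sketched with a least-squares analogue. This is a deliberate simplification of the paper's hierarchical Bayesian Poisson model, and all simulated numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
months = np.arange(132)   # eleven years of monthly data, as in the study period

# Simulated illustration: a seasonal mortality baseline plus a true
# multiplier of 4.0 on influenza-certified deaths (all numbers invented;
# for simplicity the certified series is not itself made seasonal here)
season = 200_000 + 15_000*np.cos(2*np.pi*months/12)
flu_certified = rng.gamma(2.0, 300.0, size=len(months))
all_cause = rng.poisson(season + 4.0*flu_certified)

# Regress all-cause deaths on seasonal covariates plus certified deaths; the
# last coefficient estimates the ratio of total to certified influenza mortality
X = np.column_stack([np.ones(len(months)),
                     np.cos(2*np.pi*months/12),
                     np.sin(2*np.pi*months/12),
                     flu_certified])
beta, *_ = np.linalg.lstsq(X, all_cause.astype(float), rcond=None)
print(round(beta[3], 2))   # close to the simulated ratio of 4.0
```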

  19. Revised estimates of influenza-associated excess mortality, United States, 1995 through 2005

    Directory of Open Access Journals (Sweden)

    Hossain Md Monir

    2008-12-01

    Full Text Available Abstract Background Excess mortality due to seasonal influenza is thought to be substantial. However, influenza may often not be recognized as cause of death. Imputation methods are therefore required to assess the public health impact of influenza. The purpose of this study was to obtain estimates of monthly excess mortality due to influenza that are based on an epidemiologically meaningful model. Methods and Results U.S. monthly all-cause mortality, 1995 through 2005, was hierarchically modeled as a Poisson variable with a mean that depends linearly both on seasonal covariates and on influenza-certified mortality. It also allowed for overdispersion to account for extra variation that is not captured by the Poisson error. The coefficient associated with influenza-certified mortality was interpreted as the ratio of total influenza mortality to influenza-certified mortality. Separate models were fitted for four age categories (< 18, 18-49, 50-64, 65+). Conclusion Annual estimates for influenza mortality were highly variable from year to year, but they were systematically lower than previously published estimates. The excellent fit of our model with the data suggests the validity of our estimates.

  20. [Estimation of the excess of lung cancer mortality risk associated to environmental tobacco smoke exposure of hospitality workers].

    Science.gov (United States)

    López, M José; Nebot, Manel; Juárez, Olga; Ariza, Carles; Salles, Joan; Serrahima, Eulàlia

    2006-01-14

    To estimate the excess lung cancer mortality risk associated with environmental tobacco smoke (ETS) exposure among hospitality workers. The estimation was done using objective measures in several hospitality settings in Barcelona. Vapour-phase nicotine was measured in several hospitality settings. These measurements were used to estimate the excess lung cancer mortality risk associated with ETS exposure for a 40-year working life, using the formula developed by Repace and Lowrey. Excess lung cancer mortality risk associated with ETS exposure was higher than 145 deaths per 100,000 workers in all places studied, except for cafeterias in hospitals, where the excess lung cancer mortality risk was 22 per 100,000. In discotheques, for comparison, the excess lung cancer mortality risk is 1,733 deaths per 100,000 workers. Hospitality workers are exposed to ETS levels associated with a very high excess lung cancer mortality risk. These data confirm that ETS control measures are needed to protect hospitality workers.

  1. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators...

  2. An Accurate Approach to Large-Scale IP Traffic Matrix Estimation

    Science.gov (United States)

    Jiang, Dingde; Hu, Guangmin

    This letter proposes a novel method of large-scale IP traffic matrix (TM) estimation, called algebraic reconstruction technique inference (ARTI), which is based on the partial flow measurement and Fratar model. In contrast to previous methods, ARTI can accurately capture the spatio-temporal correlations of TM. Moreover, ARTI is computationally simple since it uses the algebraic reconstruction technique. We use the real data from the Abilene network to validate ARTI. Simulation results show that ARTI can accurately estimate large-scale IP TM and track its dynamics.
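
The algebraic reconstruction technique at the heart of ARTI is the classical Kaczmarz iteration: sweep over the link-load equations y = A x and project the current flow estimate onto each row's hyperplane. The tiny routing matrix and flows below are invented for illustration, and no Fratar-model prior is included:

```python
import numpy as np

def art(A, y, iters=200):
    """Kaczmarz/ART sweeps: project the iterate onto each row's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for i in range(A.shape[0]):
            a = A[i]
            x += (y[i] - a @ x) / (a @ a) * a
    return x

# Toy "network": 4 observed link loads, 4 origin-destination flows to recover
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.],
              [1., 0., 0., 1.]])
x_true = np.array([3., 1., 4., 2.])
y = A @ x_true
x_hat = art(A, y)
print(np.round(A @ x_hat - y, 6))   # residual ~0: link loads are reproduced
```

Because this routing matrix is rank-deficient, ART started from zero converges to the minimum-norm consistent solution (here (2, 2, 3, 3)) rather than the true flows; real traffic matrix estimation resolves this ambiguity with priors such as the gravity/Fratar model and partial flow measurements.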

  3. Further result in the fast and accurate estimation of single frequency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new fast and accurate method for estimating the frequency of a complex sinusoid in complex white Gaussian environments is proposed. The new estimator comprises applications of low-pass filtering, decimation, and frequency estimation by linear prediction. It is computationally efficient yet attains the Cramer-Rao bound at moderate signal-to-noise ratios, and it is well suited for real-time applications requiring precise frequency estimation. Simulation results are included to demonstrate the performance of the proposed method.

  4. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    Science.gov (United States)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency problem in traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that it is more efficient than conventional searching methods to achieve coarse frequency estimation (locating the peak of the FFT amplitude) by applying a modified zero-crossing technique. Thus, the proposed estimation algorithm requires fewer hardware and software resources and can achieve even higher efficiency when the experimental data increase. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
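
The coarse/fine structure can be sketched as follows. The coarse step here is a plain FFT-peak search (the paper replaces this with a modified zero-crossing technique), and the fine step is an assumed dense DTFT grid search around the coarse bin, not the paper's refinement:

```python
import numpy as np

fs, n = 1000.0, 1024
t = np.arange(n) / fs
f_true = 50.3
x = np.sin(2*np.pi*f_true*t) + 0.05*np.random.default_rng(3).normal(size=n)

# Coarse step: locate the peak of the FFT amplitude, giving the nearest bin
X = np.fft.rfft(x)
k = int(np.argmax(np.abs(X)))
f_coarse = k * fs / n

# Fine step (illustrative): maximise the DTFT magnitude on a dense grid of
# candidate frequencies within one bin of the coarse estimate
cands = np.arange(f_coarse - fs/n, f_coarse + fs/n, 0.005)
mags = [abs(np.exp(-2j*np.pi*f*t) @ x) for f in cands]
f_fine = cands[int(np.argmax(mags))]
print(round(f_fine, 2))   # ~50.3
```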

  5. Requirements for accurate estimation of anisotropic material parameters by magnetic resonance elastography: A computational study.

    Science.gov (United States)

    Tweten, D J; Okamoto, R J; Bayly, P V

    2017-01-17

    To establish the essential requirements for characterization of a transversely isotropic material by magnetic resonance elastography (MRE). Three methods for characterizing nearly incompressible, transversely isotropic (ITI) materials were used to analyze data from closed-form expressions for traveling waves, finite-element (FE) simulations of waves in homogeneous ITI material, and FE simulations of waves in heterogeneous material. Key properties are the complex shear modulus μ2, shear anisotropy ϕ = μ1/μ2 - 1, and tensile anisotropy ζ = E1/E2 - 1. Each method provided good estimates of ITI parameters when both slow and fast shear waves with multiple propagation directions were present. No method gave accurate estimates when the displacement field contained only slow shear waves, only fast shear waves, or waves with only a single propagation direction. Methods based on directional filtering are robust to noise and include explicit checks of propagation and polarization. Curl-based methods led to more accurate estimates in low-noise conditions. Parameter estimation in heterogeneous materials is challenging for all methods. Multiple shear waves, both slow and fast, with different propagation directions must be present in the displacement field for accurate parameter estimates in ITI materials. Experimental design and data analysis can ensure that these requirements are met. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  6. On the accurate estimation of gap fraction during daytime with digital cover photography

    Science.gov (United States)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated raw DCP image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method resulted in accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
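
Once an image has been classified into sky and canopy pixels, the gap fraction itself is simply the sky-pixel fraction. A minimal sketch on a synthetic binary image (the fixed threshold stands in for the histogram-based classification the paper automates):

```python
import numpy as np

# Gap fraction from a binarised cover photograph: the fraction of pixels
# classified as sky. Synthetic example with gaps of known size.
img = np.zeros((200, 200), dtype=float)   # 0 = canopy (dark)
img[50:100, 50:150] = 1.0                 # sky gap: 50 x 100 pixels
img[150:170, 20:120] = 1.0                # sky gap: 20 x 100 pixels

threshold = 0.5                           # stand-in for the histogram threshold
sky = img > threshold
gap_fraction = sky.mean()
print(gap_fraction)   # (5000 + 2000) / 40000 = 0.175
```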

  7. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole-body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
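
The stacked-elliptical-slice idea can be sketched numerically. This simplified version assumes constant semi-axes and uniform density (the paper varies the slice dimensions along the segment and uses sex-specific, non-uniform densities), which lets the result be checked against the analytic elliptical cylinder:

```python
import numpy as np

# Segment as a stack of elliptical slices: illustrative, uniform-density values
L, a, b, rho = 0.30, 0.05, 0.04, 1000.0   # length (m), semi-axes (m), kg/m^3
n = 2000
dz = L / n
z = (np.arange(n) + 0.5) * dz             # slice centres along the segment

area = np.pi * a * b                      # constant here; varies per slice in general
m_slice = rho * area * dz
mass = n * m_slice
com = np.sum(m_slice * z) / mass
# Frontal-plane moment of inertia about the COM: each slice contributes its
# own inertia (m*b^2/4 for an ellipse about an in-plane axis) plus a
# parallel-axis term
inertia = np.sum(m_slice * (b**2/4 + (z - com)**2))

# Analytic check for a uniform elliptical cylinder: I = M(3b^2 + L^2)/12
print(round(mass, 4), round(com, 4))
print(abs(inertia - mass*(3*b**2 + L**2)/12) / inertia < 1e-3)   # True
```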

  8. Simple, Fast and Accurate Photometric Estimation of Specific Star Formation Rate

    CERN Document Server

    Stensbo-Smidt, Kristoffer; Igel, Christian; Zirm, Andrew; Pedersen, Kim Steenstrup

    2015-01-01

    Large-scale surveys make huge amounts of photometric data available. Because of the sheer amount of objects, spectral data cannot be obtained for all of them. Therefore it is important to devise techniques for reliably estimating physical properties of objects from photometric information alone. These estimates are needed to automatically identify interesting objects worth a follow-up investigation as well as to produce the required data for a statistical analysis of the space covered by a survey. We argue that machine learning techniques are suitable to compute these estimates accurately and efficiently. This study considers the task of estimating the specific star formation rate (sSFR) of galaxies. It is shown that a nearest neighbours algorithm can produce better sSFR estimates than traditional SED fitting. We show that we can obtain accurate estimates of the sSFR even at high redshifts using only broad-band photometry based on the u, g, r, i and z filters from Sloan Digital Sky Survey (SDSS). We addtional...

  9. Estimation and evaluation of COSMIC radio occultation excess phase using undifferenced measurements

    Science.gov (United States)

    Xia, Pengfei; Ye, Shirong; Jiang, Kecai; Chen, Dezhong

    2017-05-01

    In the GPS radio occultation technique, the atmospheric excess phase (AEP) can be used to derive the refractivity, which is an important quantity in numerical weather prediction. The AEP is conventionally estimated based on GPS double-difference or single-difference techniques. These two techniques, however, rely on the reference data in the data processing, increasing the complexity of computation. In this study, an undifferenced (ND) processing strategy is proposed to estimate the AEP. To begin with, we use PANDA (Positioning and Navigation Data Analyst) software to perform the precise orbit determination (POD) for the purpose of acquiring the position and velocity of the mass centre of the COSMIC (The Constellation Observing System for Meteorology, Ionosphere and Climate) satellites and the corresponding receiver clock offset. The bending angles, refractivity and dry temperature profiles are derived from the estimated AEP using Radio Occultation Processing Package (ROPP) software. The ND method is validated by the COSMIC products in typical rising and setting occultation events. Results indicate that rms (root mean square) errors of relative refractivity differences between undifferenced and atmospheric profiles (atmPrf) provided by UCAR/CDAAC (University Corporation for Atmospheric Research/COSMIC Data Analysis and Archive Centre) are better than 4 and 3 % in rising and setting occultation events respectively. In addition, we also compare the relative refractivity bias between ND-derived methods and atmPrf profiles of globally distributed 200 COSMIC occultation events on 12 December 2013. The statistical results indicate that the average rms relative refractivity deviation between ND-derived and COSMIC profiles is better than 2 % in the rising occultation event and better than 1.7 % in the setting occultation event. Moreover, the observed COSMIC refractivity profiles from ND processing strategy are further validated using European Centre for Medium

  10. Accurately Estimating the State of a Geophysical System with Sparse Observations: Predicting the Weather

    CERN Document Server

    An, Zhe; Abarbanel, Henry D I

    2014-01-01

    Utilizing the information in observations of a complex system to make accurate predictions through a quantitative model when observations are completed at time $T$, requires an accurate estimate of the full state of the model at time $T$. When the number of measurements $L$ at each observation time within the observation window is larger than a sufficient minimum value $L_s$, the impediments in the estimation procedure are removed. As the number of available observations is typically such that $L \\ll L_s$, additional information from the observations must be presented to the model. We show how, using the time delays of the measurements at each observation time, one can augment the information transferred from the data to the model, removing the impediments to accurate estimation and permitting dependable prediction. We do this in a core geophysical fluid dynamics model, the shallow water equations, at the heart of numerical weather prediction. The method is quite general, however, and can be utilized in the a...

  11. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Directory of Open Access Journals (Sweden)

    Beomsoo Hwang

    2015-04-01

    In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, whose measurement contains the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.
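
The torque decomposition this record describes can be sketched for a single joint. The 1-DOF limb model and all parameter values below are illustrative assumptions, not the paper's identified dynamics:

```python
import numpy as np

# Hypothetical 1-DOF limb: tau_measured = I*qdd + b*qd + m*g*l*sin(q) + tau_muscle.
# Subtracting the modeled dynamic effects isolates the muscular torque.
I, b, m, g, l = 0.12, 0.05, 3.0, 9.81, 0.25   # assumed user-specific parameters

def muscular_torque(tau_measured, q, qd, qdd):
    """Remove inertial, damping, and gravitational torques from a joint-torque reading."""
    dynamic = I * qdd + b * qd + m * g * l * np.sin(q)
    return tau_measured - dynamic

# Synthetic check: a known muscle torque is recovered from a simulated sensor reading.
q, qd, qdd = 0.4, 1.2, -0.8
tau_muscle_true = 2.5
tau_sensor = I * qdd + b * qd + m * g * l * np.sin(q) + tau_muscle_true
print(round(muscular_torque(tau_sensor, q, qd, qdd), 6))  # 2.5
```

In practice the joint kinematics (q, qd, qdd) would come from encoders, and the user-specific parameters from the identification procedure the record mentions.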

  12. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Science.gov (United States)

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

    In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, whose measurement contains the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074

  13. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectral leakage and the picket-fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least-squares (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or a number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated against the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimate follows the trend of the CRLB as SNR increases, even for a small number of samples. The average RMSE of the frequency estimate is less than 1.5 times the CRLB at SNR = 20 dB and N = 512.
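
The two-stage strategy this record describes (a coarse FFT bin to bracket the frequency, then a golden-section search over the three-parameter sine-fit residual) can be sketched as follows; the sampling setup is hypothetical and noiseless:

```python
import numpy as np

def sine_fit_residual(f, t, x):
    # Three-parameter LS fit at a fixed trial frequency f:
    # x(t) ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C
    M = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t), np.ones_like(t)])
    coef = np.linalg.lstsq(M, x, rcond=None)[0]
    return np.sum((x - M @ coef)**2)

def golden_section_freq(t, x, f_lo, f_hi, tol=1e-7):
    # Golden-section search for the frequency minimising the LS fitting error.
    g = (np.sqrt(5) - 1) / 2
    a, b = f_lo, f_hi
    c, d = b - g*(b - a), a + g*(b - a)
    while b - a > tol:
        if sine_fit_residual(c, t, x) < sine_fit_residual(d, t, x):
            b, d = d, c
            c = b - g*(b - a)
        else:
            a, c = c, d
            d = a + g*(b - a)
    return (a + b) / 2

# Simulated single tone: the coarse FFT peak brackets the search interval.
fs, n, f_true = 1000.0, 512, 123.4
t = np.arange(n) / fs
x = 1.7*np.sin(2*np.pi*f_true*t + 0.5) + 0.3
k = np.argmax(np.abs(np.fft.rfft(x)))          # coarse FFT peak bin
df = fs / n
f_hat = golden_section_freq(t, x, (k - 1)*df, (k + 1)*df)
print(round(f_hat, 3))  # ~123.4
```

The paper's three-sample scope determination is replaced here by a simple ±1-bin bracket around the FFT peak, which plays the same role.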

  14. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images

    Science.gov (United States)

    Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  15. An Accurate Method for the BDS Receiver DCB Estimation in a Regional Network

    Directory of Open Access Journals (Sweden)

    LI Xin

    2016-08-01

    An accurate approach for receiver differential code bias (DCB) estimation is proposed using BDS data obtained from a regional tracking network. In contrast to conventional methods for BDS receiver DCB estimation, the proposed method does not require a complicated ionosphere model, as long as one reference-station receiver DCB is known. The main idea of the method is that the ionospheric delay normally depends strongly on the geometric range between the BDS satellite and the receiver. Therefore, the DCBs of the non-reference-station receivers in the regional area can be estimated using single differences (SD) with the reference stations. The numerical results show that the RMS of the errors of these estimated BDS receiver DCBs over 30 days is about 0.3 ns. Additionally, after deducting these estimated receiver DCBs, with the satellite DCBs known, the extracted diurnal VTEC showed good agreement with the diurnal VTEC obtained from GIM interpolation, indicating the reliability of the estimated receiver DCBs.
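
The single-difference idea this record describes can be sketched with synthetic data: over a short baseline, the slant ionospheric delay and the satellite DCB nearly cancel when differencing the geometry-free code observable between a reference receiver (known DCB) and another receiver. All numbers, the observable model, and the sign conventions below are illustrative assumptions:

```python
import numpy as np

C = 299792458.0                                          # m/s
dcb_ref_ns, dcb_rov_ns, dcb_sat_ns = 4.0, -2.7, 1.3      # assumed biases (ns)
rng = np.random.default_rng(1)

n_epochs = 2000
iono = 3.0 + 0.5*np.sin(np.linspace(0, 8, n_epochs))     # common slant iono (m)
noise = rng.normal(0, 0.3, (2, n_epochs))                # code noise per receiver (m)

# Geometry-free code observable: P_GF = iono_term + c*(DCB_rx + DCB_sat) + noise
p_gf_ref = iono + C*1e-9*(dcb_ref_ns + dcb_sat_ns) + noise[0]
p_gf_rov = iono + C*1e-9*(dcb_rov_ns + dcb_sat_ns) + noise[1]

# Single difference cancels the common iono and satellite DCB; average over epochs.
sd = p_gf_rov - p_gf_ref
dcb_rov_est = dcb_ref_ns + float(np.mean(sd)) / (C*1e-9)
print(round(dcb_rov_est, 2))
```

Averaging over many epochs beats the code noise down, recovering the non-reference receiver's DCB relative to the known reference DCB.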

  16. Extended Kalman Filter with a Fuzzy Method for Accurate Battery Pack State of Charge Estimation

    Directory of Open Access Journals (Sweden)

    Saeed Sepasi

    2015-06-01

    As the world moves toward greenhouse gas reduction, there is increasingly active work on Li-ion chemistry-based batteries as an energy source for electric vehicles (EVs), hybrid electric vehicles (HEVs) and smart grids. In these applications, the battery management system (BMS) requires an accurate online estimate of the state of charge (SOC) of the battery pack. This estimation is difficult, especially after substantial battery aging. To address this problem, this paper presents SOC estimation for Li-ion battery packs using a fuzzy-improved extended Kalman filter (fuzzy-IEKF), regardless of cell age. The proposed approach introduces a fuzzy method, with a new class and associated membership function, that determines an approximate initial value for the SOC estimate. Subsequently, the EKF method is used, considering a single-unit model for the battery pack, to estimate the SOC for subsequent periods of battery use. This approach uses an adaptive model algorithm to update the model for each single cell in the battery pack. To verify the accuracy of the estimation method, tests were done on an aged LiFePO4 battery pack consisting of 120 cells connected in series with a nominal voltage of 432 V.
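
The EKF core of such a scheme can be sketched as a single-state filter; the fuzzy initialisation, pack-level single-unit model, and adaptive updates the record describes are omitted, and the cell parameters and linear OCV curve below are illustrative assumptions:

```python
# Minimal single-state EKF for SOC: coulomb-counting prediction plus a
# terminal-voltage measurement update.
dt, Q_cap = 1.0, 3600.0           # s, coulombs (hypothetical 1 Ah cell)
R_int = 0.05                      # ohm, assumed internal resistance
ocv = lambda s: 3.0 + 1.2 * s     # assumed linear OCV(SOC) curve
q_proc, r_meas = 1e-7, 1e-4       # process / measurement noise variances

def ekf_soc(soc0, P0, currents, voltages):
    soc, P = soc0, P0
    for i, v in zip(currents, voltages):
        # Predict: coulomb counting
        soc = soc - i * dt / Q_cap
        P = P + q_proc
        # Update: terminal voltage measurement, H = d(OCV)/d(SOC)
        H = 1.2
        K = P * H / (H * P * H + r_meas)
        soc = soc + K * (v - (ocv(soc) - R_int * i))
        P = (1 - K * H) * P
    return soc

# Simulate a 1 A discharge from SOC = 0.8; start the filter with a wrong guess.
true_soc, i_load = 0.8, 1.0
currents, voltages = [], []
for _ in range(600):
    true_soc -= i_load * dt / Q_cap
    currents.append(i_load)
    voltages.append(ocv(true_soc) - R_int * i_load)
soc_est = ekf_soc(0.5, 0.1, currents, voltages)
print(round(soc_est, 3))
```

Despite the deliberately wrong initial SOC of 0.5, the voltage updates pull the estimate onto the true trajectory, which is the role the fuzzy initial guess accelerates in the paper's scheme.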

  17. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    Science.gov (United States)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without history of any ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Higis, SRK-II, et al., all were relativley accurate. However, for eyes underwent refractive surgeries, such as LASIK, or eyes diagnosed as keratoconus, these equations may cause significant postoperative refractive error, which may cause poor satisfaction after cataract surgery. Although some methods have been carried out to solve this problem, such as Hagis-L equation[1], or using preoperative data (data before LASIK) to estimate K value[2], no precise equations were available for these eyes. Here, we introduced a novel intraocular lens power estimation method by accurate ray tracing with optical design software ZEMAX. Instead of using traditional regression formula, we adopted the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculation of intraocular lens power for a patient with keratoconus and another LASIK postoperative patient met very well with their visual capacity after cataract surgery.

  18. Interpolated-DFT-Based Fast and Accurate Amplitude and Phase Estimation for the Control of Power

    Directory of Open Access Journals (Sweden)

    Borkowski Józef

    2016-03-01

    Quality of energy produced in renewable energy systems has to be at the high level specified by the respective standards and directives. One of the most important factors affecting quality is the estimation accuracy of grid signal parameters. This paper presents a method for very fast and accurate estimation of the amplitude and phase of the grid signal using the Fast Fourier Transform procedure and maximum-decay side-lobe windows. The most important features of the method are the elimination of the impact of the conjugate component on the results and its straightforward implementation. Moreover, the measurement time is very short, even far less than one period of the grid signal. The influence of harmonics on the results is reduced by using a bandpass pre-filter. Even using a 40 dB FIR pre-filter for a grid signal with THD ≈ 38%, SNR ≈ 53 dB and a 20-30% slow-decay exponential drift, the maximum estimation errors in a real-time DSP system for 512 samples are approximately 1% for the amplitude and approximately 8.5×10⁻² rad for the phase. The errors are smaller by several orders of magnitude when using more accurate pre-filters.
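
The basic operation behind such estimators can be sketched with a windowed single-bin DFT evaluated at the grid frequency; this is a simplified stand-in for the paper's interpolated-DFT method with maximum-decay side-lobe windows (a Hann window is used here), and the signal parameters are illustrative:

```python
import numpy as np

def amp_phase(x, fs, f0):
    """Amplitude and phase of the component at f0 via a windowed one-bin DFT."""
    n = len(x)
    t = np.arange(n) / fs
    w = np.hanning(n)                       # suppresses leakage from harmonics
    X = np.sum(x * w * np.exp(-2j * np.pi * f0 * t))
    X /= np.sum(w) / 2                      # undo window gain and the 1/2 from cos
    return np.abs(X), np.angle(X)

fs, n = 5000.0, 2048
t = np.arange(n) / fs
# 50 Hz fundamental plus a 3rd harmonic, as in a distorted grid signal.
x = 10.0*np.cos(2*np.pi*50*t + 0.3) + 1.0*np.cos(2*np.pi*150*t - 0.7)
A, ph = amp_phase(x, fs, 50.0)
print(round(A, 3), round(ph, 3))
```

The windowing strongly attenuates both the conjugate component at -50 Hz and the harmonic, so the recovered amplitude and phase are close to the true 10.0 and 0.3 rad even over a short record.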

  19. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  20. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    Directory of Open Access Journals (Sweden)

    Hu Yongxiang

    2016-01-01

    On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL data that are collocated with in-water optical measurements.

  1. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  2. The accurate estimation of physicochemical properties of ternary mixtures containing ionic liquids via artificial neural networks.

    Science.gov (United States)

    Cancilla, John C; Díaz-Rodríguez, Pablo; Matute, Gemma; Torrecilla, José S

    2015-02-14

    The estimation of the density and refractive index of ternary mixtures comprising the ionic liquid (IL) 1-butyl-3-methylimidazolium tetrafluoroborate, 2-propanol, and water at a fixed temperature of 298.15 K has been attempted through artificial neural networks. The obtained results indicate that the selection of this mathematical approach was a well-suited option. The mean prediction errors obtained, after simulating with a dataset never involved in the training process of the model, were 0.050% and 0.227% for refractive index and density estimation, respectively. These accurate results, which have been attained only using the composition of the dissolutions (mass fractions), imply that, most likely, ternary mixtures similar to the one analyzed, can be easily evaluated utilizing this algorithmic tool. In addition, different chemical processes involving ILs can be monitored precisely, and furthermore, the purity of the compounds in the studied mixtures can be indirectly assessed thanks to the high accuracy of the model.

  3. Accurate coronary centerline extraction, caliber estimation and catheter detection in angiographies.

    Science.gov (United States)

    Hernandez-Vela, Antonio; Gatta, Carlo; Escalera, Sergio; Igual, Laura; Martin-Yuste, Victoria; Sabate, Manel; Radeva, Petia

    2012-11-01

    Segmentation of coronary arteries in X-ray angiography is a fundamental tool to evaluate arterial diseases and choose the proper coronary treatment. The accurate segmentation of coronary arteries has become an important topic for the registration of different modalities, which allows physicians rapid access to different medical imaging information from Computed Tomography (CT) scans or Magnetic Resonance Imaging (MRI). In this paper, we propose an accurate, fully automatic algorithm based on Graph-cuts for vessel centerline extraction, caliber estimation, and catheter detection. Vesselness, geodesic paths, and a new multi-scale edgeness map are combined to customize the Graph-cuts approach to the segmentation of tubular structures, by means of a global optimization of the Graph-cuts energy function. Moreover, a novel supervised learning methodology that integrates local and contextual information is proposed for automatic catheter detection. We evaluate the method's performance on three datasets coming from different imaging systems. The method performs as well as the expert observer w.r.t. centerline detection and caliber estimation. Moreover, the method discriminates between arteries and catheter with an accuracy of 96.5%, sensitivity of 72%, and precision of 97.4%.

  4. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    Energy Technology Data Exchange (ETDEWEB)

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P. [Department of Mechanical Engineering, Imperial College, London, SW7 2AZ (United Kingdom)

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S₀ and A₀, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A₀ to thickness variations was shown to be superior to that of S₀; however, the attenuation of A₀ under liquid loading was much higher than that of S₀. A₀ was also less sensitive than S₀ to the presence of coatings on the surface.

  5. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    Science.gov (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. Ten-minute seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm) were observed, originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work. Using raw HR significantly overestimated measured work VO2, by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can thus be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
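
The correction this record describes can be sketched as: calibrate an individual HR-to-VO2 line from the morning step test, then subtract the thermal component (measured during the rest pause) from working HR before converting. All numbers below are illustrative, not from the study:

```python
# Morning step test: paired (HR [bpm], VO2 [L/min]) points for one worker.
step_hr  = [90, 110, 130, 150]
step_vo2 = [1.0, 1.5, 2.0, 2.5]

# Least-squares line VO2 = a*HR + b (simple closed form).
n = len(step_hr)
mx = sum(step_hr) / n
my = sum(step_vo2) / n
a = (sum((h - mx)*(v - my) for h, v in zip(step_hr, step_vo2))
     / sum((h - mx)**2 for h in step_hr))
b = my - a*mx

work_hr = 140           # raw HR during a work bout
delta_hr_thermal = 20   # thermal component from the seated rest pause (bpm)

vo2_raw = a*work_hr + b                            # overestimates under heat stress
vo2_corrected = a*(work_hr - delta_hr_thermal) + b # thermal component removed
print(round(vo2_raw, 3), round(vo2_corrected, 3))  # 2.25 1.75
```

With these illustrative numbers, ignoring the 20 bpm thermal component would inflate the VO2 estimate by about 29%, the same order as the 30% average overestimation reported above.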

  6. Methods for accurate estimation of net discharge in a tidal channel

    Science.gov (United States)

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water-slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler discharge measurement system to calibrate the index velocity measurement data. The methods used to calibrate (rate) the index velocity to the channel velocity measured with the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity must first be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocities and concurrent acoustic Doppler discharge measurements were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge.
Using a comprehensive calibration method, net discharge estimates developed from the three
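
The two-step procedure this record describes (rate the index velocity against ADCP-measured channel velocity, then low-pass filter the discharge to remove the tides) can be sketched with synthetic data. The linear rating, cross-section area, and the 25-hour moving average (a crude stand-in for a proper Godin-type tidal filter) are all illustrative assumptions:

```python
import numpy as np

hours = np.arange(0, 30*24, 0.25)                 # 30 days, 15-min samples
tide = 1.2*np.sin(2*np.pi*hours/12.42)            # M2 tidal velocity (m/s)
net = 0.15                                        # net (residual) velocity (m/s)
v_channel = net + tide                            # "ADCP" mean channel velocity
v_index = 0.8*v_channel + 0.05                    # index sensor sees a scaled velocity

# (1) Rating: linear fit v_channel = a*v_index + b from concurrent ADCP data.
a, b = np.polyfit(v_index, v_channel, 1)

# (2) Discharge through a 500 m^2 section, then a 25-hour moving average
# to suppress the tidal signal and expose the net residual discharge.
area = 500.0
q = area * (a*v_index + b)
win = int(25 / 0.25)
q_net = np.convolve(q, np.ones(win)/win, mode='valid')
print(round(float(a), 3), round(float(q_net.mean()), 1))
```

The filtered series hovers near the true net discharge (500 × 0.15 = 75 m³/s here), while the raw discharge swings by hundreds of m³/s with the tide, which is why calibration error rather than tidal variability dominates the net-discharge error budget.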

  7. Infiltration-excess overland flow estimated by TOPMODEL for the conterminous United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This 5-kilometer resolution raster (grid) dataset for the conterminous United States represents the average percentage of infiltration-excess overland flow in total...

  8. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    Science.gov (United States)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.

  9. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera

    Science.gov (United States)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

    In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. A camera, on the other hand, cannot obtain the rotation continuously when feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems, we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by judging the reliability of the camera rotations and devising the state value of the Extended Kalman Filter, the proposed method performs well even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.

  10. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Directory of Open Access Journals (Sweden)

    Alessandro Saccà

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
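
The Archimedean relation this record invokes (a sphere occupies 2/3 of its circumscribed cylinder) suggests an estimator of the form V ≈ (2/3) × projected area × width, scaled by an unellipticity coefficient. The sketch below is an assumed reading of that principle, not the paper's exact formula; note it is exact for a sphere (and for any spheroid viewed along an equatorial direction):

```python
import math

def biovolume(area_2d, width, k=1.0):
    """Volume of an approximately rotationally symmetric cell from its 2D
    projected cross-sectional area and width; k is an 'unellipticity'
    coefficient (k = 1 for spheres and ellipsoids of revolution)."""
    return (2.0/3.0) * area_2d * width * k

# Validation against a sphere of radius r: projected area pi*r^2, width 2r,
# so the estimate (2/3)*pi*r^2*2r equals the true volume (4/3)*pi*r^3.
r = 5.0
est = biovolume(math.pi * r**2, 2*r)
true = (4.0/3.0) * math.pi * r**3
print(round(est, 6), round(true, 6))
```

For a prolate spheroid of semi-axes a and b seen side-on, the projected area is pi·a·b and the width 2b, so the same formula returns (4/3)·pi·a·b², again the exact volume, which is why only non-elliptic shapes need k ≠ 1.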

  11. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Science.gov (United States)

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.

  12. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite-sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design along with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
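
The singleton-excluding Watterson-type estimator this record describes can be sketched directly from the site-frequency spectrum. Under the infinite-sites model, E[ξᵢ] = θ/i for the class of sites with derived-allele count i; dropping both singleton classes (i = 1 and i = n−1, i.e. any site where one sequence carries a unique base) gives E[S_shared] = θ·(aₙ − 1 − 1/(n−1)), with aₙ = Σᵢ₌₁ⁿ⁻¹ 1/i:

```python
def theta_watterson_shared(counts, n):
    """Singleton-excluding Watterson estimator of theta.
    counts: derived-allele count (1..n-1) at each segregating site."""
    a_n = sum(1.0/i for i in range(1, n))
    s_shared = sum(1 for c in counts if 2 <= c <= n - 2)  # shared polymorphisms only
    return s_shared / (a_n - 1.0 - 1.0/(n - 1))

# Deterministic check: a sample of n = 10 whose site-frequency spectrum sits
# exactly at its expectation for theta = 2520 (i.e. 2520/i sites in class i;
# 2520 is divisible by every i from 1 to 9, so all class sizes are integers).
n, theta = 10, 2520
counts = [i for i in range(1, n) for _ in range(theta // i)]
print(round(theta_watterson_shared(counts, n), 6))  # 2520.0
```

Because random sequencing errors land almost exclusively in the excluded singleton classes, this estimator is robust to them at the cost of a modest loss of information from the external branches.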

  13. Does bioelectrical impedance analysis accurately estimate the condition of threatened and endangered desert fish species?

    Science.gov (United States)

    Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.

    2017-01-01

    Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.

  14. The potential of more accurate InSAR covariance matrix estimation for land cover mapping

    Science.gov (United States)

    Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin

    2017-04-01

Synthetic aperture radar (SAR) and Interferometric SAR (InSAR) provide both structural and electromagnetic information for the ground surface and therefore have been widely used for land cover classification. However, relatively few studies have investigated SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of the structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem and further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using the HH-polarization TerraSAR-X dataset and compared with each other using the random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from the enhancement of the quality of the observations, in addition to the classifier selection and multi-source data integration reported in previous studies.

  15. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Emanuele Viterbo

    2008-06-01

    Full Text Available This paper proposes a curve fitting technique for fast and accurate estimation of the perceived quality of streaming media contents, delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS is based on the well-known VQM objective metric, a powerful technique which is highly correlated to the more expensive and time consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate in real time the video PQoS and we can rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.
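The core idea above is to replace the expensive offline VQM computation with a cheap fitted curve that maps network parameters to a predicted quality score, evaluated in real time to pick the transmission bit rate. A minimal sketch of that idea, assuming synthetic calibration data and a simple quality-vs-log-bit-rate model (the paper's actual model form and parameters are not given in this abstract):

```python
import numpy as np

# Synthetic calibration points: bit rate (Mbit/s) vs. a VQM-like quality score.
rates = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
scores = np.array([1.2, 2.0, 3.1, 3.9, 4.4, 4.6])

# Fit quality as a linear function of log2(bit rate): a crude stand-in for
# the paper's fitted curve, but cheap enough to evaluate once per frame.
slope, intercept = np.polyfit(np.log2(rates), scores, 1)

def predicted_pqos(rate_mbps):
    """Estimate perceived quality for a candidate transmission bit rate."""
    return slope * np.log2(rate_mbps) + intercept

predicted = predicted_pqos(2.0)
```

Once fitted, the sender can evaluate `predicted_pqos` for each available scalable-coding bit rate and choose the cheapest rate that meets a target quality.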

  16. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    Science.gov (United States)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd to the beam attenuation coefficient c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  17. Which is the most accurate formula to estimate fetal weight in women with severe preterm preeclampsia?

    Science.gov (United States)

    Geerts, Lut; Widmer, Tania

    2011-02-01

To identify the most accurate formula to estimate fetal weight (EFW) from ultrasound parameters in severe preterm preeclampsia. In a prospective study, serial ultrasound assessments were performed in 123 women with severe preterm preeclampsia. The EFW, calculated for 111 live born, normal, singleton fetuses within 7 days of delivery using 38 published formulae, was compared to the actual birth weight (ABW). Accuracy was assessed by correlations, mean absolute and signed percentage errors, the percentage of correct predictions within 5-20% of ABW, and limits of agreement. Accuracy was highly variable. Most formulae systematically overestimated ABW. Five Hadlock formulae utilizing three or four variables and the Woo 3 formula had the highest accuracy and did not differ significantly (mean absolute % errors 6.8-7.2%, SDs 5.3-5.8%, > 75% of estimations within 10% of ABW and 95% limits of agreement between -18/20% and +14/15%). They were not negatively affected by clinical variables but had some inconsistency in bias over the ABW range. All other formulae, including those targeted for small, preterm or growth restricted fetuses, were inferior and/or affected by multiple clinical variables. In this gestational age window, Hadlock formulae using three or four variables or the Woo 3 formula can be recommended.
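The three-variable Hadlock formulae referenced above combine head circumference (HC), abdominal circumference (AC), and femur length (FL). As a sketch of how such a formula is applied, here is one commonly cited three-variable Hadlock variant; the coefficients are quoted from memory, are not taken from this abstract, and should be verified against the original publication before any clinical use:

```python
def hadlock_efw(hc_cm, ac_cm, fl_cm):
    """Estimated fetal weight (grams) from one commonly cited three-variable
    Hadlock formula (HC, AC, FL, all in cm). Coefficients are an assumption
    quoted from memory -- check against the original paper before use."""
    log10_efw = (1.326 - 0.00326 * ac_cm * fl_cm
                 + 0.0107 * hc_cm + 0.0438 * ac_cm + 0.158 * fl_cm)
    return 10 ** log10_efw

# Plausible measurements for a preterm fetus (illustrative values only).
efw = hadlock_efw(hc_cm=30.0, ac_cm=26.0, fl_cm=5.5)
```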

  18. mBEEF: An accurate semi-local Bayesian error estimation density functional

    Science.gov (United States)

    Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas

    2014-04-01

    We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

  19. Robust and Accurate Multiple-camera Pose Estimation Toward Robotic Applications

    Directory of Open Access Journals (Sweden)

    Yong Liu

    2014-09-01

    Full Text Available Pose estimation methods in robotics applications frequently suffer from inaccuracy due to a lack of correspondence and real-time constraints, and instability from a wide range of viewpoints, etc. In this paper, we present a novel approach for estimating the poses of all the cameras in a multi-camera system in which each camera is placed rigidly using only a few coplanar points simultaneously. Instead of solving the orientation and translation for the multi-camera system from the overlapping point correspondences among all the cameras directly, we employ homography, which can map image points with 3D coplanar-referenced points. In our method, we first establish the corresponding relations between each camera by their Euclidean geometries and optimize the homographies of the cameras; then, we solve the orientation and translation for the optimal homographies. The results from simulations and real case experiments show that our approach is accurate and robust for implementation in robotics applications. Finally, a practical implementation in a ping-pong robot is described in order to confirm the validity of our approach.
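The homography step described above maps coplanar reference points to their image projections. A minimal, generic sketch of homography estimation by the direct linear transform (DLT) with a synthetic correctness check; this illustrates the mapping the abstract relies on, not the authors' full optimization pipeline:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform (DLT): estimate the 3x3 homography H mapping
    src -> dst from >= 4 point correspondences (no 3 collinear)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right-singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: points mapped by a known homography are recovered exactly.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (2.0, 3.0)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H_est = estimate_homography(src, dst)
```

Once each camera's homography is known, the per-camera rotation and translation can be recovered by decomposing H against the plane's reference frame, which is the step the paper then jointly optimizes across cameras.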

  20. Exclusion of measurements with excessive residuals (blunders) in estimating model parameters

    CERN Document Server

    Nikiforov, I I

    2013-01-01

An adjustable algorithm for excluding conditional equations with excessive residuals (blunders) is proposed. The criteria applied in the algorithm use variable exclusion limits that decrease as the number of equations goes down. The algorithm is easy to use, converges rapidly, involves minimal subjectivity, and is highly general.
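The scheme above can be illustrated with a simple iterative rejection loop: fit, find the worst residual, and drop it only if it exceeds a limit that shrinks as equations are removed. The schedule for the limit below is an illustrative assumption, not the paper's actual criterion:

```python
import numpy as np

def reject_blunders(x, y, k_start=2.5, k_min=2.0):
    """Fit y = a*x + b by least squares, repeatedly excluding the equation
    with the worst residual when it exceeds k * RMS. The exclusion limit k
    decreases as the number of retained equations goes down (illustrative
    schedule, not the published one)."""
    keep = np.ones(len(x), dtype=bool)
    while keep.sum() > 3:
        a, b = np.polyfit(x[keep], y[keep], 1)
        res = y[keep] - (a * x[keep] + b)
        rms = np.sqrt(np.mean(res ** 2))
        if rms < 1e-9:          # perfect fit: nothing left to reject
            break
        # Limit shrinks with the number of remaining equations.
        k = max(k_min, k_start * keep.sum() / len(x))
        worst = np.argmax(np.abs(res))
        if np.abs(res[worst]) <= k * rms:
            break
        keep[np.flatnonzero(keep)[worst]] = False
    return a, b, keep

x = np.arange(10.0)
y = 2.0 * x + 1.0
y[4] += 50.0                    # plant one blunder
a, b, keep = reject_blunders(x, y)
```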

  1. Does venous blood gas analysis provide accurate estimates of hemoglobin oxygen affinity?

    Science.gov (United States)

    Huber, Fabienne L; Latshang, Tsogyal D; Goede, Jeroen S; Bloch, Konrad E

    2013-04-01

    Alterations in hemoglobin oxygen affinity can be detected by exposing blood to different PO2 and recording oxygen saturation, a method termed tonometry. It is the gold standard to measure the PO2 associated with 50 % oxygen saturation, the index used to quantify oxygen affinity (P50Tono). P50Tono is used in the evaluation of patients with erythrocytosis suspected to have hemoglobin with abnormal oxygen affinity. Since tonometry is labor intensive and not generally available, we investigated whether accurate estimates of P50 could also be obtained by venous blood gas analysis, co-oximetry, and standard equations (P50Ven). In 50 patients referred for evaluation of erythrocytosis, pH, PO2, and oxygen saturation were measured in venous blood to estimate P50Ven; P50Tono was measured for comparison. Agreement among P50Ven and P50Tono was evaluated (Bland-Altman analysis). Mean P50Tono was 25.8 (range 17.4-34.1) mmHg. The mean difference (bias) of P50Tono-P50Ven was 0.5 mmHg; limits of agreement (95 % confidence limits) were -5.2 to +6.1 mmHg. The sensitivity and specificity of P50Ven to identify the 25 patients with P50Tono outside the normal range of 22.9-26.8 mmHg were 5 and 77 %, respectively. We conclude that estimates of P50 based on venous blood gas analysis and standard equations have a low bias compared to tonometry. However, the precision of P50Ven is not sufficiently high to replace P50Tono in the evaluation of individual patients with suspected disturbances of hemoglobin oxygen affinity.
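The bias and limits of agreement quoted above come from a standard Bland-Altman analysis. A minimal sketch with synthetic P50 values (the numbers below are illustrative, not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics: bias (mean difference) and
    95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

p50_tono = np.array([25.8, 24.0, 27.5, 22.9, 26.8, 30.1])  # synthetic, mmHg
p50_ven = np.array([25.1, 24.6, 26.9, 23.5, 27.8, 28.9])   # synthetic, mmHg
bias, lo, hi = bland_altman(p50_tono, p50_ven)
```

As the abstract illustrates, a small bias can coexist with wide limits of agreement, which is exactly why P50Ven was judged insufficiently precise for individual patients.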

  2. Accurate optical flow field estimation using mechanical properties of soft tissues

    Science.gov (United States)

    Mehrabian, Hatef; Karimi, Hirad; Samani, Abbas

    2009-02-01

A novel optical flow based technique is presented in this paper to measure the nodal displacements of soft tissue undergoing large deformations. In hyperelasticity imaging, soft tissues may be compressed extensively [1] and the deformation may exceed the number of pixels ordinary optical flow approaches can detect. Furthermore, in most biomedical applications there is a large amount of image information that represents the geometry of the tissue and the number of tissue types present in the organ of interest. Such information is often ignored in applications such as image registration. In this work we incorporate the information pertaining to soft tissue mechanical behavior (a Neo-Hookean hyperelastic model is used here), in addition to the tissue geometry before compression, into a hierarchical Horn-Schunck optical flow method to overcome this weakness in detecting large deformations. Applying the proposed method to a phantom at several compression levels showed that it yields reasonably accurate displacement fields. Estimated displacement results of this phantom study, obtained for displacement fields of 85 pixels/frame and 127 pixels/frame, are reported and discussed in this paper.
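For reference, the core Horn-Schunck update that the paper's hierarchical, mechanics-regularized method builds on can be sketched in a few lines. This is the classic single-level algorithm only; the tissue-model and hierarchy extensions described in the abstract are not included:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Classic single-level Horn-Schunck optical flow: iteratively update
    the flow (u, v) toward the neighborhood average, corrected by the
    brightness-constancy residual, with smoothness weight alpha."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        # Four-neighbor averages of the current flow estimates.
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v

# A uniform horizontal ramp shifted right by 0.5 px: flow should be ~0.5.
im1 = np.tile(np.arange(32.0), (32, 1))
im2 = im1 - 0.5
u, v = horn_schunck(im1, im2)
```

The hierarchical variant simply runs this update coarse-to-fine on an image pyramid so that sub-pixel updates at coarse levels capture the large displacements the abstract describes.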

  3. Optimization of Correlation Kernel Size for Accurate Estimation of Myocardial Contraction and Relaxation

    Science.gov (United States)

    Honjo, Yasunori; Hasegawa, Hideyuki; Kanai, Hiroshi

    2012-07-01

rates estimated using different kernel sizes were examined using the normalized mean-squared error of the estimated strain rate from the actual one obtained by the 1D phase-sensitive method. Compared with conventional kernel sizes, these results suggest that the proposed correlation kernel enables more accurate measurement of the strain rate. In in vivo measurement, the regional instantaneous velocities and strain rates in the radial direction of the heart wall were analyzed in detail at an extremely high temporal resolution (frame rate of 860 Hz). In this study, transitions between contraction and relaxation could be detected by 2D tracking. These results indicate the potential of this method for high-accuracy estimation of strain rates and detailed analyses of the physiological function of the myocardium.

  4. Transthoracic echocardiography: an accurate and precise method for estimating cardiac output in the critically ill patient.

    Science.gov (United States)

    Mercado, Pablo; Maizel, Julien; Beyls, Christophe; Titeca-Beauport, Dimitri; Joris, Magalie; Kontar, Loay; Riviere, Antoine; Bonef, Olivier; Soupison, Thierry; Tribouilloy, Christophe; de Cagny, Bertrand; Slama, Michel

    2017-06-09

    % yielded a sensitivity of 88% and specificity of 66% for detecting a ΔCO-PAC of more than 10%. In critically ill mechanically ventilated patients, CO-TTE is an accurate and precise method for estimating CO. Furthermore, CO-TTE can accurately track variations in CO.

  5. Estimation of physical activity and prevalence of excessive body mass in rural and urban Polish adolescents.

    Science.gov (United States)

    Hoffmann, Karolina; Bryl, Wiesław; Marcinkowski, Jerzy T; Strażyńska, Agata; Pupek-Musialik, Danuta

    2011-01-01

Excessive body mass and sedentary lifestyle are well-known cardiovascular risk factors which, when present in the young population, may have significant health consequences in both the short and long term. The aim of the study was to evaluate the prevalence of overweight, obesity, and sedentary lifestyle in two teenage populations living in an urban or rural area. An additional aim was to compare their physical activity. The study was designed and conducted in 2009. The study population consisted of 116 students aged 15-17 years - 61 males (52.7%) and 55 females (47.3%) - randomly selected from public junior grammar schools and secondary schools in the Poznań Region. There were 61 respondents from a rural area - 32 males (52.5%) and 29 females (47.5%) - whereas 55 teenagers lived in an urban area - 29 males (52.7%) and 26 females (47.3%). Students were asked to complete a questionnaire, prepared especially for the study, containing questions concerning health and lifestyle. A basic physical examination, including measurements of anthropometric features, was carried out in all 116 students. Calculations were performed using the statistical package STATISTICA (data analysis software system), Version 8.0. When comparing these two populations, no statistically significant differences were detected in weight-to-height ratios, with the exception that the urban youths had a larger hip circumference (97.1 v. 94.3 cm, p<0.05). The problem of excessive weight affected both sexes in a similar proportion (25% of boys and 24.1% of girls, p>0.05). This paper shows that there were differences in the physical activity of teenagers living in urban and rural areas: urban students much more often declared an active lifestyle (72.7% v. 42.6%, p<0.05) and additional physical activity (not counting compulsory physical education classes).

  6. Effectiveness of prediction equations in estimating energy expenditure sample of Brazilian and Spanish women with excess body weight

    Directory of Open Access Journals (Sweden)

    Eliane Lopes Rosado

    2014-03-01

Full Text Available Objective: To assess the adequacy of predictive equations for estimation of energy expenditure (EE), compared with EE measured by indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: A cross-sectional study of 92 obese adult women [26 Brazilian (G1) and 66 Spanish (G2), aged 20-50]. Weight and height were evaluated during fasting for the calculation of body mass index and predictive equations. EE was evaluated using open-circuit indirect calorimetry with a respiratory hood. Results: In G1 and G2, the estimates obtained by Harris-Benedict, Schofield, FAO/WHO/ONU and Henry & Rees did not differ from EE measured by indirect calorimetry, which presented higher values than the equations proposed by Owen, Mifflin-St Jeor and Oxford. For G1 and G2, the predictive equation closest to the value obtained by indirect calorimetry was FAO/WHO/ONU (7.9% and 0.46% underestimation, respectively), followed by Harris-Benedict (8.6% and 1.5% underestimation, respectively). Conclusion: The equations proposed by FAO/WHO/ONU, Harris-Benedict, Schofield and Henry & Rees were adequate for estimating EE in a sample of Brazilian and Spanish women with excess body weight. The other equations underestimated EE.
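As a concrete example of one of the equations compared above, the original Harris-Benedict resting energy expenditure equation for women can be sketched as follows (coefficients of the classic 1919 formulation; the input values are illustrative, not the study's data):

```python
def harris_benedict_women(weight_kg, height_cm, age_yr):
    """Original Harris-Benedict resting energy expenditure for women,
    in kcal/day: one of the predictive equations the study compared
    against indirect calorimetry."""
    return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_yr

# Illustrative subject with excess body weight.
ree = harris_benedict_women(weight_kg=85.0, height_cm=165.0, age_yr=35.0)
```

The study's comparison amounts to computing such predictions for each subject and checking the percentage deviation from the calorimetry value.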

  7. Analytical estimation of control rod shadowing effect for excess reactivity measurement of High Temperature Engineering Test Reactor (HTTR)

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, Masaaki; Yamashita, Kiyonobu; Fujimoto, Nozomu; Nojiri, Naoki; Takeuchi, Mitsuo; Fujisaki, Shingo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment; Tokuhara, Kazumi; Nakata, Tetsuo

    1998-05-01

The control rod shadowing effect has been estimated analytically in application of the fuel addition method to excess reactivity measurement of the High Temperature Engineering Test Reactor (HTTR). The movements of control rods in the procedure of the fuel addition method have been simulated in the analysis. The calculated excess reactivity obtained by the simulation depends on the combinations of measuring control rods and compensating control rods, and varies from -10% to +50% in comparison with the excess reactivity calculated from the effective multiplication factor of the core where all control rods are fully withdrawn. The control rod shadowing effect is reduced by using multiple measuring and compensating control rods, because this reduces the neutron flux deformation during the measuring procedure. As a result, the following combinations of control rods are recommended: 1) Thirteen control rods of the center, first, and second rings will be used for the reactivity measurement. The reactivity of each control rod is measured by using the other twelve control rods for reactivity compensation. 2) Six control rods of the first ring will be used for the reactivity measurement. The reactivity of each control rod is measured by using the other five control rods for reactivity compensation. (author)

  8. Signal-intensity-ratio MRI accurately estimates hepatic iron load in hemodialysis patients

    Directory of Open Access Journals (Sweden)

    Guy Rostoker

    2017-01-01

Conclusions: This pilot study shows that liver iron determination based on signal-intensity-ratio MRI (Rennes University algorithm) very accurately identifies iron load in hemodialysis patients, by comparison with liver histology.

9. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters

    Science.gov (United States)

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2015-01-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on the real data without ground truths are specially designed. Experimental results on human body in vivo cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod. PMID:25087857
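The first key technique above, mean-shift, can be sketched in one dimension: the center-frequency estimate is repeatedly moved to the power-weighted mean of the spectrum samples inside a window around it until it converges on a spectral peak. This is a generic illustration of the mean-shift idea with synthetic data, not the published RACE implementation:

```python
import numpy as np

def mean_shift_cf(freqs, spectrum, start, bandwidth, n_iter=50):
    """1-D mean-shift on a magnitude spectrum: iteratively replace the
    center-frequency estimate with the power-weighted mean of samples
    within +/- bandwidth of the current estimate."""
    cf = float(start)
    for _ in range(n_iter):
        mask = np.abs(freqs - cf) <= bandwidth
        if not mask.any():
            break
        new_cf = np.average(freqs[mask], weights=spectrum[mask])
        if abs(new_cf - cf) < 1e-9:   # converged on a local spectral peak
            break
        cf = new_cf
    return cf

# Synthetic tag spectrum peaked at 0.15 cycles/pixel.
freqs = np.linspace(0.0, 0.5, 501)
spectrum = np.exp(-((freqs - 0.15) ** 2) / (2 * 0.01 ** 2))
cf = mean_shift_cf(freqs, spectrum, start=0.12, bandwidth=0.03)
```

RACE's second technique, combining estimates from the two tag directions, would then fuse two such 1-D estimates for robustness.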

  10. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    Science.gov (United States)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.

  11. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the ex...

  12. Accurate single-observer passive coherent location estimation based on TDOA and DOA

    Directory of Open Access Journals (Sweden)

    Li Jing

    2014-08-01

Full Text Available This paper investigates the problem of target position estimation with a single-observer passive coherent location (PCL) system. An approach that combines angle with time difference of arrival (ATDOA) is used to estimate the location of a target. Compared with the TDOA-only method, which needs two steps, the proposed method estimates the target position more directly. The constrained total least squares (CTLS) technique is applied in this approach. It achieves the Cramer–Rao lower bound (CRLB) when the parameter measurements are subject to small Gaussian-distributed errors. Performance analysis and the CRLB of this approach are also studied. Theory shows that the ATDOA method attains a lower CRLB than the TDOA-only method with the same TDOA measuring error. It can also be seen that the position of the target affects the estimation precision. At the same time, the locations of the transmitters affect both the precision and its gradient direction. Compared with TDOA, the ATDOA method can obtain more precise target position estimation. Furthermore, the proposed method accomplishes target position estimation with a single transmitter, while the TDOA-only method needs at least four transmitters to get the target position. Finally, the transmitters' position errors also affect the estimation precision.

  13. Accurate performance estimators for information retrieval based on span bound of support vector machines

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

Support vector machines have met with significant success in the information retrieval field, especially in handling text classification tasks. Although various performance estimators for SVMs have been proposed, these focus only on accuracy, which is based on the leave-one-out cross-validation procedure. Information-retrieval-related performance measures are often neglected in kernel learning methodologies. In this paper, we propose a set of information-retrieval-oriented performance estimators for SVMs, based on the span bound of the leave-one-out procedure. Experiments show that the proposed estimators are both effective and stable.

  14. Improved patient size estimates for accurate dose calculations in abdomen computed tomography

    Science.gov (United States)

    Lee, Chang-Lae

    2017-07-01

The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict actual patient doses for different human body sizes, because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of abdominal projection radiographs of 10 patients. When the results of the attenuation-based and geometry-based methods were compared with those of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
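The effective diameter that all three methods approximate is simply the diameter of the circle with the same cross-sectional area as the patient. A minimal sketch with an illustrative elliptical abdomen (the 28 cm x 20 cm dimensions are assumptions, not the study's data):

```python
import numpy as np

def effective_diameter_from_area(cross_sectional_area_cm2):
    """Effective diameter of a patient cross-section: the diameter of the
    circle having the same area as the cross-section."""
    return 2.0 * np.sqrt(cross_sectional_area_cm2 / np.pi)

# A 28 cm x 20 cm elliptical abdomen has area pi * a * b with a=14, b=10 cm,
# so its effective diameter reduces to sqrt(AP * LAT) = sqrt(28 * 20).
area = np.pi * 14.0 * 10.0
d_eff = effective_diameter_from_area(area)
```

The attenuation-based method in the paper estimates this same area from how much the projection radiograph attenuates, rather than from the reconstructed image geometry.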

  15. The GFR and GFR decline cannot be accurately estimated in type 2 diabetics.

    Science.gov (United States)

    Gaspari, Flavio; Ruggenenti, Piero; Porrini, Esteban; Motterlini, Nicola; Cannata, Antonio; Carrara, Fabiola; Jiménez Sosa, Alejandro; Cella, Claudia; Ferrari, Silvia; Stucchi, Nadia; Parvanova, Aneliya; Iliev, Ilian; Trevisan, Roberto; Bossi, Antonio; Zaletel, Jelka; Remuzzi, Giuseppe

    2013-07-01

There are no adequate studies that have formally tested the performance of different estimating formulas in patients with type 2 diabetes both with and without overt nephropathy. Here we evaluated the agreement between baseline GFRs, GFR changes at month 6, and long-term GFR decline measured by iohexol plasma clearance or estimated by 15 creatinine-based formulas in 600 type 2 diabetics followed for a median of 4.0 years. Ninety patients were hyperfiltering. The number of those identified by estimation formulas ranged from 0 to 24; 58 were not identified by any formula. Baseline GFR was significantly underestimated, and a 6-month GFR reduction was missed in hyperfiltering patients. Long-term GFR decline was also underestimated by all formulas in the whole study group and in hyper-, normo-, and hypofiltering patients considered separately. Five formulas generated positive slopes in hyperfiltering patients. Baseline concordance correlation coefficients and total deviation indexes ranged from 32.1% to 92.6% and from 0.21 to 0.53, respectively. Concordance correlation coefficients between estimated and measured long-term GFR decline ranged from -0.21 to 0.35. The agreement between estimated and measured values was also poor within each subgroup considered separately. Thus, our study questions the use of any estimation formula to identify hyperfiltering patients and to monitor renal disease progression and response to treatment in type 2 diabetics without overt nephropathy.

  16. Accurate and efficient velocity estimation using Transmission matrix formalism based on the domain decomposition method

    Science.gov (United States)

    Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong

    2017-03-01

Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent; this problem can be avoided by introducing the Transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large scale models. However, the ITSP method is currently designed for smooth media; therefore, the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of the velocity estimation for models with large perturbations, as well as guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed time all demonstrate the validity of the proposed DDM.

  17. Comparing the standards of one metabolic equivalent of task in accurately estimating physical activity energy expenditure based on acceleration.

    Science.gov (United States)

    Kim, Dohyun; Lee, Jongshill; Park, Hoon Ki; Jang, Dong Pyo; Song, Soohwa; Cho, Baek Hwan; Jung, Yoo-Suk; Park, Rae-Woong; Joo, Nam-Seok; Kim, In Young

    2016-08-24

The purpose of the study is to analyse how the standard of resting metabolic rate (RMR) affects estimation of the metabolic equivalent of task (MET) using an accelerometer. In order to investigate the effect on estimation according to intensity of activity, comparisons were conducted between 3.5 ml O2 · kg(-1) · min(-1) and individually measured resting VO2 as the standard for 1 MET. MET was estimated by linear regression equations derived through five-fold cross-validation using the two types of MET values and accelerations; the accuracy of estimation was analysed through cross-validation, Bland-Altman plots, and a one-way ANOVA test. There were no significant differences in the RMS error after cross-validation. However, in modified Bland-Altman plots, the estimations based on individually measured RMR differed by as much as 0.5 METs from those based on the RMR of 3.5 ml O2 · kg(-1) · min(-1). Finally, the results of the ANOVA test indicated that the individual RMR-based estimations showed fewer significant differences between the reference and estimated values at each intensity of activity. In conclusion, the RMR standard is a factor that affects accurate estimation of METs from acceleration; therefore, the RMR should be individually specified when used for estimation of METs with an accelerometer.
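The comparison above boils down to fitting VO2 against acceleration and then dividing by either the conventional 3.5 ml O2/kg/min or the individually measured RMR. A minimal sketch with synthetic calibration data (the numbers are illustrative assumptions, not the study's measurements):

```python
import numpy as np

# Synthetic calibration data: mean acceleration (counts/min) vs. measured VO2.
accel = np.array([100.0, 500.0, 1500.0, 3000.0, 5000.0])
vo2 = np.array([4.0, 7.0, 12.0, 21.0, 33.0])   # ml O2/kg/min (synthetic)

# Linear regression of VO2 on acceleration, as in the study's equations.
slope, intercept = np.polyfit(accel, vo2, 1)

def mets(acc, rmr=3.5):
    """METs from acceleration via the fitted linear equation, normalized by
    either the conventional 3.5 ml O2/kg/min or an individual RMR."""
    return (slope * acc + intercept) / rmr

conventional = mets(3000.0)            # 1 MET = 3.5 ml O2/kg/min
individual = mets(3000.0, rmr=3.0)     # individually measured RMR (assumed)
```

The same activity therefore maps to different MET values under the two standards, which is exactly the bias the Bland-Altman comparison quantifies.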

  18. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta)

    Directory of Open Access Journals (Sweden)

    Rahmann Sven

    2004-06-01

    Full Text Available Abstract Background In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) then inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML and MrBayes) reveals seven well supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover the accuracy is significantly improved as shown by parametric bootstrap.

  19. Accurate kinetic parameter estimation during progress curve analysis of systems with endogenous substrate production.

    Science.gov (United States)

    Goudar, Chetan T

    2011-10-01

    We have identified an error in the published integral form of the modified Michaelis-Menten equation that accounts for endogenous substrate production. The correct solution is presented, and the errors in the substrate concentration, S, and in the kinetic parameters Vm, Km, and R resulting from the incorrect solution are characterized. The incorrect integral form resulted in substrate concentration errors as high as 50%, which in turn produced 7-50% error in kinetic parameter estimates. To better reflect experimental scenarios, noise-containing substrate depletion data were analyzed with both the incorrect and correct integral equations. While both equations resulted in identical fits to substrate depletion data, the final estimates of Vm, Km, and R were different, and the Km and R estimates from the incorrect integral equation deviated substantially from the actual values. Another observation was that at R = 0, the incorrect integral equation reduced to the correct form of the Michaelis-Menten equation. We believe this combination of excellent fits to experimental data, albeit with incorrect kinetic parameter estimates, and the reduction to the Michaelis-Menten equation at R = 0 is primarily responsible for the error going unnoticed. However, the resulting error in kinetic parameter estimates will lead to incorrect biological interpretation, and we urge the use of the correct integral form presented in this study.
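    The underlying model can be sketched numerically. Below is a minimal simulation of the modified Michaelis-Menten rate equation dS/dt = R - Vm·S/(Km + S), with illustrative parameter values that are not taken from the paper; at R = 0 it reduces to the classical depletion model, mirroring the reduction property noted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter values for illustration only
Vm, Km, R = 1.0, 0.5, 0.1   # max rate, Michaelis constant, endogenous production
S0 = 5.0                    # initial substrate concentration

def dSdt(t, S):
    # Modified Michaelis-Menten: enzymatic consumption plus endogenous production R
    return R - Vm * S / (Km + S)

def dSdt0(t, S):
    # Classical Michaelis-Menten depletion (the R = 0 special case)
    return -Vm * S / (Km + S)

t_eval = np.linspace(0.0, 20.0, 50)
sol = solve_ivp(dSdt, (0.0, 20.0), [S0], t_eval=t_eval, rtol=1e-8)
sol0 = solve_ivp(dSdt0, (0.0, 20.0), [S0], t_eval=t_eval, rtol=1e-8)

# With R > 0 the substrate settles at the steady state S* = R*Km/(Vm - R)
# instead of depleting to zero as in the classical model.
```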

  20. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    Science.gov (United States)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
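    The core idea, binning with a hash table so that only occupied bins consume memory, can be sketched in a few lines. The paper's implementation is in C++; the Python dict below is only a stand-in for the hash table, and the data and bin width are illustrative.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
# Illustrative data: 100k points in 10 dimensions (a dense 10-D grid would not fit in memory)
data = rng.normal(size=(100_000, 10))
bin_width = 0.5

# Hash-table binning: only occupied bins get an entry
counts = defaultdict(int)
for point in data:
    key = tuple(np.floor(point / bin_width).astype(int))
    counts[key] += 1

def density(x):
    # Estimated density at x: count in its bin / (N * bin volume)
    key = tuple(np.floor(np.asarray(x) / bin_width).astype(int))
    return counts.get(key, 0) / (len(data) * bin_width ** len(x))

# Memory scales with the number of occupied bins, not with bins ** dimensions
print(f"{len(counts)} occupied bins store all {len(data)} points")
```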

  1. Precise Estimation of Cosmological Parameters Using a More Accurate Likelihood Function

    Science.gov (United States)

    Sato, Masanori; Ichiki, Kiyotomo; Takeuchi, Tsutomu T.

    2010-12-01

    The estimation of cosmological parameters from a given data set requires a construction of a likelihood function which, in general, has a complicated functional form. We adopt a Gaussian copula and construct a copula likelihood function for the convergence power spectrum from a weak lensing survey. We show that parameter estimation based on the Gaussian likelihood erroneously introduces a systematic shift in the confidence region, in particular for the parameter of the dark energy equation of state, w. Thus, the copula likelihood should be used in future cosmological observations.
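    For readers unfamiliar with copula likelihoods, a minimal sketch of the Gaussian copula log-likelihood follows; the function name and toy data are illustrative, not from the paper. It uses the standard identity log c(u) = -1/2 log|R| - 1/2 z'(R^-1 - I)z with z = Phi^-1(u), where R is the correlation matrix.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_loglik(u, R):
    """Log-likelihood of a Gaussian copula with correlation matrix R.

    u : (n, d) array of values in (0, 1), e.g. empirical CDF transforms
        of each data dimension (the marginals).
    """
    z = norm.ppf(u)                                  # map uniforms to Gaussian scores
    Rinv = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)
    # Quadratic form z^T (R^{-1} - I) z for every sample, summed over samples
    quad = np.einsum('ni,ij,nj->n', z, Rinv - np.eye(R.shape[0]), z)
    return -0.5 * (u.shape[0] * logdet + quad.sum())

# Toy check: with R = I the copula density is identically 1, so log-likelihood is 0
u = np.random.default_rng(1).uniform(0.01, 0.99, size=(100, 2))
loglik_indep = gaussian_copula_loglik(u, np.eye(2))
```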

  2. A Void Reference Sensor-Multiple Signal Classification Algorithm for More Accurate Direction of Arrival Estimation of Low Altitude Target

    Institute of Scientific and Technical Information of China (English)

    XIAO Hui; SUN Jin-cai; YUAN Jun; NIU Yi-long

    2007-01-01

    The MUSIC (multiple signal classification) algorithm exists for direction of arrival (DOA) estimation. This paper presents a different MUSIC algorithm for more accurate estimation of low altitude targets. The possibility of better performance is analyzed using a void reference sensor (VRS) in the MUSIC algorithm. The following two topics are discussed: 1) the time delay formula and the VRS-MUSIC algorithm with the VRS located on the negative z-axis; 2) the DOA estimation results of the VRS-MUSIC and MUSIC algorithms. The simulation results show the VRS-MUSIC algorithm has three advantages compared with MUSIC: 1) When the signal to noise ratio (SNR) is more than -5 dB, the direction estimation error is half that obtained by MUSIC; 2) The side lobes are lower and the stability is better; 3) The array size that the algorithm requires is smaller.
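    As background, a minimal standard MUSIC estimator (not the VRS variant proposed in the paper) can be sketched for a uniform linear array with a single source; all array and signal parameters below are illustrative assumptions.

```python
import numpy as np

def steering(theta, n_sensors, spacing=0.5):
    # Plane-wave steering vector for a uniform linear array (spacing in wavelengths)
    k = 2 * np.pi * spacing * np.arange(n_sensors)
    return np.exp(1j * k * np.sin(theta))

rng = np.random.default_rng(0)
n_sensors, n_snapshots, true_doa = 8, 200, np.deg2rad(20.0)

# Simulated snapshots: one narrowband source plus complex white noise
a = steering(true_doa, n_sensors)
s = rng.normal(size=n_snapshots) + 1j * rng.normal(size=n_snapshots)
noise = 0.1 * (rng.normal(size=(n_sensors, n_snapshots))
               + 1j * rng.normal(size=(n_sensors, n_snapshots)))
X = np.outer(a, s) + noise

# MUSIC: noise subspace from the sample-covariance eigendecomposition
Rxx = X @ X.conj().T / n_snapshots
eigvals, eigvecs = np.linalg.eigh(Rxx)      # ascending eigenvalues
En = eigvecs[:, :-1]                        # all but the largest: noise subspace (1 source)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t, n_sensors)) ** 2
                     for t in grid])
estimate = np.rad2deg(grid[np.argmax(spectrum)])   # peak of the pseudospectrum
```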

  3. A Generalized Subspace Least Mean Square Method for High-resolution Accurate Estimation of Power System Oscillation Modes

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Peng; Zhou, Ning; Abdollahi, Ali

    2013-09-10

    A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.

  4. How to efficiently obtain accurate estimates of flower visitation rates by pollinators

    NARCIS (Netherlands)

    Fijen, Thijs P.M.; Kleijn, David

    2017-01-01

    Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.

  5. Fast and accurate haplotype frequency estimation for large haplotype vectors from pooled DNA data

    Directory of Open Access Journals (Sweden)

    Iliadis Alexandros

    2012-10-01

    Full Text Available Abstract Background Typically, the first phase of a genome wide association study (GWAS) includes genotyping across hundreds of individuals and validation of the most significant SNPs. Allelotyping of pooled genomic DNA is a common approach to reduce the overall cost of the study. Knowledge of haplotype structure can provide additional information to single locus analyses. Several methods have been proposed for estimating haplotype frequencies in a population from pooled DNA data. Results We introduce a technique for haplotype frequency estimation in a population from pooled DNA samples, focusing on datasets containing a small number of individuals per pool (2 or 3 individuals) and a large number of markers. We compare our method with the publicly available state-of-the-art algorithms HIPPO and HAPLOPOOL on datasets of varying numbers of pools and marker sizes. We demonstrate that our algorithm provides improvements in terms of accuracy and computational time over competing methods for large numbers of markers while demonstrating comparable performance for smaller marker sizes. Our method is implemented in the "Tree-Based Deterministic Sampling Pool" (TDSPool) package which is available for download at http://www.ee.columbia.edu/~anastas/tdspool. Conclusions Using a tree-based deterministic sampling technique we present an algorithm for haplotype frequency estimation from pooled data. Our method demonstrates superior performance in datasets with a large number of markers and could be the method of choice for haplotype frequency estimation in such datasets.

  6. Do wavelet filters provide more accurate estimates of reverberation times at low frequencies

    DEFF Research Database (Denmark)

    Sobreira Seoane, Manuel A.; Pérez Cabo, David; Agerkvist, Finn T.

    2016-01-01

    The continuous wavelet transform (CWT) has been implemented using a Morlet mother function. Although, in general, the wavelet filter bank performs better than the usual filters, the influence on the measurements of decaying modes outside the filter bandwidth has been detected, leading to a biased estimation...

  7. Accurate estimation of influenza epidemics using Google search data via ARGO.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
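    The core ARGO design, autoregressive lags of the flu signal augmented with exogenous search-volume regressors, can be sketched as follows. The real model uses L1-regularized regression over many individual query terms; this plain least-squares version on synthetic data only illustrates the structure of the design matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300
t = np.arange(T)
# Synthetic "flu activity" with weekly-resolution seasonality, plus a noisy
# search-volume series that tracks it (a stand-in for Google search data)
flu = 10 + 3 * np.sin(2 * np.pi * t / 52) + rng.normal(scale=0.3, size=T)
search = flu + rng.normal(scale=0.5, size=T)

# ARGO-style design: intercept, p autoregressive lags, one exogenous regressor
p = 3
rows = [np.r_[1.0, flu[i - p:i], search[i]] for i in range(p, T)]
X, y = np.array(rows), flu[p:]

# Plain least squares here; ARGO itself uses L1 regularization
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
```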

  8. Accurate estimation of dose distributions inside an eye irradiated with {sup 106}Ru plaques

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, L.; Sauerwein, W. [Universitaetsklinikum Essen (Germany). NCTeam, Strahlenklinik; Sempau, J.; Zaragoza, F.J. [Universitat Politecnica de Catalunya, Barcelona (Spain). Inst. de Tecniques Energetiques; Wittig, A. [Marburg Univ. (Germany). Klinik fuer Strahlentherapie und Radioonkologie

    2013-01-15

    Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with {sup 106}Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of {sup 106}Ru over {sup 106}Rh into {sup 106}Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were firstly validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms in the eye and for the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement with respect to a posterior placement of a CCA plaque for treating a posterior tumor would reduce from 40 to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in the dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step

  9. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    Science.gov (United States)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was predicted with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology has resulted in a prediction of the sample age far more accurate than any report in the literature.

  10. Accurate Estimation of Orientation Parameters of Uav Images Through Image Registration with Aerial Oblique Imagery

    Science.gov (United States)

    Onyango, F. A.; Nex, F.; Peter, M. S.; Jende, P.

    2017-05-01

    Unmanned Aerial Vehicles (UAVs) have gained popularity in acquiring geotagged, low cost and high resolution images. However, the images acquired by UAV-borne cameras often have poor georeferencing information, because of the low quality on-board Global Navigation Satellite System (GNSS) receiver. In addition, lightweight UAVs have a limited payload capacity to host a high quality on-board Inertial Measurement Unit (IMU). Thus, orientation parameters of images acquired by UAV-borne cameras may not be very accurate. Poorly georeferenced UAV images can be correctly oriented using accurately oriented airborne images capturing a similar scene by finding correspondences between the images. This is not a trivial task considering the image pairs have huge variations in scale, perspective and illumination conditions. This paper presents a procedure to successfully register UAV and aerial oblique imagery. The proposed procedure implements the use of the AKAZE interest operator for feature extraction in both images. Brute force is implemented to find putative correspondences and later on Lowe's ratio test (Lowe, 2004) is used to discard a significant number of wrong matches. In order to filter out the remaining mismatches, the putative correspondences are used in the computation of multiple homographies, which aid in the reduction of outliers significantly. In order to increase the number and improve the quality of correspondences, the impact of pre-processing the images using the Wallis filter (Wallis, 1974) is investigated. This paper presents the test results of different scenarios and the respective accuracies compared to a manual registration of the finally computed fundamental and essential matrices that encode the orientation parameters of the UAV images with respect to the aerial images.

  11. ACCURATE ESTIMATION OF ORIENTATION PARAMETERS OF UAV IMAGES THROUGH IMAGE REGISTRATION WITH AERIAL OBLIQUE IMAGERY

    Directory of Open Access Journals (Sweden)

    F. A. Onyango

    2017-05-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) have gained popularity in acquiring geotagged, low cost and high resolution images. However, the images acquired by UAV-borne cameras often have poor georeferencing information, because of the low quality on-board Global Navigation Satellite System (GNSS) receiver. In addition, lightweight UAVs have a limited payload capacity to host a high quality on-board Inertial Measurement Unit (IMU). Thus, orientation parameters of images acquired by UAV-borne cameras may not be very accurate. Poorly georeferenced UAV images can be correctly oriented using accurately oriented airborne images capturing a similar scene by finding correspondences between the images. This is not a trivial task considering the image pairs have huge variations in scale, perspective and illumination conditions. This paper presents a procedure to successfully register UAV and aerial oblique imagery. The proposed procedure implements the use of the AKAZE interest operator for feature extraction in both images. Brute force is implemented to find putative correspondences and later on Lowe’s ratio test (Lowe, 2004) is used to discard a significant number of wrong matches. In order to filter out the remaining mismatches, the putative correspondences are used in the computation of multiple homographies, which aid in the reduction of outliers significantly. In order to increase the number and improve the quality of correspondences, the impact of pre-processing the images using the Wallis filter (Wallis, 1974) is investigated. This paper presents the test results of different scenarios and the respective accuracies compared to a manual registration of the finally computed fundamental and essential matrices that encode the orientation parameters of the UAV images with respect to the aerial images.

  12. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving...... average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre...... function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes...

  13. Accurate dynamic power estimation for CMOS combinational logic circuits with real gate delay model

    Directory of Open Access Journals (Sweden)

    Omnia S. Fadl

    2016-01-01

    Full Text Available Dynamic power estimation is essential in designing VLSI circuits where many parameters are involved but the only circuit parameter that is related to the circuit operation is the nodes’ toggle rate. This paper discusses a deterministic and fast method to estimate the dynamic power consumption for CMOS combinational logic circuits using gate-level descriptions based on the Logic Pictures concept to obtain the circuit nodes’ toggle rate. The delay model for the logic gates is the real-delay model. To validate the results, the method is applied to several circuits and compared against exhaustive, as well as Monte Carlo, simulations. The proposed technique was shown to save up to 96% processing time compared to exhaustive simulation.
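    Once the per-node toggle rates are known, dynamic power follows from the standard CMOS switching-power formula P = sum_i(alpha_i * C_i * Vdd^2 * f). The sketch below uses illustrative node values, not figures from the paper.

```python
# Toggle-rate-based dynamic power: P = sum(alpha_i * C_i * Vdd^2 * f).
# All node values below are illustrative assumptions, not taken from the paper.
toggle_rates = [0.25, 0.10, 0.40, 0.05]        # transitions per clock cycle, per node
capacitances = [2e-15, 3e-15, 1.5e-15, 4e-15]  # node capacitances in farads
vdd, freq = 1.2, 1e9                           # supply voltage (V), clock (Hz)

power = sum(a * c * vdd ** 2 * freq
            for a, c in zip(toggle_rates, capacitances))
print(f"dynamic power = {power * 1e6:.3f} uW")  # prints "dynamic power = 2.304 uW"
```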

  14. Multiple candidates and multiple constraints based accurate depth estimation for multi-view stereo

    Science.gov (United States)

    Zhang, Chao; Zhou, Fugen; Xue, Bindang

    2017-02-01

    In this paper, we propose a depth estimation method for multi-view image sequence. To enhance the accuracy of dense matching and reduce the inaccurate matching which is produced by inaccurate feature description, we select multiple matching points to build candidate matching sets. Then we compute an optimal depth from a candidate matching set which satisfies multiple constraints (epipolar constraint, similarity constraint and depth consistency constraint). To further increase the accuracy of depth estimation, depth consistency constraint of neighbor pixels is used to filter the inaccurate matching. On this basis, in order to get more complete depth map, depth diffusion is performed by neighbor pixels' depth consistency constraint. Through experiments on the benchmark datasets for multiple view stereo, we demonstrate the superiority of proposed method over the state-of-the-art method in terms of accuracy.

  15. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Directory of Open Access Journals (Sweden)

    Craig Costion

    Full Text Available BACKGROUND: Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and the low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. METHODOLOGY/PRINCIPAL FINDINGS: Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation observed in some angiosperm families, which occurs as an inversion that obscures the monophyly of species. CONCLUSIONS/SIGNIFICANCE: We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.

  16. Accurate covariance estimation of galaxy-galaxy weak lensing: limitations of jackknife covariance

    CERN Document Server

    Shirasaki, Masato; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma

    2016-01-01

    We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations. We populate a real catalog of source galaxies into a light-cone simulation realization, simulate the lensing effect on each galaxy, and then identify lensing halos that are considered to host galaxies or clusters of interest. We use the mock catalog to study the error covariance matrix of galaxy-galaxy weak lensing and find that the super-sample covariance (SSC), which arises from density fluctuations with length scales comparable with or greater than a size of survey area, gives a dominant source of the sample variance. We then compare the full covariance with the jackknife (JK) covariance, the method that estimates the covariance from the resamples of the data itself. We show that, although the JK method gives an unbiased estimator of the covariance in the shot noise or Gaussian regime, it always over-estimates the true covariance in the sample variance regime, because the JK covariance turns out to be a...

  17. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research

    Directory of Open Access Journals (Sweden)

    Miguel Angel Luque-Fernandez

    2016-10-01

    Full Text Available Abstract Background In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. Methods We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. Results All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2 versus 21.3 for non-flexible piecewise exponential models). Conclusion We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
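    A quick way to detect overdispersion and apply a quasi-likelihood correction can be sketched for the simplest (intercept-only) case. The data-generating values are illustrative; a full piecewise exponential model would add covariates and follow-up intervals.

```python
import numpy as np

rng = np.random.default_rng(0)
# Overdispersed counts: negative binomial draws, so variance > mean
mu_true = 5.0
y = rng.negative_binomial(n=2, p=2 / (2 + mu_true), size=500)

# Intercept-only Poisson fit: the MLE of the rate is just the sample mean
mu_hat = y.mean()

# Pearson dispersion statistic: ~1 if the Poisson assumption holds, >1 under
# overdispersion (the situation the score test in the paper is designed to flag)
pearson_chi2 = np.sum((y - mu_hat) ** 2 / mu_hat)
dispersion = pearson_chi2 / (len(y) - 1)

# Quasi-likelihood correction: inflate the standard error by sqrt(dispersion)
se_naive = np.sqrt(mu_hat / len(y))
se_quasi = se_naive * np.sqrt(dispersion)
```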

  18. Using aircraft as wind sensors for estimating accurate wind fields for air traffic management applications

    OpenAIRE

    Hernando Guadaño, Laura; Arnaldo Valdes, Rosa Maria; Saez Nieto, Francisco Javier

    2014-01-01

    A study which examines the use of aircraft as wind sensors in a terminal area for real-time wind estimation in order to improve aircraft trajectory prediction is presented in this paper. We describe not only the different sources in the aircraft systems that provide the variables needed to derive the wind velocity, but also the capabilities which allow us to present this information for ATM applications. Based on wind speed samples from aircraft landing at Madrid-Barajas airport, a real-time wind fie...

  19. Rapid and Accurate Estimates of Alloy Phase Diagrams for Design and Assessment

    Science.gov (United States)

    Tan, Teck; Johnson, Duane

    2009-03-01

    Based on first-principles cluster expansion (CE), we obtain rapid but accurate assessments of alloy T vs c phase diagrams from a mean-field theory that conserves sum rules over pair correlations. Such conserving mean-field theories are less complicated than the popular cluster variation method, and better reproduce the Monte Carlo (MC) phase boundaries and Tc for the nearest-neighbor Ising model [1]. The free-energy f(T,c) is a simple analytic expression and its value at fixed T or c is obtained by solving a set of n non-linear coupled equations, where n is determined by the number of sublattices in the groundstate structure and the range of pair correlations included. While MC is ``exact,'' conserving mean-field theories are 10 to 10^3 times faster, allowing for rapid phase diagram construction and dramatically saving computation time. We have generalized the method to account for multibody interactions to enable phase diagram calculations via first-principles CE, and its accuracy is shown vis-à-vis exact MC for several alloy systems. The method is included in our Thermodynamic ToolKit (TTK), available for general use in 2009. [1] V. I. Tokar, Comput. Mater. Sci. 8 (1997), p.8

  20. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    Indian Academy of Sciences (India)

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited due to several error sources. The major error source is the ionosphere. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits the positional accuracy is instrumental biases. Calibration of these biases is particularly important in achieving the CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th order polynomial. The algorithm uses values corresponding to a single station for a one month period and the results confirm the validity of the algorithm. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias error is of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. It is found that the results are consistent over the period.
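    The least-squares structure of such a bias estimation can be sketched with synthetic data: vertical TEC is modelled as a 4th-order polynomial and the instrumental bias enters as an extra unknown, separable because the obliquity (mapping) factor varies with elevation. All values and the simple mapping function are assumptions for illustration, not the GAGAN processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic observations: vertical TEC is a 4th-order polynomial in scaled
# local time; the slant measurement adds a combined instrumental bias.
t = rng.uniform(-1, 1, size=200)              # scaled local time
elev = rng.uniform(np.deg2rad(20), np.deg2rad(85), size=200)
mapping = 1 / np.sin(elev)                    # simple obliquity factor

coeffs_true = np.array([12.0, 3.0, -2.0, 0.5, 1.0])   # TECU, illustrative
bias_true = -3.6                              # satellite-plus-receiver bias

vtec = np.polyval(coeffs_true[::-1], t)       # polyval wants highest power first
stec = mapping * vtec + bias_true + rng.normal(scale=0.1, size=t.size)

# Least squares: columns are mapping * t^k (k = 0..4) plus a constant bias column
A = np.column_stack([mapping * t ** k for k in range(5)] + [np.ones_like(t)])
x_hat, *_ = np.linalg.lstsq(A, stec, rcond=None)
bias_hat = x_hat[-1]
```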

  1. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    Science.gov (United States)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle. We address the localization of biopsy needles during image-guided interventions by splitting the problem into two parts that are solved independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
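The triangulation step can be sketched with standard linear (DLT) triangulation from two views. The projection matrices and the "needle tip" point below are synthetic stand-ins, not the authors' C-arm geometry:

```python
import numpy as np

# Synthetic sketch of the triangulation step: given the needle tip detected in
# two projection images with known 3x4 projection matrices, recover its 3D
# position by linear (DLT) triangulation. Cameras and the point are made up.

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix, returning image coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, u1, u2):
    """Solve the homogeneous system A X = 0 for the 3D point via SVD."""
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]     # right singular vector of smallest singular value
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])        # first view: camera at origin
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])  # second view: translated in x

X_true = np.array([0.2, -0.1, 5.0])                  # "needle tip" in 3D
u1, u2 = project(P1, X_true), project(P2, X_true)
print(np.round(triangulate(P1, P2, u1, u2), 3))
```

With noiseless correspondences the DLT solution is exact; in practice the accuracy degrades as the angular baseline shrinks, which matches the error-vs-angle trend the abstract reports.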

  2. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Science.gov (United States)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using two operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67 ± 34 μm and 108 μm, and angular misfits of 0.15 ± 0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  3. Assessing estimates of radiative forcing for solar geoengineering starts with accurate aerosol radiative properties

    Science.gov (United States)

    Dykema, J. A.; Keith, D.; Keutsch, F. N.

    2016-12-01

    The deliberate modification of Earth's albedo as a complement to mitigation in order to slow climate change brings with it a range of risks. A range of different approaches have been studied, including the injection of aerosol particles into the stratosphere to decrease solar energy input into the climate system. Key side effects of this approach include ozone loss and radiative heating, both of which may produce dynamical changes with further consequences for stratospheric and tropospheric climate. Studies of past volcanic eruptions suggest that sulfate aerosol injection may be capable of achieving a compensating radiative forcing of -1 W m-2 or more, but such injection is also expected to result in loss of stratospheric ozone and significant infrared heating. These problems with sulfate aerosols have motivated the investigation of alternative materials, including high-refractive-index solids. High-refractive-index materials have the potential to scatter more efficiently per unit mass, leading to a reduction in surface area available for heterogeneous chemistry and, depending on the details of absorption, less radiative heating. Fundamentally, assessing these trade-offs requires accurate knowledge of the complex refractive index of the materials being considered over the full range of wavelengths relevant to atmospheric radiative transfer, that is, from the ultraviolet to the far-infrared. Our survey of the relevant literature finds that such measurements are not available for all materials of interest at all wavelengths. We utilize a method developed in astrophysics to fill in spectral gaps, and find that some materials may heat the stratosphere substantially more than was found in previous work. Stratospheric heating can warm the tropical tropopause layer, increasing the flux of water vapor into the stratosphere, with further consequences for atmospheric composition and radiative forcing. We analyze this consequence.

  4. Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation

    Science.gov (United States)

    Vašát, Radim; Kodešová, Radka; Borůvka, Luboš

    2017-07-01

    A myriad of signal pre-processing strategies and multivariate calibration techniques have been explored over the last few decades in an attempt to improve the spectroscopic prediction of soil organic carbon (SOC). Coming up with a novel, more powerful and more accurate predictive approach has therefore become a challenging task. One promising route, however, is to combine several individual predictions into a single final one, following ensemble learning theory. As this approach performs best when it combines inherently different predictive algorithms calibrated with structurally different predictor variables, we tested predictors of two different kinds: 1) reflectance values (or transforms) at each wavelength and 2) absorption feature parameters. Consequently, we applied four different calibration techniques, two per type of predictor: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights assigned to the individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that ensured the best solution among all possible ones was selected. The approach was tested on soil samples taken from the surface horizon of four sites differing in the prevailing soil units. By employing the ensemble predictive model, the prediction accuracy of SOC improved at all four sites. The coefficient of determination in cross-validation (R2cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. Generally, the ensemble model reduced the maximal deviations of predicted vs. observed values relative to the individual predictions, and thus the correlation cloud became thinner, as desired.
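The core ensemble step, a weighted average of structurally different predictions with the weight chosen to maximize accuracy, can be sketched as follows. The "observations" and the two model outputs are simulated placeholders, not the soil data:

```python
import numpy as np

# Minimal sketch of the ensemble step: combine two structurally different
# predictions by a weighted average, picking the weight that maximizes R2.
# The "observations" and the two model outputs are simulated placeholders.

rng = np.random.default_rng(1)
y = rng.normal(2.0, 0.5, 200)               # observed SOC values (synthetic)
pred_a = y + rng.normal(0.0, 0.20, 200)     # e.g. PLSR on reflectance spectra
pred_b = y + rng.normal(0.0, 0.25, 200)     # e.g. random forest on absorption features

def r2(obs, pred):
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Exhaustive grid search over the mixing weight (the paper automates this choice).
weights = np.linspace(0.0, 1.0, 101)
scores = [r2(y, w * pred_a + (1.0 - w) * pred_b) for w in weights]
best_w = weights[int(np.argmax(scores))]
print(best_w, round(max(scores), 3))
```

Because the pure weights 0 and 1 are in the search grid, the ensemble can never score worse than the best individual model on the data used for weight selection; in practice the weights would be chosen in cross-validation, as in the paper.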

  5. [Research on maize multispectral image accurate segmentation and chlorophyll index estimation].

    Science.gov (United States)

    Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e

    2015-01-01

    In order to rapidly acquire maize growth information in the field, a non-destructive method of maize chlorophyll content index measurement was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China; the crop was Zheng-dan 958, planted in an experiment field of about 1 000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically to the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously as the chlorophyll content index. Secondly, after image smoothing using an adaptive smoothing filter, the NIR maize image was selected to segment the maize leaves from the background, because the gray histogram showed a large difference between plant and soil background. The NIR image segmentation algorithm followed preliminary and accurate segmentation steps: (1) The results of the OTSU image segmentation method and a variable threshold algorithm were compared, and the latter proved better for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation, and dilation and erosion were used to optimize the segmented image. (2) A region labeling algorithm was then used to segment corn plants from the soil and weed background with an accuracy of 95.59%. The multi-spectral image of the maize canopy was then accurately segmented in the R, G and B bands separately. Thirdly, image parameters were extracted from the segmented visible and NIR images. The average gray

  6. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    CERN Document Server

    Savanevych, V E; Sokovikova, N S; Bezkrovny, M M; Vavilova, I B; Ivashchenko, Yu M; Elenin, L V; Khlamov, S V; Movsesian, Ia S; Dashkova, A M; Pogorelov, A V

    2015-01-01

    We describe a new iteration method to estimate asteroid coordinates, which is based on the subpixel Gaussian model of a discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The developed method, being more flexible in adapting to any form of the object image, has a high measurement accuracy along with a low computational complexity, due to a maximum-likelihood procedure implemented to obtain the best fit instead of a least-squares method with the Levenberg-Marquardt algorithm for minimisation of the quadratic form. Since 2010, the method has been tested as the basis of our CoLiTec (Collection Light Technology) software, which has been installed at several observatories of the world with the ai...

  7. Accurate parameter estimation for star formation history in galaxies using SDSS spectra

    CERN Document Server

    Richards, Joseph W; Lee, Ann B; Schafer, Chad M

    2009-01-01

    To further our knowledge of the complex physical process of galaxy formation, it is essential that we characterize the formation and evolution of large databases of galaxies. The spectral synthesis STARLIGHT code of Cid Fernandes et al. (2004) was designed for this purpose. Results of STARLIGHT are highly dependent on the choice of input basis of simple stellar population (SSP) spectra. Speed of the code, which uses random walks through the parameter space, scales as the square of the number of basis spectra, making it computationally necessary to choose a small number of SSPs that are coarsely sampled in age and metallicity. In this paper, we develop methods based on diffusion map (Lafon & Lee, 2006) that, for the first time, choose appropriate bases of prototype SSP spectra from a large set of SSP spectra designed to approximate the continuous grid of age and metallicity of SSPs of which galaxies are truly composed. We show that our techniques achieve better accuracy of physical parameter estimation for...

  8. Error Estimation And Accurate Mapping Based ALE Formulation For 3D Simulation Of Friction Stir Welding

    Science.gov (United States)

    Guerdoux, Simon; Fourment, Lionel

    2007-05-01

    An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.

  9. Accurate and robust estimation of phase error and its uncertainty of 50 GHz bandwidth sampling circuit

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper discusses the dependence of the phase error on the 50 GHz bandwidth oscilloscope's sampling circuitry. We define the phase error as the difference between the impulse response of the NTN (nose-to-nose) estimate and the true response of the sampling circuit, and we develop a method to predict the NTN phase response arising from the internal sampling circuitry of the oscilloscope. For the default sampling-circuit configuration that we examine, the phase error is approximately 7.03 at 50 GHz. We study the sensitivity of the oscilloscope's phase response to parametric changes in sampling-circuit component values, and develop procedures to quantify the sensitivity of the phase error to each component and to combinations of components, taking the fractional uncertainty in each of the model parameters to be the same value, 10%. We predict the upper and lower bounds of the phase error; that is, we vary all of the circuit parameters simultaneously so as to increase the phase error, and then vary them all so as to decrease it. Based on a Type B evaluation, this method quantifies the influence of all parameters of the sampling circuit and gives a standard uncertainty of 1.34. This result is obtained here for the first time and has important practical uses: it can be used for phase calibration in 50 GHz bandwidth large-signal network analyzers (LSNAs).

  10. A novel method to obtain accurate length estimates of carnivorous reef fishes from a single video camera

    Directory of Open Access Journals (Sweden)

    Gastón A. Trobbiani

    Full Text Available In recent years, technological advances have enhanced the use of baited underwater video (BUV) to monitor the diversity, abundance, and size composition of fish assemblages. However, attempts to use static single-camera devices to estimate fish length have been limited by large errors arising from the variable distance between the fish and the reference scale included in the scene. In this work, we present a novel, simple method to obtain accurate length estimates of carnivorous fishes using a single downward-facing camera in a baited video station. The distinctive feature is the inclusion of a mirrored surface at the base of the stand that allows the apparent or "naive" length of the fish to be corrected for the distance between the fish and the reference scale. We describe the calibration procedure and compare the performance (accuracy and precision) of this new technique with that of other single static camera methods. Overall, estimates were highly accurate (mean relative error = -0.6%) and precise (mean coefficient of variation = 3.3%), even in the range of those obtained with stereo-video methods.

  11. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    Science.gov (United States)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

    We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low computational complexity, due to the maximum-likelihood procedure that is implemented to obtain the best fit instead of a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of automatically discovering asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
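The central fitting idea, maximum-likelihood estimation of a subpixel Gaussian center from discrete pixel counts, can be sketched as follows. The PSF width, amplitude, background, and Poisson noise model are illustrative assumptions, not CoLiTec's exact model:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative subpixel fit (PSF width, amplitude, background and the Poisson
# noise model are assumptions, not CoLiTec's exact model): pixel counts follow
# a 2D Gaussian plus background, and the center is found by maximum likelihood
# rather than by least squares.

yy, xx = np.mgrid[0:15, 0:15]                       # 15x15 pixel window
true_xc, true_yc, sigma, amp, bg = 7.3, 6.8, 1.5, 200.0, 5.0

def model(xc, yc):
    return bg + amp * np.exp(-((xx - xc) ** 2 + (yy - yc) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(2)
counts = rng.poisson(model(true_xc, true_yc))       # simulated CCD counts

def neg_log_like(p):
    mu = model(p[0], p[1])
    return np.sum(mu - counts * np.log(mu))         # Poisson NLL up to a constant

res = minimize(neg_log_like, x0=[7.0, 7.0], method="Nelder-Mead")
print(np.round(res.x, 2))                           # subpixel center estimate
```

The estimated center lands well inside a single pixel of the true value, which is the point of a subpixel model: the continuous parameters live in a discrete pixel space.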

  12. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Science.gov (United States)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules

  13. Linear-In-The-Parameters Oblique Least Squares (LOLS) Provides More Accurate Estimates of Density-Dependent Survival

    Science.gov (United States)

    Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure

    2016-01-01

    Survival is a fundamental demographic component, and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. When estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of the survival function parameters, while remaining robust to the data features that usually make such regressions numerically unstable. PMID:27936048
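The contrast between model I (vertical) and model II (oblique) regression can be demonstrated on simulated data with errors in both variables. This generic total-least-squares fit is not the authors' iterative two-step estimator, only an illustration of why vertical least squares is biased here:

```python
import numpy as np

# Simulated illustration of the model I vs model II contrast (this generic
# total-least-squares fit is not the authors' iterative two-step estimator):
# with error in both variables, vertical least squares attenuates the slope,
# while an orthogonal fit recovers it.

rng = np.random.default_rng(3)
x_true = np.linspace(0.0, 10.0, 300)
x = x_true + rng.normal(0.0, 1.0, x_true.size)      # error in the predictor too
y = 2.0 * x_true + 1.0 + rng.normal(0.0, 1.0, x_true.size)

slope_ols = np.polyfit(x, y, 1)[0]                  # model I: vertical least squares

# Model II: total least squares via SVD of the centered data matrix; the last
# right singular vector is the normal of the best-fit line.
A = np.column_stack([x - x.mean(), y - y.mean()])
nx, ny = np.linalg.svd(A)[2][-1]
slope_tls = -nx / ny

print(round(slope_ols, 2), round(slope_tls, 2))     # OLS slope is biased below 2
```

The orthogonal fit assumes equal error variances on both axes; when they differ, as in the survival data, the oblique variant weights the residual direction accordingly.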

  14. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Directory of Open Access Journals (Sweden)

    Abel B Minyoo

    2015-12-01

    Full Text Available In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost per dog of $0.47, which is also the price threshold below which the cost of the incentive must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (a randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage, respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives, the average vaccination coverage was below the 70% threshold for eliminating rabies; we discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.
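The transect (mark-re-sight) coverage estimate reduces to a simple binomial proportion: vaccinated dogs wear a collar, so the fraction of collared dogs among those re-sighted estimates coverage. The counts below are hypothetical:

```python
import math

# Hedged sketch of transect (mark-re-sight) coverage estimation with made-up
# counts: the fraction of collared dogs among those re-sighted along transects
# estimates vaccination coverage; a normal-approximation binomial interval
# conveys the precision.

collared_seen, total_seen = 68, 100
coverage = collared_seen / total_seen
se = math.sqrt(coverage * (1.0 - coverage) / total_seen)
lo, hi = coverage - 1.96 * se, coverage + 1.96 * se
print(f"coverage = {coverage:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
# -> coverage = 68%, 95% CI [59%, 77%]
```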

  15. An Accurate Computational Tool for Performance Estimation of FSO Communication Links over Weak to Strong Atmospheric Turbulent Channels

    Directory of Open Access Journals (Sweden)

    Theodore D. Katsilieris

    2017-03-01

    Full Text Available Terrestrial optical wireless communication links have attracted significant research and commercial interest worldwide over the last few years, because they offer very high and secure data-rate transmission with relatively low installation and operational costs, and without the need for licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their performance depends strongly on the atmospheric conditions in the specific area: rain, fog, hail, atmospheric turbulence, etc. Due to the influence of these effects, such a communication system must be studied very carefully, theoretically and numerically, before installation. In this work, we present exact and accurate approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link parameters, the transmitted power, the attenuation due to fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver's end. These fluctuations can be studied accurately with statistical methods; thus, in this work, we use either the lognormal or the gamma-gamma distribution for weak or moderate-to-strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, implement and present a computational tool for the estimation of these systems' performance, taking into account the link parameters and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified by numerical estimation of the appropriate integrals. Finally, using
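For the weak-turbulence (lognormal) case, the outage probability has a simple closed form that can be checked by Monte Carlo. The threshold and log-irradiance variance below are assumed values, not taken from the paper:

```python
import math
import random

# Weak-turbulence sketch with assumed values (not taken from the paper): the
# normalized irradiance I is lognormal with E[I] = 1, i.e. ln I ~ N(-s2/2, s2),
# so the outage probability P(I < I_th) has the closed form below. A Monte
# Carlo estimate serves as a cross-check.

s2 = 0.3          # log-irradiance variance (weak turbulence)
i_th = 0.5        # normalized irradiance threshold for outage

p_closed = 0.5 * (1.0 + math.erf((math.log(i_th) + s2 / 2.0)
                                 / math.sqrt(2.0 * s2)))

random.seed(4)
n = 200_000
hits = sum(math.exp(random.gauss(-s2 / 2.0, math.sqrt(s2))) < i_th
           for _ in range(n))
p_mc = hits / n
print(round(p_closed, 4), round(p_mc, 4))
```

The gamma-gamma case used for stronger turbulence has no elementary closed form, which is why the paper falls back on numerical integration to verify its approximations.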

  16. Highly accurate and efficient self-force computations using time-domain methods: Error estimates, validation, and optimization

    CERN Document Server

    Thornburg, Jonathan

    2010-01-01

    If a small "particle" of mass $\mu M$ (with $\mu \ll 1$) orbits a Schwarzschild or Kerr black hole of mass $M$, the particle is subject to an $O(\mu)$ radiation-reaction "self-force". Here I argue that it's valuable to compute this self-force highly accurately (relative error of $\lesssim 10^{-6}$) and efficiently, and I describe techniques for doing this and for obtaining and validating error estimates for the computation. I use an adaptive-mesh-refinement (AMR) time-domain numerical integration of the perturbation equations in the Barack-Ori mode-sum regularization formalism; this is efficient, yet allows easy generalization to arbitrary particle orbits. I focus on the model problem of a scalar particle in a circular geodesic orbit in Schwarzschild spacetime. The mode-sum formalism gives the self-force as an infinite sum of regularized spherical-harmonic modes $\sum_{\ell=0}^\infty F_{\ell,\mathrm{reg}}$, with $F_{\ell,\mathrm{reg}}$ (and an "internal" error estimate) computed numerically for $\ell \lesssim 30$ and estimated ...

  17. Impact of interfacial high-density water layer on accurate estimation of adsorption free energy by Jarzynski's equality

    Science.gov (United States)

    Zhang, Zhisen; Wu, Tao; Wang, Qi; Pan, Haihua; Tang, Ruikang

    2014-01-01

    The interactions between proteins/peptides and materials are crucial to research and development in many biomedical engineering fields, and the energetics of such interactions are key in the evaluation of new proteins/peptides and materials. Much recent research has focused on the quality of free-energy profiles obtained with Jarzynski's equality, a relation widely used in biosystems. In the present work, considerable discrepancies were observed between the results obtained by Jarzynski's equality and those derived by umbrella sampling in biomaterial-water model systems. Detailed analyses confirm that such discrepancies arise only when the target molecule moves within the high-density water layer on a material surface. A hybrid scheme was therefore adopted based on this observation. The agreement between the results of the hybrid scheme and umbrella sampling confirms the original observation, and points to a fast and accurate approach for estimating the adsorption free energy of large biomaterial interfacial systems.
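Jarzynski's equality can be illustrated with synthetic work values: for a Gaussian work distribution the exponential average has a known analytic limit, which makes a convenient self-check. All parameters below are arbitrary:

```python
import numpy as np

# Synthetic illustration of Jarzynski's equality, exp(-dF/kT) = <exp(-W/kT)>,
# with an assumed Gaussian work distribution: for Gaussian W the identity
# implies dF = <W> - var(W)/(2 kT), so drawing W with mean dF + s^2/(2 kT)
# should make the exponential-average estimator return dF.

kT = 1.0
dF_true, s = 2.0, 0.5                       # target free energy, work spread
rng = np.random.default_rng(5)
W = rng.normal(dF_true + s**2 / (2.0 * kT), s, 100_000)  # dissipation shifts <W> above dF

dF_est = -kT * np.log(np.mean(np.exp(-W / kT)))
print(round(dF_est, 3))                     # close to 2.0
```

When work fluctuations are large, the exponential average is dominated by rare low-work trajectories and converges poorly, which is consistent with the discrepancies the abstract reports near the high-density interfacial water layer.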

  18. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    Directory of Open Access Journals (Sweden)

    Kostelich Eric J

    2011-12-01

    Full Text Available Abstract Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck).
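The ensemble Kalman filter update at the heart of such data assimilation can be sketched in its simplest scalar, stochastic form. This is a much simplified relative of the LETKF, with illustrative numbers:

```python
import numpy as np

# Minimal stochastic ensemble Kalman filter update for a scalar state (a much
# simplified relative of the LETKF; all numbers are illustrative): a biased,
# uncertain forecast ensemble is pulled toward a new observation using the
# ensemble-estimated covariance.

rng = np.random.default_rng(6)
n_ens, x_true, obs_err = 50, 3.0, 0.3
forecast = rng.normal(2.0, 1.0, n_ens)          # forecast ensemble (biased low)
obs = x_true + rng.normal(0.0, obs_err)         # noisy observation

P_f = np.var(forecast, ddof=1)                  # ensemble forecast variance
K = P_f / (P_f + obs_err**2)                    # Kalman gain (identity obs operator)

# Perturbed-observation update: each member assimilates a jittered observation
# so that the analysis ensemble keeps a statistically consistent spread.
perturbed = obs + rng.normal(0.0, obs_err, n_ens)
analysis = forecast + K * (perturbed - forecast)
print(round(forecast.mean(), 2), round(analysis.mean(), 2))
```

The analysis mean moves toward the observation and the ensemble spread shrinks, which is the mechanism that lets the filter "shadow" the true state through repeated forecast/update cycles.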

  19. Bisulfite-based epityping on pooled genomic DNA provides an accurate estimate of average group DNA methylation

    Directory of Open Access Journals (Sweden)

    Docherty Sophia J

    2009-03-01

    Full Text Available Abstract Background DNA methylation plays a vital role in normal cellular function, with aberrant methylation signatures being implicated in a growing number of human pathologies and complex human traits. Methods based on the modification of genomic DNA with sodium bisulfite are considered the 'gold-standard' for DNA methylation profiling on genomic DNA; however, they require relatively large amounts of DNA and may be prohibitively expensive when used on the large sample sizes necessary to detect small effects. We propose that a high-throughput DNA pooling approach will facilitate the use of emerging methylomic profiling techniques in large samples. Results Compared with data generated from 89 individual samples, our analysis of 205 CpG sites spanning nine independent regions of the genome demonstrates that DNA pools can be used to provide an accurate and reliable quantitative estimate of average group DNA methylation. Comparison of data generated from the pooled DNA samples with results averaged across the individual samples comprising each pool revealed highly significant correlations for individual CpG sites across all nine regions, with an average overall correlation across all regions and pools of 0.95 (95% bootstrapped confidence intervals: 0.94 to 0.96). Conclusion In this study we demonstrate the validity of using pooled DNA samples to accurately assess group DNA methylation averages. Such an approach can be readily applied to the assessment of disease phenotypes, reducing the time, cost and amount of DNA starting material required for large-scale epigenetic analyses.

  20. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    Directory of Open Access Journals (Sweden)

    M. Montes-Hugo

    2014-06-01

    Full Text Available The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC; Lee's quasi-analytical algorithm, QAA; and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations made by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error, as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m2 mg−1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.
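The chlorophyll conversion mentioned at the end is a one-line calculation: divide the retrieved phytoplankton absorption by the fixed SeaDAS cross-section. A minimal sketch (the input retrieval value is hypothetical):

```python
# SeaDAS default chlorophyll-specific absorption cross-section at 443 nm,
# a*ph(443) = 0.056 m^2 mg^-1, as quoted in the abstract.
A_STAR_PH_443 = 0.056  # m^2 mg^-1

def chl_from_aph(aph_443):
    """Chlorophyll (mg m^-3) from phytoplankton absorption at 443 nm (m^-1)."""
    return aph_443 / A_STAR_PH_443

chl = chl_from_aph(0.028)  # hypothetical aph(443) retrieval -> 0.5 mg m^-3
```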

  1. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    Science.gov (United States)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites: within a thicker-bedded succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
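Hydrocarbon pore thickness is conventionally the thickness-weighted sum of porosity times hydrocarbon saturation over net reservoir intervals. The abstract does not spell out its formula, so the sketch below uses the standard petrophysical definition with illustrative layer values, not Brae Field data:

```python
# Standard definition: HPT = sum_i h_i * phi_i * (1 - Sw_i), summed over
# net reservoir layers. Layer values below are illustrative only.
layers = [
    # (thickness_ft, porosity, water_saturation)
    (120.0, 0.18, 0.35),
    (60.0, 0.15, 0.45),
    (25.0, 0.12, 0.60),  # a thin-bedded interval easily missed by cutoffs
]

hpt = sum(h * phi * (1.0 - sw) for h, phi, sw in layers)  # feet
```

Dropping the last (thin-bedded) layer from the sum is exactly the kind of omission the abstract argues leads to underestimated OOIP.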

  2. The absolute lymphocyte count accurately estimates CD4 counts in HIV-infected adults with virologic suppression and immune reconstitution

    Directory of Open Access Journals (Sweden)

    Barnaby Young

    2014-11-01

    Full Text Available Introduction: The clinical value of monitoring CD4 counts in immune-reconstituted, virologically suppressed HIV-infected patients is limited. We investigated if absolute lymphocyte counts (ALC) from an automated blood counting machine could accurately estimate CD4 counts. Materials and Methods: CD4 counts, ALC and HIV viral load (VL) were extracted from an electronic laboratory database for all patients in HIV care at the Communicable Diseases Centre, Tan Tock Seng Hospital, Singapore (2008–13). Virologic suppression was defined as consecutive suppressed HIV VLs with CD4 counts >300 cells/mm3. CD4 counts were estimated using the CD4% from the first value >300 and an ALC 181–540 days later. Results: A total of 1215 periods of virologic suppression were identified from 1183 patients, with 2227 paired CD4-ALCs available for analysis. 98.3% of CD4 estimates were within 50% of the actual value, 83.3% within 25%, and 40.5% within 10%. The error pattern was approximately symmetrically distributed around a mean of −6.5%, but significantly peaked and with mild positive skew (kurtosis 4.45, skewness 1.07). Causes for these errors were explored. Variability between lymphocyte counts measured by ALC and flow cytometry did not follow an apparent pattern, and contributed 32% of the total error (median absolute error 5.5%, IQR 2.6–9.3). The CD4% estimate was significantly lower than the actual value (t-test, p<0.0001). The magnitude of this difference was greater for lower values, and above 25% there was no significant difference. Precision of the CD4 estimate was similar as baseline CD4% increased; however, accuracy improved significantly: from a median 16% underestimation to 0% as baseline CD4% increased from 12 to 30. Above a CD4% baseline of 25, estimates of CD4 were within 25% of the actual value 90.2% of the time, with a median 2% underestimation. A robust (bisquare) linear regression model was developed to correct for the rise in CD4% with time, when baseline was 14–24
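The estimator described reduces to simple arithmetic: carry forward the last flow-cytometry CD4% and apply it to a later automated absolute lymphocyte count. A minimal sketch with hypothetical values:

```python
# CD4 estimate = CD4% (from the last flow-cytometry result) x later ALC.
# Input values are illustrative, not from the study.
def estimate_cd4(cd4_percent, alc_cells_per_mm3):
    """Estimated absolute CD4 count (cells/mm^3) from CD4% and ALC."""
    return cd4_percent / 100.0 * alc_cells_per_mm3

est = estimate_cd4(28.0, 1800.0)  # 28% of 1800 = 504 cells/mm^3
```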

  3. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

    Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records, as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter, with the intent to estimate the number of patients retained on ART. Information on the time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  4. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    Directory of Open Access Journals (Sweden)

    Corina J. Logan

    2015-06-01

    Full Text Available There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder, for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.

  5. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse in space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  6. Accurate and rapid error estimation on global gravitational field from current GRACE and future GRACE Follow-On missions

    Institute of Scientific and Technical Information of China (English)

    Zheng Wei; Hsu Hou-Tse; Zhong Min; Yun Mei-Juan

    2009-01-01

    Firstly, the new combined error model of cumulative geoid height influenced by four error sources, including the inter-satellite range-rate of an interferometric laser (K-band) ranging system, the orbital position and velocity of a global positioning system (GPS) receiver, and the non-conservative force of an accelerometer, is established from the perspective of the power spectrum principle in physics using the semi-analytical approach. Secondly, the accuracy of the global gravitational field is accurately and rapidly estimated based on the combined error model; the cumulative geoid height error is 1.985×10^-1 m at degree 120 based on GRACE Level 1B measured observation errors of the year 2007 published by the US Jet Propulsion Laboratory (JPL), and the cumulative geoid height error is 5.825×10^-2 m at degree 360 using a GRACE Follow-On orbital altitude of 250 km and an inter-satellite range of 50 km. The matching relationship of accuracy indexes from the GRACE Follow-On key payloads is brought forward, and the dependability of the combined error model is validated. Finally, the feasibility of high-accuracy and high-resolution global gravitational field estimation from GRACE Follow-On is demonstrated based on different satellite orbital altitudes.

  7. Accurate recovery of 4D left ventricular deformations using volumetric B-splines incorporating phase based displacement estimates

    Science.gov (United States)

    Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.

    2006-03-01

    In this paper, an improved framework for estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and are used within a 4-D B-Spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with increased knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps which are local measures of non-rigid deformation. Lagrangian strains in simulated data are derived which show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.

  8. Challenges associated with drunk driving measurement: combining police and self-reported data to estimate an accurate prevalence in Brazil.

    Science.gov (United States)

    Sousa, Tanara; Lunnen, Jeffrey C; Gonçalves, Veralice; Schmitz, Aurinez; Pasa, Graciela; Bastos, Tamires; Sripad, Pooja; Chandran, Aruna; Pechansky, Flavio

    2013-12-01

    Drunk driving is an important risk factor for road traffic crashes, injuries and deaths. After June 2008, all drivers in Brazil were subject to a "Zero Tolerance Law" with a set breath alcohol concentration limit of 0.1 mg/L of air. However, a loophole in this law enabled drivers to refuse breath or blood alcohol testing as it may self-incriminate. The reported prevalence of drunk driving is therefore likely a gross underestimate in many cities. Our aims were to compare the prevalence of drunk driving gathered from police reports to the prevalence gathered from self-reported questionnaires administered at police sobriety roadblocks in two Brazilian capital cities, and to estimate a more accurate prevalence of drunk driving utilizing three correction techniques based upon information from those questionnaires. In August 2011 and January-February 2012, researchers from the Centre for Drug and Alcohol Research at the Universidade Federal do Rio Grande do Sul administered a roadside interview on drunk driving practices to 805 voluntary participants in the Brazilian capital cities of Palmas and Teresina. Three techniques, which include measures such as the number of persons reporting alcohol consumption in the last six hours but who had refused breath testing, were used to estimate the prevalence of drunk driving. The prevalence of persons testing positive for alcohol on their breath was 8.8% and 5.0% in Palmas and Teresina, respectively. Utilizing a correction technique, we calculated that a more accurate prevalence in these sites may be as high as 28.2% and 28.7%. In both cities, about 60% of drivers who self-reported having drunk within six hours of being stopped by the police either refused to perform breathalyser testing, fled the sobriety roadblock, or were not offered the test, compared to about 30% of drivers who said they had not been drinking.
Despite the reduction of the legal limit for drunk driving stipulated by the "Zero Tolerance Law," loopholes in the legislation permit many
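One correction technique in the spirit of the abstract treats drivers who reported recent drinking but avoided the breathalyser as likely positives. The counts below are illustrative; only the 8.8% naive prevalence is taken from the abstract:

```python
# Illustrative counts for one roadblock sample (not the study's raw data).
tested_positive = 44
tested_total = 500
refused_but_reported_drinking = 110  # refused, fled, or not offered the test

naive_prevalence = tested_positive / tested_total
# Correction: count self-reported recent drinkers who avoided testing as
# positives, enlarging both numerator and denominator accordingly.
corrected_prevalence = (tested_positive + refused_but_reported_drinking) / (
    tested_total + refused_but_reported_drinking
)
```

The correction can only raise the estimate, which is how an 8.8% breath-test prevalence can translate into a corrected figure in the high twenties.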

  9. A multilevel excess hazard model to estimate net survival on hierarchical data allowing for non-linear and non-proportional effects of covariates.

    Science.gov (United States)

    Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien

    2016-08-15

    The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.
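The core excess-hazard idea, observed hazard = known expected (population) hazard + excess hazard, can be illustrated with a deliberately minimal model: a constant excess hazard fitted by maximum likelihood, with none of the paper's random effects, splines, or covariates. All numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Simulate survival times whose total hazard is population + excess.
lam_pop, lam_excess_true = 0.02, 0.08   # per year, illustrative
n = 5000
t = rng.exponential(1.0 / (lam_pop + lam_excess_true), n)
censor = 10.0
d = (t < censor).astype(float)          # 1 = death observed, 0 = censored
t = np.minimum(t, censor)

def neg_loglik(lam_excess):
    # Exponential likelihood with total hazard lam_pop + lam_excess;
    # lam_pop is treated as known (from life tables, in real applications).
    lam = lam_pop + lam_excess
    return -(np.sum(d * np.log(lam)) - lam * np.sum(t))

fit = minimize_scalar(neg_loglik, bounds=(1e-6, 1.0), method="bounded")
lam_excess_hat = fit.x
# Net survival at 5 years under the fitted excess hazard alone
net_surv_5y = np.exp(-lam_excess_hat * 5.0)
```

Because only the sum of the hazards drives the data, the known population hazard is what identifies the excess component; the real model lets both vary with time and covariates, plus a cluster-level random effect.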

  10. A relationship to estimate the excess entropy of mixing: Application in silicate solid solutions and binary alloys.

    Science.gov (United States)

    Benisek, Artur; Dachs, Edgar

    2012-06-25

    The paper presents new calorimetric data on the excess heat capacity and vibrational entropy of mixing of Pt-Rh and Ag-Pd alloys. The results of the latter alloy are compared to those obtained by calculations using the density functional theory. The extent of the excess vibrational entropy of mixing of these binaries and of some already investigated binary mixtures is related to the differences of the end-member volumes and the end-member bulk moduli. These quantities are used to roughly represent the changes of the bond length and stiffness in the substituted and substituent polyhedra due to compositional changes, which are assumed to be the important factors for the non-ideal vibrational behaviour in solid solutions.

  11. Improving full-cardiac cycle strain estimation from tagged CMR by accurate modeling of 3D image appearance characteristics

    Directory of Open Access Journals (Sweden)

    Matt Nitzken

    2016-03-01

    Full Text Available To improve tagged cardiac magnetic resonance (CMR) image analysis, we propose a 3D (2D space + 1D time) energy minimization framework, based on learning first- and second-order visual appearance models from voxel intensities. The former model approximates the marginal empirical distribution of intensities with two linear combinations of discrete Gaussians (LCDG). The second-order model considers the image as a sample from a translation–rotation invariant 3D Markov–Gibbs random field (MGRF) with multiple pairwise spatiotemporal interactions within and between adjacent temporal frames. Abilities of the framework to accurately recover noise-corrupted strain slopes were experimentally evaluated and validated on 3D geometric phantoms and independently on in vivo data. In multiple noise and motion conditions, the proposed method outperformed comparative image filtering in restoring strain curves and reliably improved HARP strain tracking during the entirety of the cardiac cycle. According to these results, our framework can augment popular spectral domain techniques, such as HARP, by optimizing the spectral domain characteristics and thereby providing more reliable estimates of strain parameters.

  12. Excessive Daytime Sleepiness

    OpenAIRE

    Yavuz Selvi; Ali Kandeger; Ayca Asena Sayin

    2016-01-01

    Excessive daytime sleepiness is one of the most common sleep-related patient symptoms, with prevalence in the community estimated to be as high as 18%. Patients with excessive daytime sleepiness may exhibit life threatening road and work accidents, social maladjustment, decreased academic and occupational performance and have poorer health than comparable adults. Thus, excessive daytime sleepiness is a serious condition that requires investigation, diagnosis and treatment primarily. As with ...

  13. Excessive somnolence

    Directory of Open Access Journals (Sweden)

    Stella Tavares

    Full Text Available Excessive somnolence can be quite an incapacitating manifestation, and is frequently neglected by physicians and patients. This article reviews the determinant factors, the evaluation and quantification of diurnal somnolence, and the description and treatment of the main causes of excessive somnolence.

  14. Excessive Daytime Sleepiness

    Directory of Open Access Journals (Sweden)

    Yavuz Selvi

    2016-06-01

    Full Text Available Excessive daytime sleepiness is one of the most common sleep-related patient symptoms, with prevalence in the community estimated to be as high as 18%. Patients with excessive daytime sleepiness may exhibit life-threatening road and work accidents, social maladjustment, decreased academic and occupational performance, and have poorer health than comparable adults. Thus, excessive daytime sleepiness is a serious condition that requires investigation, diagnosis and treatment primarily. As with most medical conditions, evaluation of excessive daytime sleepiness begins with a precise history, and various objective and subjective tools have also been developed to assess excessive daytime sleepiness. The most common causes of excessive daytime sleepiness are insufficient sleep hygiene, chronic sleep deprivation, medical and psychiatric conditions and sleep disorders, such as obstructive sleep apnea, medications, and narcolepsy. Treatment options should address underlying contributors and promote sleep quantity by ensuring good sleep hygiene. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(2): 114-132]

  15. On the Specification of the Gravity Model of Trade: Zeros, Excess Zeros and Zero-Inflated Estimation

    NARCIS (Netherlands)

    M.J. Burger (Martijn); F.G. van Oort (Frank); G.J.M. Linders (Gert-Jan)

    2009-01-01

    Conventional studies of bilateral trade patterns specify a log-normal gravity equation for empirical estimation. However, the log-normal gravity equation suffers from three problems: the bias created by the logarithmic transformation, the failure of the homoscedasticity assumption, and the treatment of zero-valued trade flows.

  16. Estimating the economic value of ice climbing in Hyalite Canyon: An application of travel cost count data models that account for excess zeros.

    Science.gov (United States)

    Anderson, D Mark

    2010-01-01

    Recently, the sport of ice climbing has seen a dramatic increase in popularity. This paper uses the travel cost method to estimate the demand for ice climbing in Hyalite Canyon, Montana, one of the premier ice climbing venues in North America. Access to Hyalite and other ice climbing destinations has been put at risk due to liability issues, public land management agendas, and winter road conditions. To this point, there has been no analysis of the economic benefits of ice climbing. In addition to the novel outdoor recreation application, this study applies econometric methods designed to deal with "excess zeros" in the data. Depending upon model specification, per person per trip values are estimated to be in the range of $76 to $135.
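A travel-cost count model with excess zeros can be sketched as a zero-inflated Poisson fitted by maximum likelihood; in this model family, per-person per-trip consumer surplus is −1/β_cost. This is a generic illustration on simulated data, not the paper's actual specification or estimates:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulated travel-cost data: E[trips | participant] = exp(b0 + b1 * cost),
# with a 40% share of structural zeros (non-participants).
n = 3000
cost = rng.uniform(10.0, 200.0, n)
pi_true, b0_true, b1_true = 0.4, 2.0, -0.01
mu_true = np.exp(b0_true + b1_true * cost)
trips = rng.poisson(mu_true) * (rng.uniform(size=n) > pi_true)

def neg_loglik(params):
    # Zero-inflated Poisson log-likelihood; the log(y!) term is constant
    # in the parameters and omitted.
    logit_pi, b0, b1 = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    mu = np.exp(b0 + b1 * cost)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-mu))
    ll_pos = np.log(1.0 - pi) - mu + trips * np.log(mu)
    return -np.sum(np.where(trips == 0, ll_zero, ll_pos))

fit = minimize(neg_loglik, x0=[0.0, 1.0, -0.005], method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-8, "fatol": 1e-8})
b1_hat = fit.x[2]
per_trip_surplus = -1.0 / b1_hat   # dollars per trip in this model family
```

The zero-inflation component absorbs the "excess zeros" from people who would never visit regardless of cost, so the cost coefficient (and hence the surplus) is estimated from the participating population.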

  17. Estimation of excess mortality due to long-term exposure to PM2.5 in Japan using a high-resolution model for present and future scenarios

    Science.gov (United States)

    Goto, Daisuke; Ueda, Kayo; Ng, Chris Fook Sheng; Takami, Akinori; Ariga, Toshinori; Matsuhashi, Keisuke; Nakajima, Teruyuki

    2016-09-01

    Particulate matter with a diameter of less than 2.5 μm, known as PM2.5, can affect human health, especially in elderly people. Because of the imminent aging of society in the near future in most developed countries, the human health impacts of PM2.5 must be evaluated. In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a high-resolution stretched grid system (∼10 km for the high-resolution model, HRM) for the present (the year 2000) and the future (the year 2030, as proposed by the Representative Concentration Pathway 4.5, RCP4.5). We also used the same model with a low-resolution uniform grid system (∼100 km for the low-resolution model, LRM). These calculations were conducted by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 among the elderly (over 65 years old) based on different minimum PM2.5 concentration (MINPM) levels to account for uncertainty, using the simulated PM2.5 distributions to express the health effect as a concentration-response function. As a result, we estimated the excess mortality for all of Japan to be 31,300 (95% confidence interval: 20,700 to 42,600) people in 2000 and 28,600 (95% confidence interval: 19,000 to 38,700) people in 2030 using the HRM with a MINPM of 5.8 μg/m3. In contrast, the LRM resulted in underestimates of approximately 30% (for PM2.5 concentrations in 2000 and 2030), approximately 60% (for excess mortality in 2000) and approximately 90% (for excess mortality in 2030) compared to the HRM results. We also found that the uncertainty in the MINPM value, especially for low PM2.5 concentrations in the future (2030), can cause large variability in the estimates, ranging from 0 (MINPM of 15 μg/m3 in both HRM and LRM) to 95,000 (MINPM of 0 μg/m3 in HRM) people.
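Excess-mortality estimates of this kind typically combine a log-linear concentration-response function with a baseline mortality rate and exposed population, zeroing out the effect below the MINPM threshold. A sketch with illustrative coefficients; only the 5.8 μg/m3 threshold is taken from the abstract:

```python
import numpy as np

# Attributable mortality with a log-linear concentration-response function:
# RR(C) = exp(beta * max(C - MINPM, 0)), excess = y0 * pop * (1 - 1/RR).
beta = 0.006    # per ug/m^3, hypothetical CRF slope
minpm = 5.8     # ug/m^3, counterfactual threshold (from the abstract)
y0 = 0.03       # baseline annual mortality rate, ages 65+, illustrative
pop = 1.0e6     # exposed elderly population, illustrative

def excess_deaths(conc):
    rr = np.exp(beta * np.maximum(conc - minpm, 0.0))
    return y0 * pop * (1.0 - 1.0 / rr)

deaths = excess_deaths(np.array([4.0, 12.0, 20.0]))
# below MINPM the excess is zero; it grows with concentration
```

This structure makes the MINPM sensitivity in the abstract intuitive: lowering the threshold exposes more of the concentration range to a nonzero relative risk, so the estimate can swing from zero to very large values.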

  18. Impact of measurement error in radon exposure on the estimated excess relative risk of lung cancer death in a simulated study based on the French Uranium Miners' Cohort.

    Science.gov (United States)

    Allodji, Rodrigue S; Leuraud, Klervi; Thiébaut, Anne C M; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques

    2012-05-01

    Measurement error (ME) can lead to bias in the analysis of epidemiologic studies. Here a simulation study is described that is based on data from the French Uranium Miners' Cohort and that was conducted to assess the effect of ME on the estimated excess relative risk (ERR) of lung cancer death associated with radon exposure. Starting from a scenario without any ME, data were generated containing successively Berkson or classical ME depending on time periods, to reflect changes in the measurement of exposure to radon ((222)Rn) and its decay products over time in this cohort. Results indicate that ME attenuated the level of association with radon exposure, with a negative bias percentage on the order of 60% on the ERR estimate. Sensitivity analyses showed the consequences of specific ME characteristics (type, size, structure, and distribution) on the ERR estimates. In the future, it appears important to correct for ME upon analyzing cohorts such as this one to decrease bias in estimates of the ERR of adverse events associated with exposure to ionizing radiation.
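The contrast between the two error types is easy to demonstrate in a linear excess-risk setting: classical error (noise added to the true exposure) attenuates the slope toward zero, while pure Berkson error (true exposure scattering around the assigned value) leaves it roughly unbiased. A simulation sketch, not the cohort's data:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200_000
beta = 0.5                                # true linear risk coefficient
x_true = rng.exponential(1.0, n)
y = beta * x_true + rng.normal(0.0, 0.5, n)

def slope(x, y):
    """Least-squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x)

# Classical error: observed exposure = true + noise -> attenuation
x_classical = x_true + rng.normal(0.0, 1.0, n)
b_classical = slope(x_classical, y)       # biased toward zero (about beta/2 here)

# Berkson error: true exposure scatters around the assigned (group) value
x_assigned = rng.exponential(1.0, n)
x_true_berkson = x_assigned + rng.normal(0.0, 1.0, n)
y_berkson = beta * x_true_berkson + rng.normal(0.0, 0.5, n)
b_berkson = slope(x_assigned, y_berkson)  # approximately unbiased
```

With equal error and exposure variances, classical error halves the estimated coefficient, which is the flavor of the roughly 60% attenuation of the ERR reported in the abstract (there, a mix of both error types over different periods).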

  19. Methodological extensions of meta-analysis with excess relative risk estimates: application to risk of second malignant neoplasms among childhood cancer survivors treated with radiotherapy

    Science.gov (United States)

    Doi, Kazutaka; Mieno, Makiko N.; Shimada, Yoshiya; Yonehara, Hidenori; Yoshinaga, Shinji

    2014-01-01

    Although radiotherapy is recognized as an established risk factor for second malignant neoplasms (SMNs), the dose response of SMNs following radiotherapy has not been well characterized. In our previous meta-analysis of the risks of SMNs occurring among children who have received radiotherapy, the small number of eligible studies precluded a detailed evaluation. Therefore, to increase the number of eligible studies, we developed a method of calculating excess relative risk (ERR) per Gy estimates from studies for which the relative risk estimates for several dose categories were available. Comparing the calculated ERR with that described in several original papers validated the proposed method. This enabled us to increase the number of studies, which we used to conduct a meta-analysis. The overall ERR per Gy estimate of radiotherapy over 26 relevant studies was 0.60 (95%CI: 0.30–1.20), which is smaller than the corresponding estimate for atomic bomb survivors exposed to radiation as young children (1.7; 95% CI: 1.1–2.5). A significant decrease in ERR per Gy with increase in age at exposure (0.85 times per annual increase) was observed in the meta-regression. Heterogeneity was suggested by Cochran's Q statistic (P < 0.001), which may be partly accounted for by age at exposure. PMID:25037101
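The two steps described, converting categorical relative risks into a per-study ERR per Gy and then pooling across studies, can be sketched with inverse-variance weighting. The study-level numbers below are hypothetical, not from the meta-analysis:

```python
import numpy as np

def err_per_gy(doses, rrs, rr_ses):
    """Per-study ERR/Gy: weighted least squares of (RR - 1) on dose,
    through the origin, with inverse-variance weights."""
    w = 1.0 / np.asarray(rr_ses) ** 2
    d = np.asarray(doses, dtype=float)
    excess = np.asarray(rrs, dtype=float) - 1.0
    est = np.sum(w * d * excess) / np.sum(w * d**2)
    se = np.sqrt(1.0 / np.sum(w * d**2))
    return est, se

# two hypothetical studies reporting RRs by dose category (Gy)
e1, s1 = err_per_gy([1.0, 5.0, 15.0], [1.5, 4.0, 10.0], [0.3, 0.8, 2.0])
e2, s2 = err_per_gy([2.0, 10.0], [2.0, 8.0], [0.4, 1.5])

# fixed-effect pooled ERR/Gy across studies
w = np.array([1.0 / s1**2, 1.0 / s2**2])
pooled = np.sum(w * np.array([e1, e2])) / np.sum(w)
```

A real analysis would also need to handle the correlation induced by a shared reference category and, as the paper does, test for heterogeneity before settling on a fixed-effect pool.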

  20. A strategy for an accurate estimation of the basal permittivity in the Martian North Polar Layered Deposits

    CERN Document Server

    Lauro, S E; Pettinelli, E; Soldovieri, F; Cantini, F; Rossi, A P; Orosei, R

    2016-01-01

    The paper deals with the investigation of the Mars subsurface by means of data collected by the Mars Advanced Radar for Subsurface and Ionosphere Sounding, working at frequencies of a few MHz. A data processing strategy, which combines a simple inversion model and an accurate procedure for data selection, is presented. This strategy mitigates the theoretical and practical difficulties of the inverse problem arising from the inaccurate knowledge of the parameters regarding both the scenario under investigation and the radiated electromagnetic field impinging on the Mars surface. The results presented in this paper show that it is possible to reliably retrieve the electromagnetic properties of deeper structures if such a strategy is accurately applied. An example is given here, where the analysis of the data collected on Gemina Lingula, a region of the North Polar Layered Deposits, allowed us to retrieve permittivity values for the basal unit in agreement with those usually associated with terrestrial basalts.

  1. Counseling for fetal macrosomia: an estimated fetal weight of 4,000 g is excessively low.

    Science.gov (United States)

    Peleg, David; Warsof, Steven; Wolf, Maya Frank; Perlitz, Yuri; Shachar, Inbar Ben

    2015-01-01

    Because of the known complications of fetal macrosomia, our hospital's policy has been to discuss the risks of shoulder dystocia and cesarean section (CS) with mothers with a sonographic estimated fetal weight (SEFW) ≥ 4,000 g at term. The present study was performed to determine the effect of this policy on CS rates and pregnancy outcome. We examined the pregnancy outcomes of the macrosomic (≥ 4,000 g) neonates in two cohorts of nondiabetic low-risk women at term without preexisting indications for cesarean: (1) SEFW ≥ 4,000 g (correctly suspected macrosomia) and (2) SEFW < 4,000 g (unsuspected macrosomia). There were 238 neonates in the correctly suspected group and 205 neonates in the unsuspected macrosomia group, respectively. Vaginal delivery was accomplished in 52.1% of the suspected group and 90.7% of the unsuspected group, respectively; cesarean delivery was markedly more frequent when macrosomia was correctly suspected. The policy of discussing the risk of macrosomia with women with an SEFW ≥ 4,000 g is not justified. A higher SEFW to trigger counseling for shoulder dystocia and CS, more consistent with American College of Obstetrics and Gynecology (ACOG) guidelines, should be considered.

  2. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    Directory of Open Access Journals (Sweden)

    Xuebing Yuan

    2015-05-01

    Full Text Available Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time due to drift. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer, and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path.
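A filter like the one described maintains the device orientation as a unit quaternion, and the heading is the yaw angle recovered from that quaternion. A minimal sketch of that final extraction step (generic ZYX Euler convention; this is an illustrative assumption, not the authors' implementation):

```python
import math

def quaternion_to_heading(w, x, y, z):
    """Extract the heading (yaw, rotation about the vertical axis) in degrees
    from a unit quaternion (w, x, y, z), using the ZYX Euler convention."""
    # Normalize defensively so slightly denormalized filter output still works.
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    yaw = math.atan2(2.0 * (w*z + x*y), 1.0 - 2.0 * (y*y + z*z))
    return math.degrees(yaw)

# A rotation of 90 degrees about the vertical (z) axis:
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(round(quaternion_to_heading(*q), 1))  # → 90.0
```

In a UKF pipeline this conversion is applied to the filtered quaternion at each time step to produce the heading track that is compared against the reference path.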

  3. An Improved Method Based on Universal Ridge Estimate for the Excess Shrinkage of Ridge Estimate

    Institute of Scientific and Technical Information of China (English)

    刘文卿

    2011-01-01

    Ridge estimation is an effective method for handling multicollinearity in multiple linear regression, and it is a biased shrinkage estimator. Compared with the ordinary least squares estimator, the ridge estimator reduces the mean squared error of the parameter estimates but increases the residual sum of squares, so the fit becomes worse. This paper proposes an improved method, based on the universal ridge estimate, for correcting the excess shrinkage of the ridge estimate. The method improves the goodness of fit and reduces the increase in the residual sum of squares relative to the ordinary ridge estimate.
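The trade-off described above follows directly from the ridge estimator β̂(k) = (XᵀX + kI)⁻¹Xᵀy: any k > 0 shrinks the coefficients, and the residual sum of squares cannot fall below the OLS minimum. A small sketch on synthetic collinear data (illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two strongly collinear predictors: the textbook setting for ridge regression.
x1 = rng.normal(size=50)
x2 = x1 + rng.normal(scale=0.01, size=50)
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.1, size=50)

def ridge(X, y, k):
    """Ridge estimator beta(k) = (X'X + kI)^-1 X'y; k = 0 gives OLS."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

def rss(X, y, beta):
    """Residual sum of squares for a given coefficient vector."""
    r = y - X @ beta
    return float(r @ r)

b_ols, b_ridge = ridge(X, y, 0.0), ridge(X, y, 1.0)
# Shrinkage: the ridge coefficients have a smaller norm than OLS...
assert np.linalg.norm(b_ridge) < np.linalg.norm(b_ols)
# ...at the price of a residual sum of squares no smaller than the OLS minimum.
assert rss(X, y, b_ridge) >= rss(X, y, b_ols)
```

The paper's contribution addresses exactly the second line: choosing the shrinkage so the RSS penalty stays small.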

  4. A reliable and accurate portable device for rapid quantitative estimation of iodine content in different types of edible salt

    Directory of Open Access Journals (Sweden)

    Kapil Yadav

    2015-01-01

    Full Text Available Background: Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI) program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of a new portable device, iCheck Iodine, developed by BioAnalyt GmbH, for estimating the iodine content in salt. Materials and Methods: Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD). The validity of the device was assessed using device-specific indicators, comparison of the iCheck Iodine device with iodometric titration, and comparison between iodine estimation using 1 g and 10 g of salt by iCheck Iodine, using 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. Results: The intra- and interassay imprecision for 10 parts per million (ppm), 30 ppm, and 50 ppm concentrations of iodized salt was 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Interoperator imprecision was 6.2%, 6.3%, and 4.6% for salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm, respectively. The correlation coefficient between measurements by the two methods was 0.934, and the correlation coefficient between measurements using 1 g and 10 g of iodized salt by the iCheck Iodine device was 0.983. Conclusions: The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate in the field setting and in different types of salt.
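Assay imprecision figures like the 2.8%/6.1%/3.1% above are conventionally reported as the coefficient of variation (CV% = 100 × SD / mean) across replicate measurements. A sketch with hypothetical replicate readings (invented numbers, not the study's raw data):

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation in percent: 100 * sample SD / mean,
    the usual way assay imprecision is reported."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample (n-1) standard deviation
    return 100.0 * sd / mean

# Hypothetical replicate iodine readings (ppm) for a nominal 30 ppm sample:
readings = [29.1, 30.4, 31.0, 29.8, 30.2]
print(round(cv_percent(readings), 2))  # → 2.35
```

A CV in the low single digits, as reported for the iCheck Iodine device, indicates that repeated measurements of the same sample cluster tightly around their mean.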

  5. Development of Deep Learning Based Data Fusion Approach for Accurate Rainfall Estimation Using Ground Radar and Satellite Precipitation Products

    Science.gov (United States)

    Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.

    2016-12-01

    Rainfall estimation based on onboard satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed based upon satellite observations. For example, the NOAA Climate Prediction Center has developed a morphing technique (i.e., CMORPH) to produce global precipitation products by combining existing space-based rainfall estimates. The CMORPH products are essentially derived from geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although the space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies, as well as improved situational awareness for operational forecasts, their accuracy is limited due to sampling limitations, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is a more mature science for quantitative precipitation estimation (QPE), especially after the implementation of the dual-polarization technique, further enhanced by urban-scale radar networks. Therefore, ground radars are often critical for providing local-scale rainfall estimation, a "heads-up" for operational forecasters to issue watches and warnings, and validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural network based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements. A deep learning system is

  6. Measurement of pelvic motion is a prerequisite for accurate estimation of hip joint work in maximum height squat jumping.

    Science.gov (United States)

    Blache, Yoann; Bobbert, Maarten; Argaud, Sebastien; Pairot de Fontenay, Benoit; Monteil, Karine M

    2013-08-01

    In experiments investigating vertical squat jumping, the HAT segment is typically defined as a line drawn from the hip to some point proximally on the upper body (e.g., the neck, the acromion), and the hip joint as the angle between this line and the upper legs (θUL-HAT). In reality, the hip joint is the angle between the pelvis and the upper legs (θUL-pelvis). This study aimed to estimate to what extent hip joint definition affects hip joint work in maximal squat jumping. Moreover, the initial pelvic tilt was manipulated to maximize the difference in hip joint work as a function of hip joint definition. Twenty-two male athletes performed maximum-effort squat jumps in three different initial pelvic tilt conditions: backward (pelvisB), neutral (pelvisN), and forward (pelvisF). Hip joint work was calculated by integrating the hip net joint torque with respect to θUL-HAT (WUL-HAT) or with respect to θUL-pelvis (WUL-pelvis). θUL-HAT was greater than θUL-pelvis in all conditions. WUL-HAT overestimated WUL-pelvis by 33%, 39%, and 49% in conditions pelvisF, pelvisN, and pelvisB, respectively. It was concluded that θUL-pelvis should be measured when the mechanical output of the hip extensor muscles is estimated.
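Joint work here is the integral of net joint torque over joint angle, so an angle definition that sweeps a larger range inflates the computed work even when the torque history is identical. A toy illustration with made-up numbers (trapezoidal integration; not the study's data):

```python
def joint_work(torque, angle):
    """Work at a joint: integral of net joint torque (N*m) with respect to
    joint angle (rad), approximated with the trapezoidal rule."""
    return sum(0.5 * (torque[i] + torque[i - 1]) * (angle[i] - angle[i - 1])
               for i in range(1, len(angle)))

# Hypothetical constant 100 N*m extension torque, sampled at three instants.
# The pelvis-based hip angle sweeps 1.0 rad; a trunk-line ("HAT") definition
# of the same movement sweeps 1.4 rad.
torque = [100.0, 100.0, 100.0]
angle_pelvis = [0.0, 0.5, 1.0]
angle_hat = [0.0, 0.7, 1.4]
print(round(joint_work(torque, angle_pelvis)))  # → 100 (J)
print(round(joint_work(torque, angle_hat)))     # → 140 (J), a 40% overestimate
```

This is the mechanism behind the 33-49% overestimates reported above: θUL-HAT changes more than θUL-pelvis during the push-off, so the same torque integrates to more apparent work.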

  7. How have ART treatment programmes changed the patterns of excess mortality in people living with HIV? Estimates from four countries in East and Southern Africa

    Directory of Open Access Journals (Sweden)

    Emma Slaymaker

    2014-04-01

    Full Text Available Background: Substantial falls in the mortality of people living with HIV (PLWH) have been observed since the introduction of antiretroviral therapy (ART) in sub-Saharan Africa. However, access and uptake of ART have been variable in many countries. We report the excess deaths observed in PLWH before and after the introduction of ART, using data from five longitudinal studies in Malawi, South Africa, Tanzania, and Uganda, members of the network for Analysing Longitudinal Population-based HIV/AIDS data on Africa (ALPHA). Methods: Individual data from five demographic surveillance sites that conduct HIV testing were used to estimate mortality attributable to HIV, calculated as the difference between the mortality rates in PLWH and HIV-negative people. Excess deaths in PLWH were standardized for age and sex differences and summarized over periods before and after ART became generally available. An exponential regression model was used to explore differences in the impact of ART across the sites. Results: 127,585 adults across the five sites contributed a total of 487,242 person-years. Before the introduction of ART, HIV-attributable mortality ranged from 45 to 88 deaths per 1,000 person-years. Following ART availability, this fell to 14–46 deaths per 1,000 person-years. Exponential regression modeling showed a reduction of more than 50% (HR = 0.43, 95% CI: 0.32–0.58) in mortality at ages 15–54 across all five sites, compared to the period before ART was available. Discussion: Excess mortality in adults living with HIV has reduced by over 50% in five communities in sub-Saharan Africa since the advent of ART. However, mortality rates in adults living with HIV are still 10 times higher than in HIV-negative people, indicating that substantial improvements can be made to reduce mortality further. This analysis shows differences in the impact across the sites, and contrasts with developed countries where mortality among PLWH on ART can be
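The excess (HIV-attributable) mortality used above is simply the difference between the death rate in PLWH and the death rate in HIV-negative people, per 1,000 person-years. A sketch with invented figures in the pre-ART range reported:

```python
def rate_per_1000(deaths, person_years):
    """Crude mortality rate per 1,000 person-years of observation."""
    return 1000.0 * deaths / person_years

def excess_mortality(deaths_plwh, py_plwh, deaths_neg, py_neg):
    """HIV-attributable mortality: rate in PLWH minus rate in HIV-negative
    people, per 1,000 person-years."""
    return (rate_per_1000(deaths_plwh, py_plwh)
            - rate_per_1000(deaths_neg, py_neg))

# Hypothetical pre-ART figures: 90 deaths over 1,000 person-years among PLWH
# versus 10 deaths over 2,000 person-years among HIV-negative adults.
print(excess_mortality(90, 1000, 10, 2000))  # → 85.0 deaths per 1,000 PY
```

In the study this difference is additionally standardized for age and sex before the pre- and post-ART periods are compared.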

  8. HIV Excess Cancers JNCI

    Science.gov (United States)

    In 2010, an estimated 7,760 new cancers were diagnosed among the nearly 900,000 Americans known to be living with HIV infection. According to the first comprehensive study in the United States, approximately half of these cancers were in excess of what would be expected

  9. How many measurements are needed to estimate accurate daily and annual soil respiration fluxes? Analysis using data from a temperate rainforest

    Science.gov (United States)

    Perez-Quezada, Jorge F.; Brito, Carla E.; Cabezas, Julián; Galleguillos, Mauricio; Fuentes, Juan P.; Bown, Horacio E.; Franck, Nicolás

    2016-12-01

    Making accurate estimations of daily and annual soil respiration (Rs) fluxes is key for understanding the carbon cycle and projecting the effects of climate change. In this study we used high-frequency sampling (24 measurements per day) of Rs in a temperate rainforest during 1 year, with the objective of answering the questions of when and how often measurements should be made to obtain accurate estimations of daily and annual Rs. We randomly selected data to simulate samplings of 1, 2, 4, or 6 measurements per day (distributed either throughout the whole day or only during daytime), combined with 4, 6, 12, 26, or 52 measurements per year. Based on the comparison of partial-data series with the full-data series, we estimated the performance of different partial sampling strategies in terms of bias, precision, and accuracy. For annual Rs estimation, we compared the performance of interpolation vs. non-linear modelling based on soil temperature. The results show that, under our study conditions, sampling twice a day was enough to accurately estimate daily Rs (RMSE < 10 % of the average daily flux), even if both measurements were made during daytime. The largest reduction in RMSE for the estimation of annual Rs was achieved when increasing from four to six measurements per year, but reductions remained relevant when further increasing the sampling frequency. We found that increasing the number of field campaigns was more effective than increasing the number of measurements per day, provided a minimum of two measurements per day was used. Including night-time measurements significantly reduced the bias and was relevant in reducing the number of field campaigns when a lower level of acceptable error (RMSE < 5 %) was established. Using non-linear modelling instead of linear interpolation did improve the estimation of annual Rs, but not as much as expected. In conclusion, given that most studies of Rs use manual sampling techniques and apply only one measurement per day, we
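The subsampling experiment can be mimicked in a few lines: generate a synthetic diurnal flux cycle sampled 24 times per day, estimate each daily mean from n randomly chosen measurements, and score the estimate by RMSE relative to the full-data daily mean. All numbers below are invented for illustration, not the paper's data:

```python
import math, random

random.seed(1)
# Synthetic diurnal soil-respiration cycle: 24 hourly "measurements" per day
# for 100 days (arbitrary flux units, sinusoidal cycle plus noise).
days = [[5.0 + 2.0 * math.sin(2 * math.pi * h / 24) + random.gauss(0, 0.3)
         for h in range(24)] for _ in range(100)]

def daily_mean_rmse(days, n_per_day):
    """RMSE of the daily mean estimated from n random measurements per day,
    relative to the full 24-sample daily mean, in % of the average flux."""
    errs, full_means = [], []
    for day in days:
        full = sum(day) / 24
        sub = random.sample(day, n_per_day)
        errs.append((sum(sub) / n_per_day - full) ** 2)
        full_means.append(full)
    rmse = math.sqrt(sum(errs) / len(errs))
    return 100.0 * rmse / (sum(full_means) / len(full_means))

# More samples per day -> lower error in the estimated daily mean:
assert daily_mean_rmse(days, 12) < daily_mean_rmse(days, 2)
```

The paper's analysis layers onto this the choice of daytime-only vs. whole-day sampling and the number of sampling days per year.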

  10. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    Science.gov (United States)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

    Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies occupy only a small fraction of a satellite pixel, their size and temperature must be retrieved with sub-pixel un-mixing techniques. In a laboratory experiment, infrared cameras (Infratec) recorded images of an artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary below which most remotely sensed volcanic hotspots fall. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi-component target pixels consisting of more than two thermally distinct features should be analyzed.
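The idea behind such un-mixing is that the radiance of a mixed pixel is the area-weighted sum of the Planck radiances of the hot feature and the background. If both temperatures are assumed known, a single band suffices to recover the hot pixel fraction; the Dual-Band method instead uses two bands to solve for fraction and hot temperature jointly. An illustrative single-band sketch (assumed temperatures and wavelength, not the experiment's calibration):

```python
import math

# Physical constants for Planck's law (SI units).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance of a blackbody (W m^-3 sr^-1) at one wavelength."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1)

def hot_fraction(pixel_rad, wl, t_hot, t_bg):
    """One-band un-mixing: pixel radiance is the area-weighted mix
    L = f*B(T_hot) + (1-f)*B(T_bg); solve for the hot fraction f."""
    b_hot, b_bg = planck(wl, t_hot), planck(wl, t_bg)
    return (pixel_rad - b_bg) / (b_hot - b_bg)

wl = 4.0e-6                # 4 µm, mid-wave infrared
t_hot, t_bg = 873.0, 293.0 # ~600 °C hotspot on a ~20 °C background
f_true = 0.03              # hotspot covers 3 % of the pixel
L = f_true * planck(wl, t_hot) + (1 - f_true) * planck(wl, t_bg)
print(round(hot_fraction(L, wl, t_hot, t_bg), 3))  # → 0.03
```

In practice the inversion is noisy: when the hot fraction drops below a few percent of the pixel, the hotspot's radiance contribution approaches the noise floor, which is the resolution boundary the experiment quantifies.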

  11. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    Science.gov (United States)

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplifying assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs), which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results.

  12. Groundwater recharge: Accurately representing evapotranspiration

    CSIR Research Space (South Africa)

    Bugan, Richard DH

    2011-09-01

    Full Text Available Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...

  13. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    Science.gov (United States)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  14. Embedded fiber-optic sensing for accurate internal monitoring of cell state in advanced battery management systems part 2: Internal cell signals and utility for state estimation

    Science.gov (United States)

    Ganguli, Anurag; Saha, Bhaskar; Raghavan, Ajay; Kiesel, Peter; Arakaki, Kyle; Schuh, Andreas; Schwartz, Julian; Hegyi, Alex; Sommer, Lars Wilko; Lochbaum, Alexander; Sahu, Saroj; Alamgir, Mohamed

    2017-02-01

    A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS) reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic (FO) sensors. High-performance large-format pouch cells with embedded FO sensors were fabricated. This second part of the paper focuses on the internal signals obtained from these FO sensors. The details of the method to isolate intercalation strain and temperature signals are discussed. Data collected under various xEV operational conditions are presented. An algorithm employing dynamic time warping and Kalman filtering was used to estimate state-of-charge with high accuracy from these internal FO signals. Their utility for high-accuracy, predictive state-of-health estimation is also explored.

  15. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki

    2017-07-28

    When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach based on the exponential twisting technique that offers fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.
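Exponential twisting tilts the sampling distribution toward the rare event and corrects for the change with a likelihood-ratio weight, so the rare region is hit on roughly half the draws instead of almost never. A generic sketch for a Gaussian tail probability (the technique only; not the paper's Beckmann/turbulence model):

```python
import math, random

random.seed(0)

def tail_prob_is(a, n=100_000):
    """Estimate P(X > a) for X ~ N(0,1) by exponential twisting:
    sample from the tilted density N(a, 1) and reweight each hit by the
    likelihood ratio phi(x)/phi_a(x) = exp(-a*x + a^2/2)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(a, 1.0)  # draw under the twisted measure
        if x > a:
            total += math.exp(-a * x + a * a / 2.0)
    return total / n

a = 5.0
est = tail_prob_is(a)
exact = 0.5 * math.erfc(a / math.sqrt(2))  # P(X > 5) ≈ 2.87e-7
assert abs(est - exact) / exact < 0.05     # within a few percent
```

A naive Monte Carlo estimate of a ~3e-7 probability would need on the order of 10^9 samples for comparable relative accuracy; the twisted estimator gets there with 10^5.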

  16. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)

    2007-10-15

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a new approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme has the capability of accurately extracting the concentration of glucose from a complex biological medium.

  17. FastMG: a simple, fast, and accurate maximum likelihood procedure to estimate amino acid replacement rate matrices from large data sets.

    Science.gov (United States)

    Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si

    2014-10-24

    Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time-consuming step in estimating rate matrices by maximum likelihood is building maximum likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm that splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality if an appropriate splitting procedure was used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
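The splitting step itself is simple: partition a large alignment into disjoint sub-alignments of bounded size, so that many small trees are built instead of one large one. Because tree inference scales super-linearly in the number of sequences, this is where the order-of-magnitude speedup comes from. A hypothetical sketch of such a partition (not FastMG's actual code):

```python
import random

def split_alignment(sequences, max_size, seed=0):
    """Split a large alignment (here just a list of sequence records) into
    non-overlapping sub-alignments of at most max_size sequences each,
    after shuffling -- a sketch in the spirit of FastMG's splitting step."""
    seqs = list(sequences)
    random.Random(seed).shuffle(seqs)  # avoid sub-alignments of related rows
    return [seqs[i:i + max_size] for i in range(0, len(seqs), max_size)]

alignment = [f"seq{i}" for i in range(1000)]
subs = split_alignment(alignment, max_size=100)
assert len(subs) == 10
assert sum(len(s) for s in subs) == 1000               # nothing lost
assert len({s for sub in subs for s in sub}) == 1000   # non-overlapping
```

Rate-matrix estimation then proceeds per sub-alignment, and the per-sub-alignment counts are pooled into a single replacement rate matrix.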

  18. Towards accurate dose accumulation for step-and-shoot IMRT. Impact of weighting schemes and temporal image resolution on the estimation of dosimetric motion effects

    Energy Technology Data Exchange (ETDEWEB)

    Werner, Rene; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz [Luebeck Univ. (Germany). Inst. of Medical Informatics; Albers, Dirk; Petersen, Cordula; Cremers, Florian [University Medical Center Hamburg-Eppendorf, Hamburg (Germany). Dept. of Radiotherapy and Radio-Oncology; Frenzel, Thorsten [University Medical Center Hamburg-Eppendorf, Hamburg (Germany). Health Care Center

    2012-07-01

    Purpose: Breathing-induced motion effects on dose distributions in radiotherapy can be analyzed using 4D CT image sequences and registration-based dose accumulation techniques. Often, simplifying assumptions are made during accumulation. In this paper, we study the dosimetric impact of two aspects which may be especially critical for IMRT treatment: the weighting scheme for the dose contributions of IMRT segments at different breathing phases, and the temporal resolution of the 4D CT images applied for dose accumulation. Methods: Based on a continuous problem formulation, a patient- and plan-specific scheme for weighting segment dose contributions at different breathing phases is derived for use in step-and-shoot IMRT dose accumulation. Using 4D CT data sets and treatment plans for 5 lung tumor patients, dosimetric motion effects as estimated by the derived scheme are compared to effects resulting from a common equal-weighting approach. Effects of reducing the temporal image resolution are evaluated for the same patients and both weighting schemes. Results: The equal-weighting approach underestimates dosimetric motion effects when considering single treatment fractions. Especially interplay effects (relative misplacement of segments due to respiratory tumor motion) for IMRT segments with only a few monitor units are insufficiently represented (local point differences > 25% of the prescribed dose for larger tumor motion). The effects, however, tend to be averaged out over the entire treatment course. Regarding temporal image resolution, estimated motion effects in terms of measures of the CTV dose coverage are barely affected (in comparison to the full resolution) when using only half of the original resolution and equal weighting. In contrast, the occurrence and impact of interplay effects are poorly captured for some cases (large tumor motion, undersized PTV margin) for a resolution of 10/14 phases and the more accurate patient- and plan-specific dose accumulation scheme

  19. Accurate estimation of seismic source parameters of induced seismicity by a combined approach of generalized inversion and genetic algorithm: Application to The Geysers geothermal area, California

    Science.gov (United States)

    Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.

    2017-05-01

    The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^(-γ) source model with γ > 2 to be well fit, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters that in one case suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
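Once moment and corner frequency are extracted from a source spectrum, stress drop follows from the classic circular-crack relations. The chain below uses the textbook Brune scaling with invented values typical of small induced events; it is a generic illustration, not the paper's inversion scheme:

```python
import math

def moment_from_mw(mw):
    """Seismic moment (N*m) from moment magnitude: M0 = 10^(1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def source_radius_brune(fc_hz, beta_ms=3500.0):
    """Brune-model source radius from corner frequency: r = 0.37 * beta / fc,
    with beta the shear-wave speed in m/s."""
    return 0.37 * beta_ms / fc_hz

def stress_drop(m0_nm, r_m):
    """Circular-crack stress drop in Pa: delta_sigma = (7/16) * M0 / r^3."""
    return 7.0 / 16.0 * m0_nm / r_m**3

# Hypothetical Mw 3.0 induced event with a 10 Hz corner frequency:
m0 = moment_from_mw(3.0)                   # ≈ 4.0e13 N*m
r = source_radius_brune(10.0)              # ≈ 130 m
print(round(stress_drop(m0, r) / 1e6, 2))  # → 8.02 (MPa)
```

Departures from self-similarity, as reported above, show up when stress drops computed this way vary systematically with earthquake size instead of staying constant.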

  20. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography: comparison with cine magnetic resonance imaging.

    Science.gov (United States)

    Belge, Bénédicte; Coche, Emmanuel; Pasquet, Agnès; Vanoverschelde, Jean-Louis J; Gerber, Bernhard L

    2006-07-01

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR.
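The ejection fraction compared in this study is derived from the two volumes as EF = (EDV − ESV) / EDV. Plugging in the mean MDCT volumes above gives 50% (the reported mean EF of 55% differs because the mean of per-patient ratios is not the ratio of the mean volumes):

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left ventricular ejection fraction in %: (EDV - ESV) / EDV * 100."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Mean MDCT volumes reported in the study: EDV = 134 ml, ESV = 67 ml.
print(ejection_fraction(134, 67))  # → 50.0
```

The same formula applied per patient to MDCT and MR volume pairs yields the paired EF values whose agreement (r = 0.95) the study reports.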

  1. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography. Comparison with cine magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Belge, Benedicte; Pasquet, Agnes; Vanoverschelde, Jean-Louis J. [Universite Catholique de Louvain, Division of Cardiology, Brussels (Belgium); Coche, Emmanuel [Universite Catholique de Louvain, Division of Radiology, Brussels (Belgium); Gerber, Bernhard L. [Universite Catholique de Louvain, Division of Cardiology, Brussels (Belgium); Cliniques Universitaires St. Luc UCL, Department of Cardiology, Woluwe St. Lambert (Belgium)

    2006-07-15

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR. (orig.)

  2. Multiple automated headspace in-tube extraction for the accurate analysis of relevant wine aroma compounds and for the estimation of their relative liquid-gas transfer rates.

    Science.gov (United States)

    Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente

    2012-11-30

    An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to simultaneously provide accurate measurements of 20 relevant aroma compounds in wine and estimates of their relative transfer rates to the headspace, and hence of the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample were introduced into a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of multiple extractions makes it possible to obtain a highly reliable estimate of the total amount of each volatile compound present in the sample, together with a second parameter, β, the proportion of volatile not transferred to the trap in one extraction cycle, which appears to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample revealed significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long-chain fatty acid ethyl esters. These differences, likely linked to sulphur dioxide and some unknown compositional aspects of the wine matrix, may be responsible for relevant sensory changes, and may even explain why the same aroma composition can produce different aroma perceptions in two different wines. Copyright © 2012 Elsevier B.V. All rights reserved.
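The bookkeeping behind MHE is a geometric series: if a fraction β of the analyte stays behind each cycle, successive peak areas follow A_n = A_1·β^(n-1), so the total amount is A_1/(1-β). A minimal sketch with hypothetical peak areas (not the paper's ITEX workflow or calibration):

```python
import math

def mhe_total(areas):
    """Estimate beta and the total analyte amount from successive
    multiple-headspace-extraction peak areas A_n = A_1 * beta**(n-1)."""
    # least-squares slope of ln(A_n) vs cycle index gives ln(beta)
    n = len(areas)
    xs = list(range(n))
    ys = [math.log(a) for a in areas]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    beta = math.exp(slope)
    total = areas[0] / (1.0 - beta)  # sum of the geometric series
    return beta, total

# hypothetical areas from four extraction cycles with beta = 0.6
beta, total = mhe_total([1000.0, 600.0, 360.0, 216.0])
```

A larger β means less of the compound transfers per cycle, i.e. stronger retention by the matrix, which is exactly the volatility indicator the abstract describes.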

  3. Devices used by automated milking systems are similarly accurate in estimating milk yield and in collecting a representative milk sample compared with devices used by farms with conventional milk recording

    NARCIS (Netherlands)

    Kamphuis, Claudia; Dela Rue, B.; Turner, S.A.; Petch, S.

    2015-01-01

    Information on accuracy of milk-sampling devices used on farms with automated milking systems (AMS) is essential for development of milk recording protocols. The hypotheses of this study were (1) devices used by AMS units are similarly accurate in estimating milk yield and in collecting

  4. Estimating the Economic Value of Ice Climbing in Hyalite Canyon: An Application of Travel Cost Count Data Models that Account for Excess Zeros*

    OpenAIRE

    Anderson, D. Mark

    2009-01-01

    Recently, the sport of ice climbing has seen a drastic increase in popularity. This paper uses the travel cost method to estimate the demand for ice climbing in Hyalite Canyon, Montana, one of the premier ice climbing venues in North America. Access to Hyalite and other ice climbing destinations have been put at risk due to liability issues, public land management agendas, and winter road conditions. To this point, there has been no analysis on the economic benefits of ice climbing. In additi...

  5. Impact of measurement error in radon exposure on the estimated excess relative risk of lung cancer death in a simulated study based on the French Uranium Miners' Cohort

    Energy Technology Data Exchange (ETDEWEB)

    Allodji, Rodrigue S.; Leuraud, Klervi; Laurier, Dominique [Institut de Radioprotection et de Surete Nucleaire (IRSN), DRPH, SRBE, Laboratoire d' Epidemiologie, Fontenay-aux-Roses (France); Thiebaut, Anne C.M. [INSERM, U657, Paris (France); Institut Pasteur, Unite Pharmaco-Epidemiologie et Maladies Infectieuses, Paris (France); Univ. Versailles Saint-Quentin, Garches (France); Henry, Stephane [Medical Council Areva Group, Pierrelatte (France); Benichou, Jacques [INSERM, U657, Rouen (France); Centre Hospitalier Universitaire (CHU) de Rouen, Unite de Biostatistique, Rouen (France); Univ. Rouen, Rouen (France)

    2012-05-15

    Measurement error (ME) can lead to bias in the analysis of epidemiologic studies. Here a simulation study is described that is based on data from the French Uranium Miners' Cohort and that was conducted to assess the effect of ME on the estimated excess relative risk (ERR) of lung cancer death associated with radon exposure. Starting from a scenario without any ME, data were generated containing successively Berkson or classical ME depending on time periods, to reflect changes in the measurement of exposure to radon (²²²Rn) and its decay products over time in this cohort. Results indicate that ME attenuated the level of association with radon exposure, with a negative bias percentage on the order of 60% on the ERR estimate. Sensitivity analyses showed the consequences of specific ME characteristics (type, size, structure, and distribution) on the ERR estimates. In the future, it appears important to correct for ME upon analyzing cohorts such as this one to decrease bias in estimates of the ERR of adverse events associated with exposure to ionizing radiation. (orig.)
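The attenuation that classical ME produces can be reproduced in a few lines. The sketch below uses a simple linear excess-risk model with made-up numbers, not the cohort's ERR model: adding independent additive error to a simulated exposure shrinks the regression slope toward zero by the classical factor var(X)/(var(X)+var(U)).

```python
import random

random.seed(42)

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    return (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))

# hypothetical linear excess-risk model: outcome = 1 + slope * exposure + noise
err_true = 0.5
exposure = [random.gauss(10.0, 3.0) for _ in range(5000)]
outcome = [1.0 + err_true * e + random.gauss(0.0, 0.5) for e in exposure]
slope_true = ols_slope(exposure, outcome)

# classical additive error on the measured exposure (same variance as X)
measured = [e + random.gauss(0.0, 3.0) for e in exposure]
slope_attenuated = ols_slope(measured, outcome)
# expected attenuation factor: var(X) / (var(X) + var(U)) = 9 / (9 + 9) = 0.5
```

With equal error and exposure variances the slope is halved, the same direction of bias (though a simpler mechanism) as the roughly 60% attenuation the cohort simulation reports.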

  6. An Accurate FOA and TOA Estimation Algorithm for Galileo Search and Rescue Signal

    Institute of Scientific and Technical Information of China (English)

    王堃; 吴嗣亮; 韩月涛

    2011-01-01

    According to the high precision demand of Frequency of Arrival (FOA) and Time of Arrival (TOA) estimation in the Galileo search and rescue (SAR) system, and considering that the message bit width is unknown in real received beacons, a new FOA and TOA estimation algorithm is proposed that combines multi-dimensional joint maximum likelihood estimation with a barycenter calculation algorithm. The principle of the algorithm is derived after the signal model is introduced, and the concrete realization of the estimation algorithm is given. Monte Carlo simulation results and measurement results show that at the processing threshold CNR of 34.8 dBHz, the root-mean-square errors of the FOA and TOA estimates are within 0.03 Hz and 9.5 μs, respectively, which is better than the system requirements of 0.05 Hz and 11 μs. This algorithm has been applied to the Galileo Medium-altitude Earth Orbit Local User Terminal (MEOLUT station).

  7. Excessive acquisition in hoarding.

    Science.gov (United States)

    Frost, Randy O; Tolin, David F; Steketee, Gail; Fitch, Kristin E; Selbo-Bruns, Alexandra

    2009-06-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an Internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms.

  8. Estimation of cardiovascular risk on routine chest CT: Ordinal coronary artery calcium scoring as an accurate predictor of Agatston score ranges.

    Science.gov (United States)

    Azour, Lea; Kadoch, Michael A; Ward, Thomas J; Eber, Corey D; Jacobi, Adam H

    Coronary artery calcium (CAC) is often identified on routine chest computed tomography (CT). The purpose of our study was to evaluate whether ordinal scoring of CAC on non-gated, routine chest CT is an accurate predictor of Agatston score ranges in a community-based population, and in particular to determine the accuracy of an ordinal score of zero on routine chest CT. Two thoracic radiologists reviewed consecutive same-day ECG-gated and routine non-gated chest CT scans of 222 individuals. CAC was quantified using the Agatston scoring on the ECG-gated scans, and using an ordinal method on routine scans, with a score from 0 to 12. The pattern and distribution of CAC was assessed. The correlation between routine exam ordinal scores and Agatston scores in ECG-gated exams, as well as the accuracy of assigning a zero calcium score on routine chest CT was determined. CAC was most prevalent in the left anterior descending coronary artery in both single and multi-vessel coronary artery disease. There was a strong correlation between the non-gated ordinal and ECG-gated Agatston scores (r = 0.811, p < 0.01). Excellent inter-reader agreement (k = 0.95) was shown for the presence (total ordinal score ≥1) or absence (total ordinal score = 0) of CAC on routine chest CT. The negative predictive value for a total ordinal score of zero on routine CT was 91.6% (95% CI, 85.1-95.9). Total ordinal scores of 0, 1-3, 4-5, and ≥6 corresponded to average Agatston scores of 0.52 (0.3-0.8), 98.7 (78.2-117.1), 350.6 (264.9-436.3) and 1925.4 (1526.9-2323.9). Visual assessment of CAC on non-gated routine chest CT accurately predicts Agatston score ranges, including the zero score, in ECG-gated CT. Inclusion of this information in radiology reports may be useful to convey important information on cardiovascular risk, particularly premature atherosclerosis in younger patients. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
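The reported ordinal-to-Agatston correspondence can be captured as a small lookup. The sketch below simply encodes the averages and intervals quoted in the abstract (treating the parenthesized values as the reported ranges):

```python
def agatston_from_ordinal(total_ordinal_score):
    """Map a routine-CT total ordinal CAC score (0-12) to the average
    Agatston score and quoted range reported in the study above."""
    if not 0 <= total_ordinal_score <= 12:
        raise ValueError("total ordinal score runs from 0 to 12")
    if total_ordinal_score == 0:
        return 0.52, (0.3, 0.8)
    if total_ordinal_score <= 3:
        return 98.7, (78.2, 117.1)
    if total_ordinal_score <= 5:
        return 350.6, (264.9, 436.3)
    return 1925.4, (1526.9, 2323.9)

avg, rng = agatston_from_ordinal(4)  # 350.6, (264.9, 436.3)
```

Such a mapping is what lets a visual score on a non-gated scan be reported in terms of familiar Agatston risk strata.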

  9. Is the predicted postoperative FEV1 estimated by planar lung perfusion scintigraphy accurate in patients undergoing pulmonary resection? Comparison of two processing methods.

    Science.gov (United States)

    Caglar, Meltem; Kara, Murat; Aksoy, Tamer; Kiratli, Pinar Ozgen; Karabulut, Erdem; Dogan, Riza

    2010-07-01

    Estimation of postoperative forced expiratory volume in 1 s (FEV1) with radionuclide lung scintigraphy is frequently used to define functional operability in patients undergoing lung resection. We conducted a study to outline the reliability of planar quantitative lung perfusion scintigraphy (QLPS) with two different processing methods to estimate postoperative lung function in patients with resectable lung disease. Forty-one patients with a mean age of 57 +/- 12 years who underwent either a pneumonectomy (n = 14) or a lobectomy (n = 27) were included in the study. QLPS with Tc-99m macroaggregated albumin was performed. Three equal zones were generated for each lung [zone method (ZM)], and more precise regions of interest were also drawn according to their anatomical shape in the anterior and posterior projections [lobe mapping method (LMM)] for each patient. The predicted postoperative (ppo) FEV1 values were compared with actual FEV1 values measured on postoperative day 1 (pod1 FEV1) and day 7 (pod7 FEV1). The mean preoperative FEV1 and ppoFEV1 values were 2.10 +/- 0.57 and 1.57 +/- 0.44 L, respectively. The mean pod1 FEV1 (1.04 +/- 0.30 L) was significantly lower than ppoFEV1, particularly in patients with obstructive lung disease and hilar tumors. No significant differences were observed between ppoFEV1 values estimated by ZM and by LMM (p > 0.05). PpoFEV1 values predicted by both the zone and lobe mapping methods overestimated the actual measured lung volumes in patients undergoing pulmonary resection in the early postoperative period. LMM is not superior to ZM.
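The abstract does not spell out the prediction formula; the standard perfusion-based calculation is ppoFEV1 = preoperative FEV1 × (1 − fraction of total perfusion contributed by the region to be resected). A minimal sketch, with a hypothetical 25% perfusion fraction:

```python
def ppo_fev1(preop_fev1_l, resected_perfusion_fraction):
    """Predicted postoperative FEV1 (litres) from quantitative perfusion:
    ppoFEV1 = preoperative FEV1 * (1 - perfusion fraction of the
    lung region to be resected)."""
    if not 0.0 <= resected_perfusion_fraction <= 1.0:
        raise ValueError("perfusion fraction must be between 0 and 1")
    return preop_fev1_l * (1.0 - resected_perfusion_fraction)

# e.g. preoperative FEV1 of 2.10 L with 25% of perfusion in the resected region
ppo = ppo_fev1(2.10, 0.25)  # 1.575 L
```

The zone and lobe mapping methods differ only in how that perfusion fraction is measured, which is why the study compares their ppoFEV1 outputs against the same postoperative spirometry.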

  10. How accurately are maximal metabolic equivalents estimated based on the treadmill workload in healthy people and asymptomatic subjects with cardiovascular risk factors?

    Science.gov (United States)

    Maeder, M T; Muenzer, T; Rickli, H; Brunner-La Rocca, H P; Myers, J; Ammann, P

    2008-08-01

    Maximal exercise capacity expressed as metabolic equivalents (METs) is rarely directly measured (measured METs; mMETs) but instead estimated from maximal workload (estimated METs; eMETs). We assessed the accuracy of predicting mMETs by eMETs in asymptomatic subjects. Thirty-four healthy volunteers without cardiovascular risk factors (controls) and 90 patients with at least one risk factor underwent cardiopulmonary exercise testing using individualized treadmill ramp protocols. The equation of the American College of Sports Medicine (ACSM) was employed to calculate eMETs. Despite a close correlation between eMETs and mMETs (patients: r = 0.82, controls: r = 0.88; p < 0.001 for both), eMETs were higher than mMETs in both patients [11.7 (8.9 - 13.4) vs. 8.2 (7.0 - 10.6) METs; p < 0.001] and controls [17.0 (16.2 - 18.2) vs. 15.6 (14.2 - 17.0) METs; p < 0.001]. The absolute [2.5 (1.6 - 3.7) vs. 1.3 (0.9 - 2.1) METs; p < 0.001] and relative [28 (19 - 47) vs. 9 (6 - 14) %; p < 0.001] differences between eMETs and mMETs were higher in patients. In patients, ratio limits of agreement of 1.33 (×/÷ 1.40) between eMETs and mMETs were obtained, whereas the ratio limits of agreement were 1.09 (×/÷ 1.13) in controls. The ACSM equation is associated with a significant overestimation of mMETs even in young and fit subjects, and the overestimation is markedly more pronounced in older and less fit subjects with cardiovascular risk factors.
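The ACSM metabolic equations referred to above estimate VO2 from treadmill speed and grade; METs then follow by dividing by the resting value of 3.5 ml·kg⁻¹·min⁻¹. A sketch of the walking and running forms (speed in m/min, grade as a fraction); the specific speed and grade in the example are hypothetical:

```python
def acsm_mets_walking(speed_m_per_min, fractional_grade):
    """ACSM walking equation:
    VO2 (ml/kg/min) = 0.1*speed + 1.8*speed*grade + 3.5; METs = VO2/3.5."""
    vo2 = 0.1 * speed_m_per_min + 1.8 * speed_m_per_min * fractional_grade + 3.5
    return vo2 / 3.5

def acsm_mets_running(speed_m_per_min, fractional_grade):
    """ACSM running equation:
    VO2 (ml/kg/min) = 0.2*speed + 0.9*speed*grade + 3.5; METs = VO2/3.5."""
    vo2 = 0.2 * speed_m_per_min + 0.9 * speed_m_per_min * fractional_grade + 3.5
    return vo2 / 3.5

# walking at 80 m/min (4.8 km/h) up a 5% grade
mets = acsm_mets_walking(80.0, 0.05)  # VO2 = 8 + 7.2 + 3.5 = 18.7 -> ~5.3 METs
```

Because these equations assume steady-state economy, applying them at the maximal workload of a ramp protocol tends to overestimate measured METs, which is the bias the study quantifies.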

  11. Symptom profiles of subsyndromal depression in disease clusters of diabetes, excess weight, and progressive cerebrovascular conditions: a promising new type of finding from a reliable innovation to estimate exhaustively specified multiple indicators–multiple causes (MIMIC) models

    Science.gov (United States)

    Francoeur, Richard B

    2016-01-01

    Addressing subsyndromal depression in cerebrovascular conditions, diabetes, and obesity reduces morbidity and risk of major depression. However, depression may be masked because self-reported symptoms may not reveal dysphoric (sad) mood. In this study, the first wave (2,812 elders) from the New Haven Epidemiological Study of the Elderly (EPESE) was used. These population-weighted data combined a stratified, systematic, clustered random sample from independent residences and a census of senior housing. Physical conditions included progressive cerebrovascular disease (CVD; hypertension, silent CVD, stroke, and vascular cognitive impairment [VCI]) and co-occurring excess weight and/or diabetes. These conditions and interactions (clusters) simultaneously predicted 20 depression items and a latent trait of depression in participants with subsyndromal (including subthreshold) depression (11≤ Center for Epidemiologic Studies Depression Scale [CES-D] score ≤27). The option for maximum likelihood estimation with standard errors that are robust to non-normality and non-independence in complex random samples (MLR) in Mplus and an innovation created by the author were used for estimating unbiased effects from latent trait models with exhaustive specification. Symptom profiles reveal masked depression in 1) older males, related to the metabolic syndrome (hypertension–overweight–diabetes; silent CVD–overweight; and silent CVD–diabetes) and 2) older females or the full sample, related to several diabetes and/or overweight clusters that involve stroke or VCI. Several other disease clusters are equivocal regarding masked depression; a couple do emphasize dysphoric mood. Replicating findings could identify subgroups for cost-effective screening of subsyndromal depression. PMID:28003768

  12. Symptom profiles of subsyndromal depression in disease clusters of diabetes, excess weight, and progressive cerebrovascular conditions: a promising new type of finding from a reliable innovation to estimate exhaustively specified multiple indicators–multiple causes (MIMIC) models

    Directory of Open Access Journals (Sweden)

    Francoeur RB

    2016-12-01

    Full Text Available Richard B Francoeur School of Social Work, Adelphi University, Garden City, NY, USA Abstract: Addressing subsyndromal depression in cerebrovascular conditions, diabetes, and obesity reduces morbidity and risk of major depression. However, depression may be masked because self-reported symptoms may not reveal dysphoric (sad) mood. In this study, the first wave (2,812 elders) from the New Haven Epidemiological Study of the Elderly (EPESE) was used. These population-weighted data combined a stratified, systematic, clustered random sample from independent residences and a census of senior housing. Physical conditions included progressive cerebrovascular disease (CVD; hypertension, silent CVD, stroke, and vascular cognitive impairment [VCI]) and co-occurring excess weight and/or diabetes. These conditions and interactions (clusters) simultaneously predicted 20 depression items and a latent trait of depression in participants with subsyndromal (including subthreshold) depression (11≤ Center for Epidemiologic Studies Depression Scale [CES-D] score ≤27). The option for maximum likelihood estimation with standard errors that are robust to non-normality and non-independence in complex random samples (MLR) in Mplus and an innovation created by the author were used for estimating unbiased effects from latent trait models with exhaustive specification. Symptom profiles reveal masked depression in 1) older males, related to the metabolic syndrome (hypertension–overweight–diabetes; silent CVD–overweight; and silent CVD–diabetes) and 2) older females or the full sample, related to several diabetes and/or overweight clusters that involve stroke or VCI. Several other disease clusters are equivocal regarding masked depression; a couple do emphasize dysphoric mood. Replicating findings could identify subgroups for cost-effective screening of subsyndromal depression. Keywords: depression, diabetes, overweight, cerebrovascular disease, hypertension, metabolic

  13. Nonaccommodative convergence excess.

    Science.gov (United States)

    von Noorden, G K; Avilla, C W

    1986-01-15

    Nonaccommodative convergence excess is a condition in which a patient has orthotropia or a small-angle esophoria or esotropia at distance and a large-angle esotropia at near, not significantly reduced by the addition of spherical plus lenses. The AC/A ratio, determined with the gradient method, is normal or subnormal. Tonic convergence is suspected of causing the convergence excess in these patients. Nonaccommodative convergence excess must be distinguished from esotropia with a high AC/A ratio and from hypoaccommodative esotropia. In 24 patients treated with recession of both medial recti muscles with and without posterior fixation or by posterior fixation alone, the mean correction of esotropia was 7.4 prism diopters at distance and 17 prism diopters at near.

  14. Accurate interval estimation of geometric distribution parameter based on survival beta distribution

    Institute of Scientific and Technical Information of China (English)

    徐玉茹; 徐付霞

    2016-01-01

    This paper proves that the sufficient statistic of the geometric distribution parameter follows a negative binomial distribution. The exact confidence interval of the parameter is constructed by converting the negative binomial distribution into the survival beta distribution, and the best split of the confidence level between the two tails is selected to obtain the shortest exact confidence interval. The approximate interval estimate for the geometric distribution under large samples is also discussed. Numerical simulation intuitively demonstrates how the accuracy of the interval estimation changes, illustrating the superiority of the shortest exact confidence interval.
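The negative binomial connection above implies a standard exact, equal-tailed interval for the geometric parameter p. The sketch below inverts the negative binomial CDF by bisection; it illustrates the exact-interval idea rather than the paper's survival-beta shortest-interval construction, and the sample size (n = 20) and trial total (100) are hypothetical.

```python
import math

def nbinom_cdf(k, n, p):
    """P(K <= k) for the negative binomial count K of failures
    before the n-th success, with success probability p."""
    return sum(math.comb(n + j - 1, j) * (p ** n) * ((1 - p) ** j)
               for j in range(k + 1))

def geometric_exact_ci(n, total_trials, conf=0.95):
    """Equal-tailed exact CI for the geometric parameter p, given n
    observations whose sum (total trials) is the sufficient statistic."""
    alpha = 1.0 - conf
    k = total_trials - n  # total failures accompanying the n successes

    def solve(target, k_cdf):
        # bisection on p: nbinom_cdf(k_cdf, n, p) increases with p
        lo_p, hi_p = 1e-12, 1.0 - 1e-12
        for _ in range(100):
            mid = 0.5 * (lo_p + hi_p)
            if nbinom_cdf(k_cdf, n, mid) < target:
                lo_p = mid
            else:
                hi_p = mid
        return 0.5 * (lo_p + hi_p)

    lower = solve(alpha / 2, k)                              # P(K <= k) = alpha/2
    upper = 1.0 if k == 0 else solve(1 - alpha / 2, k - 1)   # P(K <= k-1) = 1 - alpha/2
    return lower, upper

lo, hi = geometric_exact_ci(n=20, total_trials=100)  # MLE p-hat = 20/100 = 0.2
```

The equal-tailed split (α/2 in each tail) is exactly the "confidence combination" the paper optimizes over: reallocating the tail probabilities shortens the interval.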

  15. Excessive crying in infants

    Directory of Open Access Journals (Sweden)

    Ricardo Halpern

    2016-06-01

    Full Text Available ABSTRACT Objective: Review the literature on excessive crying in young infants, also known as infantile colic, and its effects on family dynamics, its pathophysiology, and new treatment interventions. Data source: The literature review was carried out in the Medline, PsycINFO, LILACS, SciELO, and Cochrane Library databases, using the terms “excessive crying” and “infantile colic,” as well as technical books and technical reports on child development, selecting the most relevant articles on the subject, with emphasis on recent literature published in the last five years. Summary of the findings: Excessive crying is a common symptom in the first 3 months of life and accounts for approximately 20% of pediatric consultations. Different prevalence rates of excessive crying have been reported, ranging from 14% to approximately 30% in infants up to 3 months of age. There is evidence linking excessive crying early in life with adaptive problems in the preschool period, as well as with early weaning, maternal anxiety and depression, attention deficit hyperactivity disorder, and other behavioral problems. Several pathophysiological mechanisms can explain these symptoms, such as circadian rhythm alterations, central nervous system immaturity, and alterations in the intestinal microbiota. Several treatment alternatives have been described, including behavioral measures, manipulation techniques, use of medication, and acupuncture, with controversial results and effectiveness. Conclusion: Excessive crying in the early months is a prevalent symptom; the pediatrician's attention is necessary to understand and adequately manage the problem and offer support to exhausted parents. The prescription of drugs of questionable action and with potential side effects is not a recommended treatment, except in extreme situations. The effectiveness of dietary treatments and use of probiotics still require confirmation. There is incomplete evidence regarding alternative

  16. Excess wind power

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2005-01-01

    Expansion of wind power is an important element in Danish climate change abatement policy. Starting from a high penetration of approx. 20%, however, momentary excess production will become an important issue in the future. Through energy systems analyses using the EnergyPLAN model and economic analyses, it is analysed how excess production is better utilised: through conversion into hydrogen, or through expansion of export connections, thereby enabling sales. The results demonstrate that hydrogen production in particular is unviable under current costs, but transmission expansion could be profitable, particularly if transmission and dispatch companies operate under a feed-in tariff system.

  17. An accurate estimation algorithm for PRI based on remainder of the cycle

    Institute of Scientific and Technical Information of China (English)

    苏焕程; 张君; 程良平; 程亦涵; 冷魁

    2016-01-01

    To address the shortcomings of traditional pulse repetition interval (PRI) deinterleaving algorithms in estimating PRI, a new algorithm for precisely estimating the period of PRI-periodic signals is put forward. The algorithm first extracts the pulse sample sequence belonging to one radar from the raw pulse sequence to be sorted, and then performs a precise estimation of that radar's PRI based on the remainder periodicity of congruence equations. Compared with traditional PRI estimation algorithms, this algorithm effectively removes the influence of TOA quantization error and can accurately estimate the true PRI of the radar pulse sequence, thereby better satisfying the requirements of signal deinterleaving. Both theoretical derivation and simulation results verify the validity of the proposed algorithm.
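The estimation problem can be made concrete with a generic grid-search PRI estimator; this is a simpler alternative to the congruence/remainder method described above, shown only to illustrate the task, with a hypothetical pulse train and search range:

```python
def estimate_pri(toas, pri_min, pri_max, n_candidates=2001):
    """Estimate the PRI of a pulse train whose TOAs (seconds) may have
    missing pulses: score each candidate PRI by the squared residuals of
    the TOA differences to their nearest integer multiples of the candidate."""
    t0 = toas[0]
    diffs = [t - t0 for t in toas[1:]]
    best_pri, best_cost = None, None
    for i in range(n_candidates):
        pri = pri_min + (pri_max - pri_min) * i / (n_candidates - 1)
        cost = sum((d - round(d / pri) * pri) ** 2 for d in diffs)
        if best_cost is None or cost < best_cost:
            best_pri, best_cost = pri, cost
    return best_pri

# hypothetical train: PRI = 1.0 ms with the 3rd, 6th and 8th pulses missing
true_pri = 1.0e-3
toas = [k * true_pri for k in (0, 1, 2, 4, 5, 7, 9)]
pri_hat = estimate_pri(toas, 0.9e-3, 1.1e-3)
```

A quantized TOA adds a bounded error to every difference; averaging over many pulse intervals, as both this sketch and the congruence method do, is what drives that error down.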

  18. The otherness of sexuality: excess.

    Science.gov (United States)

    Stein, Ruth

    2008-03-01

    The present essay, the second of a series of three, aims at developing an experience-near account of sexuality by rehabilitating the idea of excess and its place in sexual experience. It is suggested that various types of excess, such as excess of excitation (Freud), the excess of the other (Laplanche), excess beyond symbolization and the excess of the forbidden object of desire (Leviticus; Lacan) work synergistically to constitute the compelling power of sexuality. In addition to these notions, further notions of excess touch on its transformative potential. Such notions address excess that shatters psychic structures and that is actively sought so as to enable new ones to evolve (Bersani). Work is quoted that regards excess as a way of dealing with our lonely, discontinuous being by using the "excessive" cosmic energy circulating through us to achieve continuity against death (Bataille). Two contemporary analytic thinkers are engaged who deal with the object-relational and intersubjective vicissitudes of excess.

  19. Excess flow shutoff valve

    Energy Technology Data Exchange (ETDEWEB)

    Kiffer, Micah S.; Tentarelli, Stephen Clyde

    2016-02-09

    Excess flow shutoff valve comprising a valve body, a valve plug, a partition, and an activation component where the valve plug, the partition, and activation component are disposed within the valve body. A suitable flow restriction is provided to create a pressure difference between the upstream end of the valve plug and the downstream end of the valve plug when fluid flows through the valve body. The pressure difference exceeds a target pressure difference needed to activate the activation component when fluid flow through the valve body is higher than a desired rate, and thereby closes the valve.

  20. CASPER: Embedding Power Estimation and Hardware-Controlled Power Management in a Cycle-Accurate Micro-Architecture Simulation Platform for Many-Core Multi-Threading Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Arun Ravindran

    2012-02-01

    Full Text Available Despite the promising performance improvement observed in emerging many-core architectures in high performance processors, high power consumption prohibitively affects their use and marketability in the low-energy sectors, such as embedded processors, network processors and application specific instruction processors (ASIPs. While most chip architects design power-efficient processors by finding an optimal power-performance balance in their design, some use sophisticated on-chip autonomous power management units, which dynamically reduce the voltage or frequencies of idle cores and hence extend battery life and reduce operating costs. For large scale designs of many-core processors, a holistic approach integrating both these techniques at different levels of abstraction can potentially achieve maximal power savings. In this paper we present CASPER, a robust instruction trace driven cycle-accurate many-core multi-threading micro-architecture simulation platform where we have incorporated power estimation models of a wide variety of tunable many-core micro-architectural design parameters, thus enabling processor architects to explore a sufficiently large design space and achieve power-efficient designs. Additionally CASPER is designed to accommodate cycle-accurate models of hardware controlled power management units, enabling architects to experiment with and evaluate different autonomous power-saving mechanisms to study the run-time power-performance trade-offs in embedded many-core processors. We have implemented two such techniques in CASPER–Chipwide Dynamic Voltage and Frequency Scaling, and Performance Aware Core-Specific Frequency Scaling, which show average power savings of 35.9% and 26.2% on a baseline 4-core SPARC based architecture respectively. This power saving data accounts for the power consumption of the power management units themselves. The CASPER simulation platform also provides users with complete support of SPARCV9
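The chip-wide DVFS technique evaluated in CASPER rests on the standard dynamic-power relation P_dyn = C_eff·V²·f, so lowering voltage and frequency together yields superlinear savings. A toy calculation (the capacitance, voltage, and frequency values are made up and are not CASPER parameters):

```python
def dynamic_power(c_eff, volts, freq_hz):
    """Dynamic CMOS switching power: P = C_eff * V^2 * f (watts)."""
    return c_eff * volts * volts * freq_hz

# hypothetical idle core scaled from (1.0 V, 2.0 GHz) down to (0.8 V, 1.2 GHz)
p_full = dynamic_power(1e-9, 1.0, 2.0e9)    # 2.0 W
p_scaled = dynamic_power(1e-9, 0.8, 1.2e9)
saving = 1.0 - p_scaled / p_full            # a 40% frequency cut saves ~62%
```

The quadratic voltage term is why voltage-and-frequency scaling beats frequency scaling alone, and why the simulator must model both knobs to report realistic savings.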

  1. Topiramate Induced Excessive Sialorrhea

    Directory of Open Access Journals (Sweden)

    Ersel Dag

    2015-11-01

    Full Text Available It is well-known that drugs such as clozapine and lithium can cause sialorrhea. On the other hand, topiramate has not been reported to induce sialorrhea. We report the case of a 26-year-old patient who was given antiepileptic and antipsychotic drugs for severe mental retardation and intractable epilepsy and who developed excessive sialorrhea after the addition of topiramate for seizure control. His complaints continued for 1.5 years and resolved after topiramate was withdrawn. We present this case because topiramate-induced sialorrhea is rare. Clinicians should be aware of the possibility of sialorrhea, which causes serious hygiene and social problems, when prescribing topiramate to patients using multiple drugs.

  2. Abundance, Excess, Waste

    Directory of Open Access Journals (Sweden)

    Rox De Luca

    2016-02-01

    Her recent work focuses on the concepts of abundance, excess and waste. These concerns translate directly into vibrant and colourful garlands that she constructs from discarded plastics collected on Bondi Beach where she lives. The process of collecting is fastidious, as is the process of sorting and grading the plastics by colour and size. This initial gathering and sorting process is followed by threading the components onto strings of wire. When completed, these assemblages stand in stark contrast to the ease of disposability associated with the materials that arrive on the shoreline as evidence of our collective human neglect and destruction of the environment around us. The contrast is heightened by the fact that the constructed garlands embody the paradoxical beauty of our plastic waste byproducts, while also evoking the ways by which those byproducts similarly accumulate in randomly assorted patterns across the oceans and beaches of the planet.

  3. On the excess energy of nonequilibrium plasma

    Energy Technology Data Exchange (ETDEWEB)

    Timofeev, A. V. [National Research Centre Kurchatov Institute, Institute of Hydrogen Power Engineering and Plasma Technologies (Russian Federation)

    2012-01-15

    The energy that can be released in plasma due to the onset of instability (the excess plasma energy) is estimated. Three potentially unstable plasma states are considered, namely, plasma with an anisotropic Maxwellian velocity distribution of plasma particles, plasma with a two-beam velocity distribution, and an inhomogeneous plasma in a magnetic field with a local Maxwellian velocity distribution. The excess energy can serve as a measure of the degree to which plasma is nonequilibrium. In particular, this quantity can be used to compare plasmas in different nonequilibrium states.

  4. The High Price of Excessive Alcohol Consumption

    Centers for Disease Control (CDC) Podcasts

    2011-10-17

    This podcast is based on the October 2011 release of a report estimating the economic cost of excessive drinking. Excessive alcohol consumption cost the U.S. $223.5 billion in 2006, or about $1.90 per drink. Over three-quarters (76%) of these costs were due to binge drinking, defined as consuming 4 or more alcoholic beverages per occasion for women or 5 or more drinks per occasion for men. Created: 10/17/2011 by National Center for Chronic Disease Prevention and Health Promotion. Date Released: 10/17/2011.

  5. Accurate estimation of TOA and calibration of synchronization error for multilateration

    Institute of Scientific and Technical Information of China (English)

    王洪; 金尔文; 刘昌忠; 吴宏刚

    2013-01-01

    A mathematical model of mode S signals is built. The accuracy of time-of-arrival (TOA) measurement from the pulse rising edge and the best statistical estimation of TOA are discussed, and a way to realize the best estimate is introduced. A novel method is then proposed in which the TOA of mode S signals is measured after the signals are decoded; its theoretical accuracy, derived from pulse integration, is significantly better than that of single-pulse measurement. The influence of sampling on TOA measurement is analyzed and a corresponding solution is introduced. Finally, synchronization in a multilateration system is discussed, and the accurate TOA estimates are used to calibrate synchronization errors among the receivers.
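The rising-edge TOA measurement this record discusses can be sketched as a threshold crossing with sub-sample linear interpolation. The half-maximum threshold, sampling rate, and waveform below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def toa_rising_edge(samples, fs, threshold=0.5):
    """Leading-edge TOA: first crossing of threshold*max, linearly interpolated."""
    s = np.asarray(samples, dtype=float)
    level = threshold * s.max()
    i = int(np.argmax(s >= level))     # index of first sample at/above threshold
    if i == 0:
        return 0.0
    frac = (level - s[i - 1]) / (s[i] - s[i - 1])  # sub-sample position of crossing
    return (i - 1 + frac) / fs

# A clean step sampled at 10 MHz whose half-max crossing sits exactly at sample 4
pulse = [0.0, 0.0, 0.0, 0.0, 2.0, 4.0, 4.0, 4.0]
toa = toa_rising_edge(pulse, fs=10e6)   # 4 samples / 10 MHz = 4e-7 s
```

Averaging such estimates over the many pulses of a decoded mode S reply is what gives the pulse-integration accuracy gain the abstract describes.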

  6. Changing guards: time to move beyond body mass index for population monitoring of excess adiposity.

    Science.gov (United States)

    Tanamas, S K; Lean, M E J; Combet, E; Vlassopoulos, A; Zimmet, P Z; Peeters, A

    2016-07-01

    With the obesity epidemic and the effects of aging populations, human phenotypes have changed over two generations, possibly more dramatically than in other species previously. As obesity is an important and growing hazard for population health, we recommend a systematic evaluation of the optimal measure(s) for population-level excess body fat. Ideal measure(s) for monitoring body composition and obesity should be simple, as accurate and sensitive as possible, and provide good categorization of related health risks. Combinations of anthropometric markers or predictive equations may facilitate better use of anthropometric data than single measures to estimate body composition for populations. Here, we provide new evidence that increasing proportions of aging populations are at high health risk according to waist circumference, but not body mass index (BMI), so continued use of BMI as the principal population-level measure substantially underestimates the health burden from excess adiposity.

  7. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  8. Liver iron and serum ferritin levels are misleading for estimating cardiac, pancreatic, splenic and total body iron load in thalassemia patients: factors influencing the heterogenic distribution of excess storage iron in organs as identified by MRI T2*.

    Science.gov (United States)

    Kolnagou, Annita; Natsiopoulos, Konstantinos; Kleanthous, Marios; Ioannou, Alexia; Kontoghiorghes, George J

    2013-01-01

    A comparative assessment of excess storage iron distribution in the liver, heart, spleen and pancreas of β-thalassemia major (β-TM) patients has been carried out using magnetic resonance imaging (MRI) relaxation times T2*. The β-TM patients (8-40 years, 11 males, 9 females) had variable serum ferritin levels (394-5603 μg/L) and were treated with deferoxamine (n = 10), deferiprone (n = 5) and a deferoxamine/deferiprone combination (n = 5). MRI T2* assessment revealed that excess iron is not proportionally distributed among the organs but is stored at different concentrations in each organ, and the distribution is different for each β-TM patient. There is random variation in the distribution of excess storage iron, from normal to severe levels in each organ, among the β-TM patients by comparison with the same organs of ten normal volunteers. Serum ferritin correlated with T2* for spleen (r = -0.81), liver (r = -0.63) and pancreas (r = -0.33), but not for heart. A similar trend was observed in the correlation of liver T2* with the T2* of spleen (r = 0.62) and pancreas (r = 0.61), but not heart. These studies contradict previous assumptions that serum ferritin and liver iron concentration are proportional to the total body iron stores in β-TM, and especially to cardiac iron load. The random variation in the concentration of iron in the organs of β-TM patients appears to be related to the chelation protocol, organ function, genetic, dietary, pharmacological and other factors. Monitoring of the iron load in all the organs is recommended for each β-TM patient.

  9. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna, for which the radiated power density is known in a finite number of points on the far-field sphere, is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
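The quantity the record computes is D = 4π·U_max / P_rad, with P_rad obtained by integrating sampled power density over the far-field sphere. A brute-force quadrature sketch of that relation follows (not the spherical-wave-expansion method the record derives); the grid resolution and the Hertzian-dipole test pattern are illustrative assumptions:

```python
import numpy as np

def directivity_from_samples(U, theta):
    """D = 4*pi*U_max / P_rad from power-density samples U on a regular
    (n_theta, n_phi) far-field grid; phi is assumed uniformly spaced."""
    n_phi = U.shape[1]
    dphi = 2.0 * np.pi / n_phi
    # P_rad = integral of U(theta, phi) * sin(theta) over the sphere
    P_rad = np.trapz(U * np.sin(theta)[:, None], theta, axis=0).sum() * dphi
    return 4.0 * np.pi * U.max() / P_rad

# Hertzian-dipole pattern U proportional to sin^2(theta): exact directivity is 1.5
theta = np.linspace(0.0, np.pi, 181)
U = np.tile(np.sin(theta) ** 2, (360, 1)).T
D = directivity_from_samples(U, theta)
```

The spherical-wave-expansion approach in the record reaches the same integral with far fewer samples, which is its point.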

  10. Excess Molar Volumes and Viscosities of Binary Mixture of Diethyl Carbonate+Ethanol at Different Temperatures

    Institute of Scientific and Technical Information of China (English)

    MA Peisheng; LI Nannan

    2005-01-01

    The purpose of this work was to report excess molar volumes and dynamic viscosities of the binary mixture of diethyl carbonate (DEC)+ethanol. Densities and viscosities of the binary mixture of DEC+ethanol at temperatures 293.15 K-343.15 K and atmospheric pressure were determined over the entire composition range. Densities of the binary mixture of DEC+ethanol were measured by using a vibrating U-shaped sample tube densimeter. Viscosities were determined by using an Ubbelohde suspended-level viscometer. Densities are accurate to 1.0×10-5 g·cm-3, and viscosities are reproducible within ±0.003 mPa·s. From these data, excess molar volumes and deviations in viscosity were calculated. Positive excess molar volumes and negative deviations in viscosity for the DEC+ethanol system are due to strong specific interactions. All excess molar volumes and deviations in viscosity were fitted to the Redlich-Kister polynomial equation. The fitting parameters are presented, and the average deviations and standard deviations were also calculated. The correlation errors are very small, which shows that the correlated equations are valuable for estimating densities and viscosities of the binary mixture.
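The Redlich-Kister fit used in this record expands the excess property as V^E = x(1-x)·Σ_k A_k(2x-1)^k and solves for the coefficients A_k by least squares. A minimal sketch, with synthetic data points and coefficients rather than the DEC+ethanol measurements:

```python
import numpy as np

def redlich_kister(x, A):
    """Evaluate VE = x*(1-x) * sum_k A_k * (2x-1)^k."""
    return x * (1 - x) * sum(a * (2 * x - 1) ** k for k, a in enumerate(A))

def fit_redlich_kister(x, VE, order=3):
    """Linear least-squares fit of the Redlich-Kister coefficients A_0..A_{order-1}."""
    basis = np.column_stack([x * (1 - x) * (2 * x - 1) ** k for k in range(order)])
    A, *_ = np.linalg.lstsq(basis, VE, rcond=None)
    return A

x = np.linspace(0.05, 0.95, 10)            # mole fractions (synthetic)
true_A = np.array([0.8, -0.2, 0.1])        # assumed coefficients, cm^3/mol
VE = redlich_kister(x, true_A)             # noiseless synthetic excess volumes
A_fit = fit_redlich_kister(x, VE)
```

Because the model is linear in the coefficients, the fit reduces to one `lstsq` call; average and standard deviations of the residuals (as reported in the record) follow directly from `VE - redlich_kister(x, A_fit)`.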

  11. Excess attenuation of an acoustic beam by turbulence.

    Science.gov (United States)

    Pan, Naixian

    2003-12-01

    A theory based on the concept of a spatial sinusoidal diffraction grating is presented for estimating the excess attenuation of an acoustic beam. The equation for the excess attenuation coefficient shows that the excess attenuation of an acoustic beam depends not only on the turbulence but also on application parameters such as the beam width, the beam orientation, and whether propagation is forward or backscatter. Analysis shows that the excess attenuation appears to have a cube-root frequency dependence. The expression for the excess attenuation coefficient has been used in estimates of the temperature structure coefficient, C_T^2, in sodar sounding. Correcting C_T^2 values for excess attenuation greatly reduces their errors. Published profiles of the temperature structure coefficient and the velocity structure coefficient in convective conditions are used to test our theory, which is compared with the theory by Brown and Clifford. The excess attenuation due to scattering from turbulence and atmospheric absorption are both taken into account in sodar data processing to deduce the contribution of the lower atmosphere to seeing, which is the sharpness of a telescope image as determined by the degree of turbulence in the Earth's atmosphere. The comparison between the contribution of the lowest 300-m layer to seeing and that of the whole atmosphere supports the reasonableness of our estimate of excess attenuation.

  12. An accurate and fast pose estimation algorithm for the on-board camera of a mobile robot

    Institute of Scientific and Technical Information of China (English)

    唐庆顺; 吴春富; 李国栋; 王小龙; 周风余

    2015-01-01

    An accurate and fast pose estimation problem for the on-board camera of a mobile robot is investigated. First, the special properties of the pose of a mobile robot's on-board camera are analyzed. Second, an auxiliary rotation matrix is constructed from the on-board camera's equivalent rotation axis and is used to turn the initial essential matrix and homography matrix into a simplified form that can be decomposed through elementary mathematical operations. Finally, simulation experiments verify the algorithm's speed, accuracy and robustness. The experimental results show that, compared with traditional algorithms, the proposed algorithm achieves higher accuracy and faster computation, and is robust to disturbances of the on-board camera's equivalent rotation axis. In addition, the number of possible solutions is reduced by half, and the unique rotation angle of the mobile robot can be determined except when the 3D planar scene structure is perpendicular to the ground, which greatly simplifies pose control of the mobile robot.
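The abstract mentions building an auxiliary rotation matrix from the camera's equivalent rotation axis. The standard axis-angle construction such a step relies on is Rodrigues' formula, sketched below; this is generic geometry, not the paper's specific decomposition:

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: R = I + sin(a)*K + (1-cos(a))*K^2, K the skew matrix of the unit axis."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])   # cross-product (skew-symmetric) matrix
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Sanity check: rotating x by 90 degrees about z should give y
R = rotation_about_axis([0.0, 0.0, 1.0], np.pi / 2)
v = R @ np.array([1.0, 0.0, 0.0])
```

Pre-multiplying the essential or homography matrix by such an R aligns the known axis with a coordinate axis, which is what makes an elementary closed-form decomposition possible.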

  13. Molar heat capacity and molar excess enthalpy measurements in aqueous amine solutions

    Science.gov (United States)

    Poozesh, Saeed

    Experimental measurements of molar heat capacity and molar excess enthalpy for 1,4-dimethylpiperazine (1,4-DMPZ), 1-(2-hydroxyethyl)piperazine (1,2-HEPZ), 1-methylpiperazine (1-MPZ), 3-morpholinopropylamine (3-MOPA), and 4-(2-hydroxyethyl)morpholine (4,2-HEMO) aqueous solutions were carried out in a C80 heat-flow calorimeter over a range of temperatures from (298.15 to 353.15) K and over the entire range of mole fractions. The estimated uncertainty in the measured values of the molar heat capacity and molar excess enthalpy was ±2%. Among the five amines studied, 3-MOPA had the highest values of molar heat capacity and 1-MPZ the lowest. The molar heat capacities of the amines were dominated by the -CH2, -N, -OH, -O and -NH2 groups and increased with increasing temperature, while the contributions of the -NH and -CH3 groups decreased with increasing temperature for these cyclic amines. Molar excess heat capacities were calculated from the measured molar heat capacities and were correlated as a function of mole fraction using the Redlich-Kister equation. The molar excess enthalpy values were also correlated as a function of mole fraction using the Redlich-Kister equation, and molar enthalpies at infinite dilution were derived. Molar excess enthalpy values were modeled using the solution-theory models NRTL (Non-Random Two-Liquid) and UNIQUAC (UNIversal QUAsi Chemical) and the modified UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients - Dortmund). The modified UNIFAC was found to be the most accurate and reliable model for the representation and prediction of the molar excess enthalpy values. Among the five amines, the 1-MPZ + water system exhibited the highest molar excess enthalpy values on the negative side. This study confirmed the conclusion made by Maham et al. (71) that the -CH3 group contributed to higher molar excess enthalpies.
The negative excess enthalpies were reduced due to the contribution of

  14. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    JosephDeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students speak not only more accurately, but also more fluently. That technique is dictation.

  15. Does Excessive Pronation Cause Pain?

    DEFF Research Database (Denmark)

    Mølgaard, Carsten Møller; Olesen Gammelgaard, Christian; Nielsen, R. G.;

    2008-01-01

    Excessive pronation can be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supportive shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient, they can cause excessive pronation of the foot. The current treatment consists of antipronation shoes or insoles, most recently studied by Kulce DG, et al. (2007). So far there have been no randomized controlled studies of methods that can measure the effect of treatment with insoles. Some excessive pronation patients receive antipronation training, often when the patient is in pain, but the effect of this treatment has not been documented. Therefore the authors wanted to investigate whether it was possible to measure a change in foot posture after a given treatment.

  16. Personalized Recommendation via Suppressing Excessive Diffusion

    Directory of Open Access Journals (Sweden)

    Guilin Chen

    2017-01-01

    Full Text Available Efficient recommendation algorithms are fundamental to solving the problem of information overload in modern society. In physical dynamics, mass diffusion is a powerful tool to alleviate the long-standing problems of recommendation systems. However, popularity bias and redundant similarity have not been adequately studied in the literature; they are essentially caused by excessive diffusion and lead to similarity estimation deviation and recommendation performance degradation. In this paper, we penalize popular objects by appropriately dividing the popularity of objects and then leverage the second-order similarity to suppress excessive diffusion. Evaluation on three real benchmark datasets (MovieLens, Amazon, and RYM) by 10-fold cross-validation demonstrates that our method outperforms the mainstream baselines in accuracy, diversity, and novelty.
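The baseline this record modifies is plain mass diffusion (ProbS) on a user-object bipartite graph: resource spreads from a user's objects to their users and back. A minimal sketch of that baseline step (not the paper's popularity-penalized variant); the tiny adjacency matrix is illustrative:

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """Two-step resource spreading on a users x objects 0/1 adjacency matrix.
    Returns a recommendation score for every object, for the given user."""
    A = np.asarray(A, dtype=float)
    k_obj = np.maximum(A.sum(axis=0), 1.0)   # object degrees (avoid divide-by-zero)
    k_user = np.maximum(A.sum(axis=1), 1.0)  # user degrees
    f = A[user]                              # unit resource on the user's objects
    on_users = A @ (f / k_obj)               # objects split resource among their users
    return A.T @ (on_users / k_user)         # users split it back among their objects

A = [[1, 1, 0],       # user 0 collected objects 0 and 1
     [1, 0, 1]]       # user 1 collected objects 0 and 2
scores = mass_diffusion_scores(A, user=0)
```

Popular objects (high `k_obj`) attract disproportionate resource in this baseline, which is the excessive-diffusion effect the paper's popularity division and second-order similarity are designed to suppress.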

  17. Widespread Excess Ice in Arcadia Planitia, Mars

    CERN Document Server

    Bramson, Ali M; Putzig, Nathaniel E; Sutton, Sarah; Plaut, Jeffrey J; Brothers, T Charles; Holt, John W

    2015-01-01

    The distribution of subsurface water ice on Mars is a key constraint on past climate, while the volumetric concentration of buried ice (pore-filling versus excess) provides information about the process that led to its deposition. We investigate the subsurface of Arcadia Planitia by measuring the depth of terraces in simple impact craters and mapping a widespread subsurface reflection in radar sounding data. Assuming that the contrast in material strengths responsible for the terracing is the same dielectric interface that causes the radar reflection, we can combine these data to estimate the dielectric constant of the overlying material. We compare these results to a three-component dielectric mixing model to constrain composition. Our results indicate a widespread, decameters-thick layer that is excess water ice ~10^4 km^3 in volume. The accumulation and long-term preservation of this ice is a challenge for current Martian climate models.
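The core inference in this record — combining a radar two-way delay with an independently measured layer depth to get a dielectric constant — is simple arithmetic: ε = (c·τ / 2d)². A sketch with invented numbers (not the Arcadia Planitia measurements):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def dielectric_constant(two_way_delay_s, layer_depth_m):
    """Real dielectric constant implied by a radar two-way travel time through a
    layer whose thickness is known independently (here, from crater terracing)."""
    return (C * two_way_delay_s / (2.0 * layer_depth_m)) ** 2

# Illustrative round trip: a 50 m layer of pure water ice (eps ~ 3.15)
delay = 2.0 * 50.0 * 3.15 ** 0.5 / C   # forward model: delay such ice would produce
eps = dielectric_constant(delay, 50.0)  # recovers ~3.15
```

A recovered ε near that of pure ice, rather than of ice-cemented regolith (ε ≳ 4-5), is what points to excess rather than pore-filling ice in the record's mixing-model comparison.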

  18. Does Excessive Pronation Cause Pain?

    DEFF Research Database (Denmark)

    Olesen, Christian Gammelgaard; Nielsen, RG; Rathleff, M;

    Excessive pronation can be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supportive shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient, they can cause excessive pronation of the foot. The current treatment consists of antipronation shoes or insoles, often given when the patient is in pain, but the effect of this treatment has not been documented. Therefore the authors wanted to investigate whether it was possible to measure a change in foot posture after a given treatment.

  19. Does excessive pronation cause pain?

    DEFF Research Database (Denmark)

    Olesen, Christian Gammelgaard; Nielsen, R.G.; Rathleff, M.;

    2008-01-01

    Excessive pronation can be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supportive shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient, they can cause excessive pronation of the foot. The current treatment consists of antipronation shoes or insoles; some excessive pronation patients also receive antipronation training, often when the patient is in pain. The authors wanted to investigate whether it was possible to measure a change in foot posture after a given treatment.

  20. [Mortality attributable to excess weight in Spain].

    Science.gov (United States)

    Martín-Ramiro, José Javier; Álvarez-Martín, Elena; Gil-Prieto, Ruth

    2014-06-16

    To estimate the mortality attributable to a higher-than-optimal body mass index in the Spanish population in 2006. Excess body weight prevalence data were obtained from the 2006 National Health Survey, while data on associated mortality were extracted from the National Statistics Institute. Population attributable fractions were applied, and mortality attributable to a higher-than-optimal body mass index was calculated for people between 35 and 79 years. In 2006, among the Spanish population aged 35-79 years, 25,671 lives (16,405 men and 9,266 women) were lost due to a higher-than-optimal body mass index. Attributable mortality was 15.8% of total deaths in men and 14.8% in women; restricted to causes for which excess body weight is a risk factor, it was about 30% of mortality (31.6% in men and 28% in women). The most important individual cause was cardiovascular disease (58%), followed by cancer. The individual cause with the largest attributable share of deaths was type 2 diabetes: nearly 70% in men and 80% in women. Overweight accounted for 54.9% of attributable deaths in men and 48.6% in women. Excess body weight is a major public health problem with substantial associated mortality. Attributable deaths are a useful tool for assessing the real situation and for monitoring disease-control interventions. Copyright © 2013 Elsevier España, S.L. All rights reserved.
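The population-attributable-fraction arithmetic used in burden-of-disease studies like this one can be sketched as follows; the prevalence, relative risk, and death count are invented for illustration, not taken from the Spanish data:

```python
def paf(prevalence, relative_risk):
    """Levin's formula: PAF = p*(RR - 1) / (1 + p*(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def attributable_deaths(total_deaths, prevalence, relative_risk):
    """Deaths from a cause that would not have occurred without the exposure."""
    return total_deaths * paf(prevalence, relative_risk)

# Example: 40% exposure prevalence and RR = 2.0 give PAF = 0.4/1.4 (about 28.6%)
frac = paf(0.40, 2.0)
deaths = attributable_deaths(10_000, 0.40, 2.0)
```

In practice the calculation is stratified by age, sex, BMI category, and cause of death, and the per-stratum attributable deaths are summed, which is how totals like the 25,671 in this record are assembled.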

  1. Excess Early Mortality in Schizophrenia

    DEFF Research Database (Denmark)

    Laursen, Thomas Munk; Nordentoft, Merete; Mortensen, Preben Bo

    2014-01-01

    Schizophrenia is often referred to as one of the most severe mental disorders, primarily because of the very high mortality rates of those with the disorder. This article reviews the literature on excess early mortality in persons with schizophrenia and suggests reasons for the high mortality as ...

  2. Does Excessive Pronation Cause Pain?

    DEFF Research Database (Denmark)

    Olesen, Christian Gammelgaard; Nielsen, RG; Rathleff, M

    The current treatment consists of antipronation shoes or insoles, most recently studied by Kulce DG, et al. (2007). So far there have been no randomized controlled studies of methods that can measure the effect of treatment with insoles. Some excessive pronation patients receive antipronation training, often if the patient...

  3. Excessive masturbation after epilepsy surgery.

    Science.gov (United States)

    Ozmen, Mine; Erdogan, Ayten; Duvenci, Sirin; Ozyurt, Emin; Ozkara, Cigdem

    2004-02-01

    Sexual behavior changes, as well as depression, anxiety, and organic mood/personality disorders, have been reported in temporal lobe epilepsy (TLE) patients before and after epilepsy surgery. The authors describe a 14-year-old girl with symptoms of excessive masturbation in inappropriate places, social withdrawal, irritability, aggressive behavior, and crying spells after selective amygdalohippocampectomy for medically intractable TLE with hippocampal sclerosis. Since the family members felt extremely embarrassed, they were upset and angry with the patient, which, in turn, increased her depressive symptoms. Both her excessive masturbation behavior and depressive symptoms remitted within 2 months of psychoeducative intervention and treatment with citalopram 20 mg/day. Excessive masturbation is proposed to be related to the psychosocial changes due to seizure-free status after surgery, as well as other possible mechanisms such as Kluver-Bucy syndrome features and neurophysiologic changes associated with the cessation of epileptic discharges. This case demonstrates that psychiatric problems and sexual changes encountered after epilepsy surgery are possibly multifactorial, and that in adolescence hypersexuality may be manifested as excessive masturbation behavior.

  4. Excess mortality following hip fracture

    DEFF Research Database (Denmark)

    Abrahamsen, B; van Staa, T; Ariely, R;

    2009-01-01

    Summary This systematic literature review has shown that patients experiencing hip fracture after low-impact trauma are at considerable excess risk for death compared with nonhip fracture/community control populations. The increased mortality risk may persist for several years thereafter, highlig...

  5. Determination of Enantiomeric Excess of Glutamic Acids by Lab-made Capillary Array Electrophoresis

    Institute of Scientific and Technical Information of China (English)

    Jun WANG; Kai Ying LIU; Li WANG; Ji Ling BAI

    2006-01-01

    Simulated enantiomeric excess of glutamic acid was determined by a lab-made sixteen-channel capillary array electrophoresis with confocal fluorescent rotary scanner. The experimental results indicated that the capillary array electrophoresis method can accurately determine the enantiomeric excess of glutamic acid and can be used for high-throughput screening system for combinatorial asymmetric catalysis.
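The quantity such a screening system reports is defined by a one-line formula, ee = (major − minor)/(major + minor), where the two terms are the measured amounts (e.g. peak areas) of the two enantiomers. A minimal sketch with an invented pair of peak areas:

```python
def enantiomeric_excess(major, minor):
    """Enantiomeric excess in percent: ee = (major - minor) / (major + minor) * 100."""
    return (major - minor) / (major + minor) * 100.0

# Example: a 75:25 mixture of the two glutamic acid enantiomers gives 50% ee
ee = enantiomeric_excess(75.0, 25.0)
```

In the capillary-array setting, each of the sixteen channels yields one such major/minor pair per catalyst candidate, which is what makes the screening high-throughput.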

  6. Severe rhabdomyolysis after excessive bodybuilding.

    Science.gov (United States)

    Finsterer, J; Zuntner, G; Fuchs, M; Weinberger, A

    2007-12-01

    A 46-year-old man performed excessive physical exertion, 4-6 h per day for 5 days, in a bodybuilding studio. He had not practiced sport before this training and denied the use of any performance-aiding substances. Despite muscle aching after only one day, he continued the exercises. After the last day, he noticed tiredness and cessation of urine production. Two days after discontinuing the training, a Herpes simplex infection occurred. Because of acute renal failure, he required hemodialysis. Tendon reflexes were absent, and creatine kinase (CK) values reached 208,274 U/L (normal: <170 U/L). After 2 weeks, CK had almost normalized and, after 4 weeks, hemodialysis was discontinued. Excessive muscle training may result in severe, hemodialysis-dependent rhabdomyolysis. Triggering factors may include a low prior fitness level, viral infection, or subclinical metabolic myopathy.

  7. The Cosmic Ray Electron Excess

    Science.gov (United States)

    Chang, J.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Christl, M.; Ganel, O.; Guzik, T. G.; Isbert, J.; Kim, K. C.; Kuznetsov, E. N.; Panasyuk, M. I.; Panov, A. D.; Schmidt, W. K. H.; Seo, E. S.; Sokolskaya, N. V.; Watts, J. W.; Wefel, J. P.; Wu, J.; Zatsepin, V. I.

    2008-01-01

    This slide presentation reviews the possible sources of the apparent excess of cosmic ray electrons. The presentation reviews the Advanced Thin Ionization Calorimeter (ATIC) instrument and its various parts, explains how cosmic ray electrons are measured, and shows graphs summarizing the results of the ATIC measurements. A review of cosmic ray electron models is presented, along with the source candidates. Scenarios for the excess are reviewed: supernova remnants (SNRs), pulsar wind nebulae, or microquasars. Each of these has some problem that mitigates the argument. The last possibility discussed is dark matter. The Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) mission is to search for evidence of annihilations of dark matter particles, to search for anti-nuclei, to test cosmic-ray propagation models, and to measure electron and positron spectra. There are slides explaining the results of PAMELA and comparing them with those of the ATIC experiment. Dark matter annihilation is then reviewed for two types of dark matter candidates, neutralinos and Kaluza-Klein (KK) particles, which are explained next. Future astrophysical measurements, those from GLAST LAT, the Alpha Magnetic Spectrometer (AMS), and HEPCAT, are reviewed in light of their potential to explain the observed excess. The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) could also help by revealing whether there are extra dimensions.

  8. Diphoton Excess through Dark Mediators

    CERN Document Server

    Chen, Chien-Yi; Pospelov, Maxim; Zhong, Yi-Ming

    2016-01-01

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around the invariant mass of 750 GeV. We investigate a possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated $e^+e^-$ pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: $gg \\to S\\to A'A'\\to (e^+e^-)(e^+e^-)$ and $q\\bar q \\to Z' \\to sa\\to (e^+e^-)(e^+e^-)$, where at the first step a heavy scalar, $S$, or vector, $Z'$, resonances are produced that decay to light metastable vectors, $A'$, or (pseudo-)scalars, $s$ and $a$. Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heav...

  9. 12 CFR 925.23 - Excess stock.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Excess stock. 925.23 Section 925.23 Banks and... BANKS Stock Requirements § 925.23 Excess stock. (a) Sale of excess stock. Subject to the restriction in paragraph (b) of this section, a member may purchase excess stock as long as the purchase is approved by...

  10. 10 CFR 904.9 - Excess capacity.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess capacity. 904.9 Section 904.9 Energy DEPARTMENT OF... Marketing § 904.9 Excess capacity. (a) If the Uprating Program results in Excess Capacity, Western shall be entitled to such Excess Capacity to integrate the operation of the Boulder City Area Projects and...

  11. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One differential pressure corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The differential-pressure measurement contains many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of the temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures affecting the volume measurement are discussed and corrected by a reasonable method. A high-precision differential-pressure measurement system has been developed with a quartz-oscillation-type transducer, which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
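The dip-tube arithmetic described above can be sketched in two steps: one differential pressure across a known vertical tube separation gives the density, and the second gives the liquid level, hence the volume. The constant tank cross-section and all numeric values below are illustrative assumptions, not the accountancy-tank design:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_density(dp_density_pa, tube_separation_m):
    """Density from the differential pressure between two dip-tubes whose
    outlets sit a known vertical distance apart: rho = dP / (g * h)."""
    return dp_density_pa / (G * tube_separation_m)

def solution_volume(dp_level_pa, density_kg_m3, tank_area_m2):
    """Level from the differential pressure between a submerged tube and the
    gas space, times an assumed constant tank cross-section."""
    level_m = dp_level_pa / (density_kg_m3 * G)
    return tank_area_m2 * level_m

# Forward-modeled example: 1500 kg/m^3 solution, tubes 0.5 m apart, 1.2 m level
rho = solution_density(1500.0 * G * 0.5, 0.5)            # recovers 1500 kg/m^3
vol = solution_volume(1500.0 * G * 1.2, rho, 0.8)        # 0.8 m^2 * 1.2 m
```

The corrections the record discusses (bubble formation, back-pressure drift, tube pressure drop) all enter as biases on the two dP readings before this arithmetic is applied.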

  12. [Disability attributable to excess weight in Spain].

    Science.gov (United States)

    Martín-Ramiro, José Javier; Alvarez-Martín, Elena; Gil-Prieto, Ruth

    2014-08-19

To estimate the disability attributable to higher-than-optimal body mass index in the Spanish population in 2006. Excess body weight prevalence data were obtained from the 2006 National Health Survey (NHS), while the prevalence of associated morbidities was extracted from the 2006 NHS and from a national hospital database. Population attributable fractions were applied, and attributable disability was expressed as years lived with disability (YLD). In 2006, in the Spanish population aged 35-79 years, 791,650 YLD were lost due to higher-than-optimal body mass index (46.7% in males and 53.3% in females). Overweight (body mass index 25-29.9) accounted for 45.7% of total YLD. YLD were higher in males than in females under age 60. In the 35-39 quinquennial age group the difference for males was 16.6%, while in the 74-79 group the difference was 23.8% for women. Osteoarthritis and chronic back pain accounted for 60% of YLD, while hypertensive disease and type 2 diabetes mellitus were responsible for 37%. Excess body weight is a health risk related to the development of various diseases, with an important associated disability burden and social and economic cost. YLD analysis is a useful monitoring tool for disease control interventions. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  13. Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

    Directory of Open Access Journals (Sweden)

    Manuel G Scotto

    2003-12-01

    Full Text Available This essay seeks to clarify several concepts routinely used in public health research that are frequently misinterpreted, among them point estimation, confidence intervals, and hypothesis tests. By drawing a parallel between these three concepts, we can see their most important differences in interpretation, both from the classical standpoint and from the Bayesian perspective.

  14. Evaluation of Excess Thermodynamic Parameters in a Binary Liquid Mixture (Cyclohexane + O-Xylene) at Different Temperatures

    Directory of Open Access Journals (Sweden)

    K. Narendra

    2010-01-01

    Full Text Available The ultrasonic velocity, density, and viscosity in the binary liquid mixture cyclohexane + o-xylene have been determined at temperatures from 303.15 to 318.15 K over the whole composition range. The data have been used to estimate the excess adiabatic compressibility (βE), excess volume (VE), excess intermolecular free length (LfE), excess internal pressure (πE), and excess enthalpy (HE) at these temperatures. The excess values have been found to be useful in estimating the strength of the interactions in the liquid mixtures. Analysis of these parameters indicates that there are weak interactions among the components of the binary mixtures.
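The excess quantities named above are deviations of a measured mixture property from its ideal-mixing value. A minimal illustration for the compressibility and molar volume, using the Newton-Laplace relation β = 1/(ρu²) and simple mole-fraction averaging for the ideal term (real studies often use volume-fraction or other mixing rules); all function names and numbers are illustrative, not measured data.

```python
# Sketch: excess adiabatic compressibility and excess molar volume for a
# binary mixture. Mole-fraction averaging of the ideal term is an assumption.
def adiabatic_compressibility(u, rho):
    # Newton-Laplace: beta = 1 / (u^2 * rho); u in m/s, rho in kg/m^3
    return 1.0 / (u * u * rho)

def excess_compressibility(x1, u_mix, rho_mix, u1, rho1, u2, rho2):
    beta_mix = adiabatic_compressibility(u_mix, rho_mix)
    beta_id = x1 * adiabatic_compressibility(u1, rho1) \
        + (1.0 - x1) * adiabatic_compressibility(u2, rho2)
    return beta_mix - beta_id

def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    # VE = V_mix - (x1*V1 + x2*V2); molar masses M in kg/mol
    x2 = 1.0 - x1
    return (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2
```

A sanity check on the sign and magnitude of these deviations is what lets such studies classify the component interactions as weak or strong.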

  15. Excessive infant crying: The impact of varying definitions

    NARCIS (Netherlands)

    Reijneveld, S.A.; Brugman, E.; Hirasing, R.A.

    2001-01-01

    Objective. To assess the impact of varying definitions of excessive crying and infantile colic on prevalence estimates and to assess to what extent these definitions comprise the same children. Methods. Parents of 3345 infants aged 1, 3, and 6 months (response: 96.5%) were interviewed on the crying

  17. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: evaluating the surface energy budget in a regional climate model with automatic weather station observations

    Science.gov (United States)

    Steffensen Schmidt, Louise; Aðalgeirsdóttir, Guðfinna; Guðmundsson, Sverrir; Langen, Peter L.; Pálsson, Finnur; Mottram, Ruth; Gascoin, Simon; Björnsson, Helgi

    2017-07-01

    A simulation of the surface climate of Vatnajökull ice cap, Iceland, carried out with the regional climate model HIRHAM5 for the period 1980-2014, is used to estimate the evolution of the glacier surface mass balance (SMB). This simulation uses a new snow albedo parameterization that allows albedo to exponentially decay with time and is surface temperature dependent. The albedo scheme utilizes a new background map of the ice albedo created from observed MODIS data. The simulation is evaluated against observed daily values of weather parameters from five automatic weather stations (AWSs) from the period 2001-2014, as well as in situ SMB measurements from the period 1995-2014. The model agrees well with observations at the AWS sites, albeit with a general underestimation of the net radiation. This is due to an underestimation of the incoming radiation and a general overestimation of the albedo. The average modelled albedo is overestimated in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and to not taking into account the surface darkening from dirt and volcanic ash deposition during dust storms and volcanic eruptions. A comparison with the specific summer, winter, and net mass balance for the whole of Vatnajökull (1995-2014) shows a good overall fit during the summer, with a small mass balance underestimation of 0.04 m w.e. on average, whereas the winter mass balance is overestimated by 0.5 m w.e. on average due to excessive precipitation over the highest areas of the ice cap. A simple correction of the accumulation at the highest points of the glacier reduces this to 0.15 m w.e. Here, we use HIRHAM5 to simulate the evolution of the SMB of Vatnajökull for the period 1981-2014 and show that the model provides a reasonable representation of the SMB for this period. However, a major source of uncertainty in the representation of the SMB is the representation of the albedo, and processes currently not accounted for in RCMs.

  19. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
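The linear-fitting step for a leading exponent can be illustrated on synthetic data: near criticality the susceptibility behaves as χ ~ (βc − β)^(−γ), so a straight-line fit of ln χ against ln(βc − β) has slope −γ. This is the generic technique only, not the paper's hierarchical-model calculation; the data below are manufactured with a known exponent.

```python
import numpy as np

def fit_gamma(beta, chi, beta_c):
    # chi ~ (beta_c - beta)^(-gamma)  =>  ln chi = -gamma * ln(beta_c - beta) + c
    slope, _intercept = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
    return -slope

# synthetic susceptibility with a known exponent, for illustration only
beta_c = 1.0
beta = np.linspace(0.90, 0.99, 20)
chi = (beta_c - beta) ** -1.2999
```

On real data, subleading corrections (the Δ term) bend this line, which is why the paper also extracts exponents independently from the linearized renormalization group.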

  20. Armodafinil for excessive daytime sleepiness.

    Science.gov (United States)

    Nishino, Seiji; Okuro, Masashi

    2008-06-01

    Armodafinil is the (R)-enantiomer of the wake-promoting compound modafinil (racemic), with a considerably longer half-life of 10-15 hours. Armodafinil (developed by Cephalon, Frazer, PA, USA) was approved in June 2007 for the treatment of excessive sleepiness associated with narcolepsy, obstructive sleep apnea syndrome and shift work disorder, and the indications are the same as those for modafinil. Like modafinil, the mechanisms of action of armodafinil are not fully characterized and are under debate. Clinical trials in these sleep disorders demonstrated an enhanced efficacy for wake promotion (wakefulness sustained for a longer time period using doses lower than those of modafinil). The safety profile is consistent with that of modafinil, and armodafinil is well tolerated by patients. Like modafinil, armodafinil is classified as a non-narcotic Schedule IV compound. Many patients with excessive sleepiness may prefer the longer duration of effect and may have better compliance (with low doses) with armodafinil. The commercial challenge to armodafinil may come from generic modafinil, which may become available in 2012, as well as from classical amphetamine and amphetamine-like compounds (for the treatment of narcolepsy).

  1. Excess electron transport in cryoobjects

    CERN Document Server

    Eshchenko, D G; Brewer, J H; Cottrell, S P; Cox, S F J

    2003-01-01

    Experimental results on excess electron transport in solid and liquid phases of Ne, Ar, and a solid N₂-Ar mixture are presented and compared with those for He. The muon spin relaxation technique in frequently switching electric fields was used to study the phenomenon of delayed muonium formation: excess electrons liberated in the μ⁺ ionization track converge upon the positive muons and form Mu (μ⁺e⁻) atoms. This process is shown to be crucially dependent upon the electron's interaction with its environment (i.e., whether it occupies the conduction band or becomes localized in a bubble of tens of angstroms in radius) and upon its mobility in these states. The characteristic lengths involved are 10⁻⁶-10⁻⁴ cm, and the characteristic times range from nanoseconds to tens of microseconds. Such a microscopic length scale sometimes enables the electron to spend its entire free lifetime in a state which may not be detected by conventional macroscopic techniques. The electron transport proc...

  2. Improved manometric setup for the accurate determination of supercritical carbon dioxide sorption

    NARCIS (Netherlands)

    Van Hemert, P.; Bruining, H.; Rudolph, E.S.J.; Wolf, K.H.A.A.; Maas, J.G.

    2009-01-01

    An improved version of the manometric apparatus and its procedures for measuring excess sorption of supercritical carbon dioxide are presented in detail with a comprehensive error analysis. An improved manometric apparatus is necessary for accurate excess sorption measurements with supercritical carbon dioxide.

  4. Excess water dynamics in hydrotalcite: QENS study

    Indian Academy of Sciences (India)

    S Mitra; A Pramanik; D Chakrabarty; R Mukhopadhyay

    2004-08-01

    Results of quasi-elastic neutron scattering (QENS) measurements on the dynamics of excess water in hydrotalcite samples with varying excess water content are reported. The translational motion of excess water is best described by a random translational jump diffusion model. The observed increase in translational diffusivity with increasing amount of excess water is attributed to the change in binding of the water molecules to the host layer.

  5. 10 CFR 904.10 - Excess energy.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be available...

  6. 7 CFR 985.56 - Excess oil.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified...

  7. A Discussion on Mean Excess Plots

    CERN Document Server

    Ghosh, Souvik

    2009-01-01

    A widely used tool in the study of risk, insurance and extreme values is the mean excess plot. One use is for validating a Generalized Pareto model for the excess distribution. This paper investigates some theoretical and practical aspects of the use of the mean excess plot.
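An empirical mean excess plot evaluates, for each threshold u, the quantity e(u) = mean of (X − u) over the observations exceeding u; an approximately linear plot above some threshold supports a generalized Pareto model for the excesses. A minimal sketch (function name and toy data are illustrative):

```python
import numpy as np

def mean_excess(data, thresholds):
    # e(u) = average of (x - u) over observations with x > u
    data = np.asarray(data, dtype=float)
    out = []
    for u in thresholds:
        exceed = data[data > u]
        out.append((exceed - u).mean() if exceed.size else np.nan)
    return np.array(out)

# toy check: a uniform grid of "observations" stands in for data
sample = np.arange(1.0, 101.0)
e = mean_excess(sample, [0.0, 50.0])
```

For exponentially distributed data e(u) is flat, while a positive or negative slope indicates a heavier or lighter tail, which is what the plot is read for in risk and insurance applications.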

  8. Accurate Stellar Parameters for Exoplanet Host Stars

    Science.gov (United States)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between a planet and its stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity, which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  9. NNLOPS accurate associated HW production

    CERN Document Server

    Astill, William; Re, Emanuele; Zanderighi, Giulia

    2016-01-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  10. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    Science.gov (United States)

    Dhabal, Debdas; Nguyen, Andrew Huy; Singh, Murari; Khatua, Prabir; Molinero, Valeria; Bandyopadhyay, Sanjoy; Chakravarty, Charusita

    2015-10-01

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by S_trip, is also studied. S_trip is a
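The pair-correlation term that such multiparticle expansions start from can be written down directly: the two-body excess entropy per particle is s₂/k_B = −2πρ ∫ [g(r) ln g(r) − g(r) + 1] r² dr. A minimal numerical version follows; the function name and the test profiles (an ideal-gas g ≡ 1 and a hard-core step) are illustrative, not data from the paper.

```python
import numpy as np

def pair_excess_entropy(r, g, rho):
    # s2/kB per particle = -2*pi*rho * Integral[ g ln g - g + 1 ] r^2 dr
    r = np.asarray(r, dtype=float)
    g = np.asarray(g, dtype=float)
    # g ln g -> 0 as g -> 0, so the g = 0 region contributes (0 - 0 + 1)
    safe_g = np.where(g > 0.0, g, 1.0)
    integrand = np.where(g > 0.0, g * np.log(safe_g) - g + 1.0, 1.0) * r * r
    # trapezoidal rule written out explicitly
    return -2.0 * np.pi * rho * float(
        np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

r = np.linspace(0.0, 3.0, 301)
ideal = pair_excess_entropy(r, np.ones_like(r), 0.8)               # ~0 for g == 1
hard_core = pair_excess_entropy(r, (r >= 1.0).astype(float), 0.8)  # negative
```

Any structuring of the fluid (g deviating from 1) makes s₂ negative, which is why the excess entropy decreases on cooling in both the simple and the tetrahedral liquids.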

  11. Pharmacological treatment of aldosterone excess

    NARCIS (Netherlands)

    Deinum, J.; Riksen, N.P.; Lenders, J.W.M.

    2015-01-01

    Primary aldosteronism, caused by autonomous secretion of aldosterone by the adrenals, is estimated to account for at least 5% of hypertension cases. Hypertension explains the considerable cardiovascular morbidity caused by aldosteronism only partly, calling for specific anti-aldosterone drugs. The

  12. Phytoextraction of excess soil phosphorus

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Nilesh C. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Starnes, Daniel L. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Sahi, Shivendra V. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States)]. E-mail: shiv.sahi@wku.edu

    2007-03-15

    In the search for a suitable plant to be used in P phytoremediation, several species belonging to legume, vegetable and herb crops were grown in P-enriched soils, and screened for P accumulation potentials. A large variation in P concentrations of different plant species was observed. Some vegetable species such as cucumber (Cucumis sativus) and yellow squash (Cucurbita pepo var. melopepo) were identified as potential P accumulators with >1% (dry weight) P in their shoots. These plants also displayed a satisfactory biomass accumulation while growing on a high concentration of soil P. The elevated activities of phosphomonoesterase and phytase were observed when plants were grown in P-enriched soils, this possibly contributing to high P acquisition in these species. Sunflower plants also demonstrated an increased shoot P accumulation. This study shows that the phytoextraction of phosphorus can be effective using appropriate plant species. - Crop plants such as cucumber, squash and sunflower accumulate phosphorus and thus can be used in the phytoextraction of excess phosphorus from soils.

  13. Diphoton Excess and Running Couplings

    CERN Document Server

    Bae, Kyu Jung; Hamaguchi, Koichi; Moroi, Takeo

    2016-01-01

    The recently observed diphoton excess at the LHC may suggest the existence of a singlet (pseudo-)scalar particle with a mass of 750 GeV which couples to gluons and photons. Assuming that the couplings to gluons and photons originate from loops of fermions and/or scalars charged under the Standard Model gauge groups, we show that there is a model-independent upper bound on the cross section $\sigma(pp\to S\to \gamma\gamma)$ as a function of the cutoff scale $\Lambda$ and the masses of the fermions and scalars in the loop. Such a bound comes from the fact that the contribution of each particle to the diphoton event amplitude is proportional to its contribution to the one-loop $\beta$ functions of the gauge couplings. We also investigate the perturbativity of running Yukawa couplings in models with fermion loops, and show the upper bounds on $\sigma(pp\to S\to \gamma\gamma)$ for explicit models.

  14. 41 CFR 102-36.305 - May we abandon or destroy excess personal property without reporting it to GSA?

    Science.gov (United States)

    2010-07-01

    ... written determination that the property has no commercial value or the estimated cost of its continued... destroy excess personal property without reporting it to GSA? 102-36.305 Section 102-36.305 Public... MANAGEMENT REGULATION PERSONAL PROPERTY 36-DISPOSITION OF EXCESS PERSONAL PROPERTY Disposition of Excess...

  15. Sparse component separation for accurate CMB map estimation

    CERN Document Server

    Bobin, J; Sureau, F; Basak, S

    2012-01-01

    The Cosmic Microwave Background (CMB) is of premier importance for cosmologists studying the birth of our universe. Unfortunately, most CMB experiments such as COBE, WMAP or Planck do not provide a direct measure of the cosmological signal; the CMB is mixed up with galactic foregrounds and point sources. For the sake of scientific exploitation, measuring the CMB requires extracting several different astrophysical components (CMB, Sunyaev-Zel'dovich clusters, galactic dust) from multi-wavelength observations. Mathematically speaking, the problem of disentangling the CMB map from the galactic foregrounds amounts to a component or source separation problem. In the field of CMB studies, a very large range of source separation methods have been applied, which all differ from each other in the way they model the data and the criteria they rely on to separate components. Two main difficulties are i) the instrument's beam varies across frequencies and ii) the emission laws of most astrophysical components vary a...

  16. 31 CFR 205.24 - How are accurate estimates maintained?

    Science.gov (United States)

    2010-07-01

    ...) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE RULES AND PROCEDURES FOR EFFICIENT FEDERAL-STATE FUNDS TRANSFERS Rules Applicable to Federal Assistance Programs Included in a... justify in writing that it is not feasible to use a more efficient basis for determining the amount...

  17. Accurate estimation of the elastic properties of porous fibers

    Energy Technology Data Exchange (ETDEWEB)

    Thissell, W.R.; Zurek, A.K.; Addessio, F.

    1997-05-01

    A procedure is described to calculate the elastic properties of polycrystalline anisotropic fibers with cylindrical symmetry and porosity. It uses a preferred orientation model (the Tome ellipsoidal self-consistent model) to determine anisotropic elastic properties for the case of highly oriented carbon fibers. The model predictions, corrected for porosity, are compared to back-calculated fiber elastic properties of an IM6/3501-6 unidirectional composite whose elastic properties have been determined via resonant ultrasound spectroscopy. The Halpin-Tsai equations used to back-calculate fiber elastic properties are found to be inappropriate for anisotropic composite constituents. Modifications are proposed to the Halpin-Tsai equations to expand their applicability to anisotropic reinforcement materials.

  18. Androgen excess: Investigations and management.

    Science.gov (United States)

    Lizneva, Daria; Gavrilova-Jordan, Larisa; Walker, Walidah; Azziz, Ricardo

    2016-11-01

    Androgen excess (AE) is a key feature of polycystic ovary syndrome (PCOS) and results in, or contributes to, the clinical phenotype of these patients. Although AE will contribute to the ovulatory and menstrual dysfunction of these patients, the most recognizable signs of AE include hirsutism, acne, and androgenic alopecia or female pattern hair loss (FPHL). Evaluation includes not only scoring facial and body terminal hair growth using the modified Ferriman-Gallwey method but also recording and possibly scoring acne and alopecia. Moreover, assessment of biochemical hyperandrogenism is necessary, particularly in patients with unclear or absent hirsutism, and will include assessing total and free testosterone (T), and possibly dehydroepiandrosterone sulfate (DHEAS) and androstenedione, although the latter contribute only to a limited extent to the diagnosis. Assessment of T requires use of the highest quality assays available, generally radioimmunoassays with extraction and chromatography, or mass spectrometry preceded by liquid or gas chromatography. Management of clinical hyperandrogenism involves primarily either androgen suppression, with a hormonal combination contraceptive, or androgen blockade, as with an androgen receptor blocker or a 5α-reductase inhibitor, or a combination of the two. Medical treatment should be combined with cosmetic treatment including topical eflornithine hydrochloride and short-term (shaving, chemical depilation, plucking, threading, waxing, and bleaching) and long-term (electrolysis, laser therapy, and intense pulse light therapy) cosmetic treatments. Generally, acne responds to therapy relatively rapidly, whereas hirsutism is slower to respond, with improvements observed as early as 3 months, but routinely only after 6 or 8 months of therapy. Finally, FPHL is the slowest to respond to therapy, if it will at all, and it may take 12 to 18 months of therapy for an observable response.

  19. High Foreign Exchange Reserves Fuel Excess Liquidity

    Institute of Scientific and Technical Information of China (English)

    唐双宁

    2008-01-01

    This article views China's excess liquidity problem in the global context. It suggests resorting to market mechanisms, cooperation among all parties involved, and liquidity diversion in order to tackle the problem of excessive liquidity. It also points out that the top priority is to solve the major problems, such as the current account surplus, the sources of excessive liquidity, the shortage of capital in rural areas, and the causes of the capital distribution imbalance.

  20. Factors associated with excessive polypharmacy in older people

    OpenAIRE

    Walckiers, Denise; Van Der Heyden, Johan; Tafforeau, Jean

    2015-01-01

    Background: Older people are a growing population. They live longer, but often have multiple chronic diseases. As a consequence, they take many different kinds of medicines, while their vulnerability to pharmaceutical products is increased. The objective of this study is to describe the medicine utilization pattern in people aged 65 years and older in Belgium, and to estimate the prevalence and the determinants of excessive polypharmacy. Methods: Data were used from the Belgian Health Inte...

  1. Initial report on characterization of excess highly enriched uranium

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    DOE's Office of Fissile Materials Disposition assigned to this Y-12 division the task of preparing a report on the 174.4 metric tons of excess highly enriched uranium. Characterization included identification by category, gathering existing data (assay), defining the likely processing steps needed to prepare material for transfer to a blending site, and developing a range of preliminary cost estimates for those steps. The focus is on making commercial reactor fuel as the final disposition path.

  2. [Excess mortality associated with influenza in Spain in winter 2012].

    Science.gov (United States)

    León-Gómez, Inmaculada; Delgado-Sanz, Concepción; Jiménez-Jorge, Silvia; Flores, Víctor; Simón, Fernando; Gómez-Barroso, Diana; Larrauri, Amparo; de Mateo Ontañón, Salvador

    2015-01-01

    An excess of mortality was detected in Spain in February and March 2012 by the Spanish daily mortality surveillance system and the «European monitoring of excess mortality for public health action» program. The objective of this article was to determine whether this excess could be attributed to influenza in this period. Excess mortality from all causes from 2006 to 2012 was studied using time series in the Spanish daily mortality surveillance system, and Poisson regression in the European mortality surveillance system, as well as the FluMOMO model, which estimates the mortality attributable to influenza. Excess mortality due to influenza and pneumonia attributable to influenza was studied by a modification of the Serfling model. To detect the periods of excess, we compared observed and expected mortality. In February and March 2012, both the Spanish daily mortality surveillance system and the European mortality surveillance system detected a mortality excess of 8,110 and 10,872 deaths (mortality ratio (MR): 1.22 (95% CI: 1.21-1.23) and 1.32 (95% CI: 1.29-1.31), respectively). In the 2011-12 season, the FluMOMO model identified the maximum percentage (97%) of deaths attributable to influenza in people older than 64 years with respect to the total mortality associated with influenza (13,822 deaths). The rate of excess mortality due to influenza and pneumonia and respiratory causes in people older than 64 years, obtained by the Serfling model, also reached a peak in the 2011-2012 season: 18.07 and 77.20 deaths per 100,000 inhabitants, respectively. A significant increase in mortality in elderly people in Spain was detected by the Spanish daily mortality surveillance system and by the European mortality surveillance system in the winter of 2012, coinciding with a late influenza season, with a predominance of the A(H3N2) virus, and a cold wave in Spain. This study suggests that influenza could have been one of the main factors contributing to the mortality excess.
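A Serfling-type baseline like the one referred to above can be sketched as a least-squares fit of a linear trend plus annual harmonic terms to weekly deaths, with excess mortality read off as observed minus expected during epidemic weeks. This is a generic illustration on made-up data, not the study's model or figures.

```python
import numpy as np

def serfling_baseline(week, deaths):
    # least-squares fit of: a + b*t + c*sin(2*pi*t/52) + d*cos(2*pi*t/52)
    t = np.asarray(week, dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2.0 * np.pi * t / 52.0),
                         np.cos(2.0 * np.pi * t / 52.0)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(deaths, dtype=float), rcond=None)
    return X @ coef

def excess_deaths(observed, baseline):
    # positive departures from the expected seasonal baseline
    return float(np.maximum(np.asarray(observed, dtype=float) - baseline, 0.0).sum())

weeks = np.arange(156)                                    # three years, weekly
seasonal = 100.0 + 0.1 * weeks + 10.0 * np.sin(2.0 * np.pi * weeks / 52.0)
epidemic = seasonal.copy()
epidemic[10:14] += 50.0                                   # a four-week epidemic spike
base = serfling_baseline(weeks, seasonal)                 # fit on non-epidemic series
```

In practice the baseline is fitted after excluding known epidemic periods, exactly so that the influenza weeks do not inflate the expected mortality they are compared against.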

  3. Damages and Expected Deaths Due to Excess NOx Emissions from 2009 to 2015 Volkswagen Diesel Vehicles.

    Science.gov (United States)

    Holland, Stephen P; Mansur, Erin T; Muller, Nicholas Z; Yates, Andrew J

    2016-02-01

    We estimate the damages and expected deaths in the United States due to excess emissions of NOx from 2009 to 2015 Volkswagen diesel vehicles. Using data on vehicle registrations and a model of pollution transport and valuation, we estimate excess damages of $430 million and 46 excess expected deaths. Accounting for uncertainty about emissions gives a range for damages from $350 million to $500 million, and a range for excess expected deaths from 40 to 52. Our estimates incorporate significant local heterogeneity: for example, Minneapolis has the highest damages despite having fewer noncompliant vehicles than 13 other cities. Our estimated damages greatly exceed possible benefits from reduced CO2 emissions due to increased fuel economy.

  4. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots" for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
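
The central computation described above, principal components analysis of an estimated correlation matrix, reduces to an eigendecomposition of a symmetric matrix. A minimal sketch with a toy 3×3 matrix standing in for the maximum likelihood estimate (real structural ensembles give one row/column per atomic coordinate; all numbers here are illustrative):

```python
import numpy as np

# Toy correlation matrix standing in for the ML estimate.
corr = np.array([
    [1.0, 0.8, 0.2],
    [0.8, 1.0, 0.3],
    [0.2, 0.3, 1.0],
])

# The matrix is symmetric, so eigh returns real eigenvalues
# (in ascending order) and orthonormal eigenvectors.
evals, evecs = np.linalg.eigh(corr)

# Reverse so the first principal component has the largest eigenvalue.
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Fraction of the total variance captured by the dominant mode.
pc1_fraction = evals[0] / evals.sum()
```

The columns of `evecs` are the correlation modes that a "PCA plot" would color-code onto the structure; `pc1_fraction` indicates how dominant the leading mode is.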

  5. The excessively crying infant : etiology and treatment

    NARCIS (Netherlands)

    Akhnikh, S.; Engelberts, A.C.; Sleuwen, B.E. van; Hoir, M.P. L’; Benninga, M.A.

    2014-01-01

    Excessive crying, often described as infantile colic, is the cause of 10% to 20% of all early pediatrician visits of infants aged 2 weeks to 3 months. Although usually benign and self-limiting, excessive crying is associated with parental exhaustion and stress. However, an underlying organic cause is...

  6. Excessive libido in a woman with rabies.

    OpenAIRE

    Dutta, J. K.

    1996-01-01

    Rabies is endemic in India in both wildlife and humans. Human rabies kills 25,000 to 30,000 persons every year. Several types of sexual manifestations, including excessive libido, may develop in cases of human rabies. A laboratory-proven case of rabies in an Indian woman who manifested excessive libido is presented. She later developed hydrophobia and died.

  7. Bladder calculus presenting as excessive masturbation.

    Science.gov (United States)

    De Alwis, A C D; Senaratne, A M R D; De Silva, S M P D; Rodrigo, V S D

    2006-09-01

    Masturbation in childhood is a normal behaviour which most commonly begins at 2 months of age, and peaks at 4 years and in adolescence. However, excessive masturbation causes anxiety in parents. We describe a boy with a bladder calculus presenting as excessive masturbation.

  8. Triboson interpretations of the ATLAS diboson excess

    CERN Document Server

    Aguilar-Saavedra, J A

    2015-01-01

    The ATLAS excess in fat jet pair production is kinematically compatible with the decay of a heavy resonance into two gauge bosons plus an extra particle. This possibility would explain the absence of such a localised excess in the analogous CMS analysis of fat dijet final states, as well as the negative results of diboson resonance searches in the semi-leptonic decay modes.

  9. Cardiovascular investigations of airline pilots with excessive cardiovascular risk.

    Science.gov (United States)

    Wirawan, I Made Ady; Aldington, Sarah; Griffiths, Robin F; Ellis, Chris J; Larsen, Peter D

    2013-06-01

    This study examined the prevalence of airline pilots who have an excessive cardiovascular disease (CVD) risk score according to the New Zealand Guideline Group (NZGG) Framingham-based Risk Chart and describes their cardiovascular risk assessment and investigations. A cross-sectional study was performed among 856 pilots employed in an Oceania-based airline. Pilots with elevated CVD risk who had been previously evaluated at various times over the previous 19 yr were reviewed retrospectively from the airline's medical records, and the subsequent cardiovascular investigations were then described. There were 30 (3.5%) pilots who were found to have a 5-yr CVD risk score of 10-15% or higher. Of the 29 pilots who had complete cardiac investigations data, 26 pilots underwent exercise electrocardiography (ECG), 2 pilots progressed directly to coronary angiograms and 1 pilot with an abnormal echocardiogram was not examined further. Of the 26 pilots, 7 had positive or borderline exercise tests, all of whom subsequently had angiograms. One patient with a negative exercise test also had a coronary angiogram. Of the 9 patients who had coronary angiograms as a consequence of screening, 5 had significant disease that required treatment and 4 had either trivial disease or normal coronary arteries. The current approach to investigating excessive cardiovascular risk in pilots relies heavily on exercise electrocardiograms as a diagnostic test, and may not be optimal either to detect disease or to protect pilots from unnecessary invasive procedures. A more comprehensive and accurate cardiac investigation algorithm to assess excessive CVD risk in pilots is required.

  10. Efficient and accurate fragmentation methods.

    Science.gov (United States)

    Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S

    2014-09-16

    Conspectus Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum

  11. Excessive crying in infants with regulatory disorders.

    Science.gov (United States)

    Maldonado-Duran, M; Sauceda-Garcia, J M

    1996-01-01

    The authors point out a correlation between regulatory disorders in infants and the problem of excessive crying. The literature describes other behavioral problems involving excessive crying in very young children, but with little emphasis on this association. The recognition and diagnosis of regulatory disorders in infants who cry excessively can help practitioners design appropriate treatment interventions. Understanding these conditions can also help parents tailor their caretaking style, so that they provide appropriate soothing and stimulation to their child. In so doing, they will be better able to develop and preserve a satisfactory parent-child relationship, as well as to maintain their own sense of competence and self-esteem as parents.

  12. Same-Sign Dilepton Excesses and Light Top Squarks

    CERN Document Server

    Huang, Peisi; Low, Ian; Wagner, Carlos E M

    2015-01-01

    Run 1 data of the Large Hadron Collider (LHC) contain an excess of events in the same-sign dilepton channel with b-jets and missing transverse energy (MET), observed by five separate analyses from the ATLAS and CMS collaborations. We show that these events could be explained by direct production of top squarks (stops) in supersymmetry. In particular, a right-handed stop with a mass of 550 GeV decaying into 2 t quarks, 2 W bosons, and MET could fit the observed excess without being constrained by other direct search limits from Run 1. We propose kinematic cuts at 13 TeV to enhance the stop signal, and estimate that stops could be discovered with 40 inverse fb of integrated luminosity at Run 2 of the LHC, when considering only the statistical uncertainty.

  13. Genetics Home Reference: aromatase excess syndrome

    Science.gov (United States)

    ... Sources for This Page Fukami M, Shozu M, Ogata T. Molecular bases and phenotypic determinants of aromatase ... T, Nishigaki T, Yokoya S, Binder G, Horikawa R, Ogata T. Aromatase excess syndrome: identification of cryptic duplications ...

  14. Controlling police (excessive) force: The American case

    Directory of Open Access Journals (Sweden)

    Zakir Gül

    2013-09-01

    Full Text Available This article addresses the issue of police abuse of power, particularly police use of excessive force. Since the misuse of force by police is considered a problem, some entity must discover a way to control and prevent the illegal use of coercive power. Unlike most of the previous studies on the use of excessive force, this study uses a path analysis. However, not all the findings are consistent with the prior studies and hypotheses. In general, findings indicate that training may be a useful tool in terms of decreasing the use of excessive force, thereby reducing civilians' injuries and citizens' complaints. The results show that ethics training in the academy is significantly related to the use of excessive force. Further, it was found that community-oriented policing training in the academy was associated with citizens' complaints. National (secondary) data, collected from law enforcement agencies in the United States, are used to explore the research questions.

  15. Romanian welfare state between excess and failure

    Directory of Open Access Journals (Sweden)

    Cristina Ciuraru-Andrica

    2012-12-01

    Full Text Available Timely or not, our topic can bring back to life some prolific, sometimes diametrically opposed, discussions. We address social assistance, where it is still uncertain whether, once the excess has been unleashed, failure will inevitably follow, or whether there is a "Salvation Ark". However, the difference between the excess and the failure of the welfare state is almost intangible, the reason for its potential failure being, in fact, the abuses committed before the onset of the depression.

  16. Phospholipids as Biomarkers for Excessive Alcohol Use

    Science.gov (United States)

    2013-10-01

    Phospholipids as Biomarkers for Excessive Alcohol Use. Award number: W81XWH-12-1-0497.

  17. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  18. Cancers attributable to excess body weight in Canada in 2010

    Directory of Open Access Journals (Sweden)

    Dianne Zakaria

    2017-07-01

    Full Text Available Introduction: Excess body weight (body mass index [BMI] ≥ 25.00 kg/m2) is an established risk factor for diabetes, hypertension and cardiovascular disease, but its relationship to cancer is lesser-known. This study used population attributable fractions (PAFs) to estimate the cancer burden attributable to excess body weight in Canadian adults (aged 25+ years) in 2010. Methods: We estimated PAFs using relative risk (RR) estimates from the World Cancer Research Fund International Continuous Update Project, BMI-based estimates of overweight (25.00 kg/m2–29.99 kg/m2) and obesity (30.00+ kg/m2) from the 2000–2001 Canadian Community Health Survey, and cancer case counts from the Canadian Cancer Registry. PAFs were based on BMI corrected for the bias in self-reported height and weight. Results: In Canada in 2010, an estimated 9645 cancer cases were attributable to excess body weight, representing 5.7% of all cancer cases (males 4.9%, females 6.5%). When limiting the analysis to types of cancer associated with high BMI, the PAF increased to 14.9% (males 17.5%, females 13.3%). Types of cancer with the highest PAFs were esophageal adenocarcinoma (42.2%), kidney (25.4%), gastric cardia (20.7%), liver (20.5%), colon (20.5%) and gallbladder (20.2%) for males, and esophageal adenocarcinoma (36.1%), uterus (35.2%), gallbladder (23.7%) and kidney (23.0%) for females. Types of cancer with the greatest number of attributable cases were colon (1445), kidney (780) and advanced prostate (515) for males, and uterus (1825), postmenopausal breast (1765) and colon (675) for females. Irrespective of sex or type of cancer, PAFs were highest in the Prairies (except Alberta) and the Atlantic region, and lowest in British Columbia and Quebec. Conclusion: The cancer burden attributable to excess body weight is substantial and will continue to rise in the near future because of the rising prevalence of overweight and obesity in Canada.
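
The population attributable fraction used in studies like this one is commonly computed with Levin's formula, summed over exposure categories (here, overweight and obese). A minimal sketch with illustrative prevalences and relative risks, not the Canadian estimates:

```python
def paf(prevalence_rr_pairs):
    """Population attributable fraction via Levin's formula:
    PAF = sum(p_i * (RR_i - 1)) / (1 + sum(p_i * (RR_i - 1))),
    summed over exposure categories.

    prevalence_rr_pairs: iterable of (prevalence, relative_risk).
    """
    s = sum(p * (rr - 1.0) for p, rr in prevalence_rr_pairs)
    return s / (1.0 + s)

# Illustrative numbers only: 35% overweight with RR 1.3,
# 25% obese with RR 1.8 for some cancer type.
fraction = paf([(0.35, 1.3), (0.25, 1.8)])

# Attributable cases out of a hypothetical 10,000 incident cases:
attributable_cases = fraction * 10000
```

Multiplying the PAF by registry case counts, as done per cancer type above, yields the attributable case counts.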

  19. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  20. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  1. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...... to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  2. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  4. Towards an accurate bioimpedance identification

    Science.gov (United States)

    Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.

    2013-04-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal the importance of which system identification framework should be used.
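
The final fitting step mentioned above, an unweighted complex nonlinear least squares fit of a Cole impedance model, can be sketched by stacking real and imaginary residuals so a real-valued solver can handle the complex spectrum. The parameter values and frequency grid below are synthetic placeholders, not measured tissue data:

```python
import numpy as np
from scipy.optimize import least_squares

def cole(freq, r_inf, r0, tau, alpha):
    """Single-dispersion Cole model: Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha)."""
    w = 2 * np.pi * freq
    return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** alpha)

def residuals(params, freq, z_meas):
    # Stack real and imaginary parts so a real-valued least-squares
    # solver can fit the complex spectrum (unweighted CNLS).
    d = cole(freq, *params) - z_meas
    return np.concatenate([d.real, d.imag])

# Synthetic spectrum generated from known parameters.
freq = np.logspace(2, 6, 50)          # 100 Hz .. 1 MHz
true = (50.0, 500.0, 1e-4, 0.8)       # R_inf, R0, tau, alpha
z_meas = cole(freq, *true)

fit = least_squares(residuals, x0=(40.0, 400.0, 5e-5, 0.7),
                    args=(freq, z_meas), x_scale="jac")
```

`x_scale="jac"` helps because tau is orders of magnitude smaller than the resistances; a weighted CNLS variant would divide each residual by its estimated standard deviation.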

  5. Excess body weight increases the burden of age-associated chronic diseases and their associated health care expenditures.

    Science.gov (United States)

    Atella, Vincenzo; Kopinska, Joanna; Medea, Gerardo; Belotti, Federico; Tosti, Valeria; Mortari, Andrea Piano; Cricelli, Claudio; Fontana, Luigi

    2015-10-01

    Aging and excessive adiposity are both associated with an increased risk of developing multiple chronic diseases, which drive ever-increasing health costs. The main aim of this study was to determine the net (non-estimated) health costs of excessive adiposity and associated age-related chronic diseases. We used a prevalence-based approach that combines accurate data from the Health Search CSD-LPD, an observational dataset with patient records collected by Italian general practitioners, and up-to-date health care expenditures data from the SiSSI Project. In this very large study, 557,145 men and women older than 18 years were observed at different points in time between 2004 and 2010. The proportion of younger and older adults reporting no chronic disease decreased with increasing BMI. After adjustment for age, sex, geographic residence, and GP heterogeneity, a strong J-shaped association was found between BMI and total health care costs, more pronounced in middle-aged and older adults. Relative to normal weight, in the 45-64 age group, the per-capita total cost was 10% higher in overweight individuals, and 27 to 68% greater in patients with obesity and very severe obesity, respectively. The association between BMI and diabetes, hypertension and cardiovascular disease largely explained these elevated costs.

  6. Linear ion trap imperfection and the compensation of excess micromotion

    Institute of Scientific and Technical Information of China (English)

    Xie Yi; Wan Wei; Zhou Fei; Chen Liang; Li Chao-Hong; Feng Mang

    2012-01-01

    Quantum computing requires ultracold ions in a ground vibrational state, which is achieved by sideband cooling. We report our recent efforts towards the Lamb-Dicke regime, which is a prerequisite of sideband cooling. We first analyse the possible imperfections in our linear ion trap setup and then demonstrate how to suppress them by compensating the excess micromotion of the ions. The ions, after the micromotion compensation, are estimated to be very close to the Doppler-cooling limit.

  7. Phenomenology and psychopathology of excessive indoor tanning.

    Science.gov (United States)

    Petit, Aymeric; Karila, Laurent; Chalmin, Florence; Lejoyeux, Michel

    2014-06-01

    Excessive indoor tanning, defined by the presence of an impulse towards and repetition of tanning that leads to personal distress, has only recently been recognized as a psychiatric disorder. This finding is based on the observations of many dermatologists who report the presence of addictive relationships with tanning salons among their patients despite being given diagnoses of malignant melanoma. This article synthesizes the existing literature on excessive indoor tanning and addiction to investigate possible associations. This review focuses on the prevalence, clinical features, etiology, and treatment of this disorder. A literature review was conducted, using PubMed, Google Scholar, EMBASE and PsycINFO, to identify articles published in English from 1974 to 2013. Excessive indoor tanning may be related to addiction, obsessive-compulsive disorder, impulse control disorder, seasonal affective disorder, anorexia, body dysmorphic disorder, or depression. Excessive indoor tanning can be included in the spectrum of addictive behavior because it has clinical characteristics in common with those of classic addictive disorders. It is frequently associated with anxiety, eating disorders, and tobacco dependence. Further controlled studies are required, especially in clinical psychopathology and neurobiology, to improve our understanding of excessive indoor tanning.

  8. Antidepressant induced excessive yawning and indifference

    Directory of Open Access Journals (Sweden)

    Bruno Palazzo Nazar

    2015-03-01

    Full Text Available Introduction: Antidepressant-induced excessive yawning has been described as a possible side effect of pharmacotherapy. A syndrome of indifference has also been described as another possible side effect. The frequency of these phenomena and their physiopathology are unknown. Both are considered benign and reversible after antidepressant discontinuation, but severe cases with complications such as temporomandibular lesions have been described. Methods: We report two unprecedented cases in which excessive yawning and indifference occurred simultaneously as side effects of antidepressant therapy, and discuss possible physiopathological mechanisms for this co-occurrence. Case 1: A male patient presented excessive yawning (approximately 80/day) and apathy after venlafaxine XR treatment. Symptoms were reduced after a switch to escitalopram, with a reduction to 50 yawns/day. Case 2: A female patient presented excessive yawning (approximately 25/day) and an inability to react to environmental stressors with desvenlafaxine. Conclusion: Induction of indifference and excessive yawning may be modulated by serotonergic and noradrenergic mechanisms. One proposal to unify these side effects would be enhancement of serotonin in the midbrain, especially the paraventricular and raphe nuclei.

  9. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...

  10. Excess deaths during the 2004 heatwave in Brisbane, Australia.

    Science.gov (United States)

    Tong, Shilu; Ren, Cizao; Becker, Niels

    2010-07-01

    The paper examines whether there was an excess of deaths and the relative role of temperature and ozone in a heatwave during 7-26 February 2004 in Brisbane, Australia, a subtropical city accustomed to warm weather. The data on daily counts of deaths from cardiovascular disease and non-external causes, meteorological conditions, and air pollution in Brisbane from 1 January 2001 to 31 October 2004 were supplied by the Australian Bureau of Statistics, Australian Bureau of Meteorology, and Queensland Environmental Protection Agency, respectively. The relationship between temperature and mortality was analysed using a Poisson time series regression model with smoothing splines to control for nonlinear effects of confounding factors. The highest temperature recorded in the 2004 heatwave was 42 degrees C compared with the highest recorded temperature of 34 degrees C during the same periods of 2001-2003. There was a significant relationship between exposure to heat and excess deaths in the 2004 heatwave (estimated increase in non-external deaths: 75 [95% CI: 11-138]; cardiovascular deaths: 41 [95% CI: -2 to 84]). There was no apparent evidence of substantial short-term mortality displacement. The excess deaths were mainly attributed to temperature but exposure to ozone also contributed to these deaths.
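
The basic excess-death calculation behind estimates like these compares deaths during the heatwave window with a baseline built from the same calendar dates in reference years. A simplified sketch with hypothetical daily counts (the actual study used Poisson regression with smoothing splines, which this does not reproduce):

```python
# Hypothetical daily death counts for a 20-day heatwave window
# (7-26 February of the heatwave year):
heatwave_year = [38, 41, 40, 44, 46, 45, 43, 47, 50, 48,
                 44, 42, 46, 49, 45, 43, 41, 44, 47, 45]

# Hypothetical mean daily counts for the same dates in the
# three preceding (reference) years:
baseline = [40.1, 40.5, 39.8, 40.9, 41.2, 40.7, 40.3, 41.0, 40.6, 40.8,
            40.2, 40.4, 41.1, 40.9, 40.5, 40.3, 40.0, 40.6, 40.8, 40.4]

# Excess deaths = sum of daily (observed - expected) over the window.
excess = sum(o - e for o, e in zip(heatwave_year, baseline))
```

A regression-based expected series, as in the study, would additionally adjust each day's baseline for season, trend, and confounders before taking the same observed-minus-expected sum.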

  11. Same-Sign Dilepton Excesses and Vector-like Quarks

    CERN Document Server

    Chen, Chuan-Ren; Low, Ian

    2015-01-01

    Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b-prime quark is proposed. We also comment on the possibility to fit excesses in different analyses in a common framework.

  12. Minimal Dilaton Model and the Diphoton Excess

    CERN Document Server

    Agarwal, Bakul; Mohan, Kirtimaan A

    2016-01-01

    In light of the recent 750 GeV diphoton excesses reported by the ATLAS and CMS collaborations, we investigate the possibility of explaining this excess using the Minimal Dilaton Model. We find that this model is able to explain the observed excess with the presence of additional top partner(s), with the same charge as the top quark, but with mass in the TeV region. First, we constrain model parameters using the 750 GeV diphoton signal strength together with precision electroweak tests, single top production measurements, and Higgs signal strength data collected in the earlier runs of the LHC. In addition, we discuss interesting phenomenology that could arise in this model, relevant for future runs of the LHC.

  13. Singlet Scalar Resonances and the Diphoton Excess

    CERN Document Server

    McDermott, Samuel D; Ramani, Harikrishnan

    2015-01-01

    ATLAS and CMS recently released the first results of searches for diphoton resonances in 13 TeV data, revealing a modest excess at an invariant mass of approximately 750 GeV. We find that it is generically possible that a singlet scalar resonance is the origin of the excess while avoiding all other constraints. We highlight some of the implications of this model and how compatible it is with certain features of the experimental results. In particular, we find that the very large total width of the excess is difficult to explain with loop-level decays alone, pointing to other interesting bounds and signals if this feature of the data persists. Finally we comment on the robust Z-gamma signature that will always accompany the model we investigate.

  14. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  15. Software Cost Estimation Review

    OpenAIRE

    Ongere, Alphonce

    2013-01-01

    Software cost estimation is the process of predicting the effort, the time and the cost required to complete a software project successfully. It involves size measurement of the software project to be produced, estimating and allocating the effort, drawing the project schedules, and finally, estimating the overall cost of the project. Accurate estimation of software project cost is an important factor for business and the welfare of the software organization in general. If cost and effort estimat...

  16. New Galaxies with UV Excess. VI

    Science.gov (United States)

    Kazarian, M. A.; Petrosian, G. V.

    2005-07-01

    A list is presented of 122 new galaxies with UV excess observed on plates obtained using the 40″ Schmidt telescope at the Byurakan Observatory with a 1°.5 objective prism. It is shown that the relative number of galaxies with a strong UV excess (classes 1 and 2) listed in Table 1 is roughly 55.7%. This is 6.7% higher than for the previously observed galaxies. These samples also differ in terms of the morphology of the spectra. The largest deviation, approximately 9.9%, occurs for type “sd.”

  17. Photoreceptor damage following exposure to excess riboflavin.

    Science.gov (United States)

    Eckhert, C D; Hsu, M H; Pang, N

    1993-12-15

    Flavins generate oxidants during metabolism and when exposed to light. Here we report that the photoreceptor layer of retinas from black-eyed rats is reduced in size by a dietary regime containing excess riboflavin. The effect of excess riboflavin was dose-dependent and was manifested by a decrease in photoreceptor length. This decrease was due in part to a reduction in the thickness of the outer nuclear layer, a structure formed from stacked photoreceptor nuclei. These changes were accompanied by an increase in photoreceptor outer segment autofluorescence following illumination at 328 nm, a wavelength that corresponds to the excitation maxima of oxidized lipopigments of the retinal pigment epithelium.

  18. Low excess air operations of oil boilers

    Energy Technology Data Exchange (ETDEWEB)

    Butcher, T.A.; Celebi, Y.; Litzke, Wai Lin [Brookhaven National Labs., Upton, NY (United States)

    1997-09-01

    To quantify the benefits which operation at very low excess air operation may have on heat exchanger fouling BNL has recently started a test project. The test allows simultaneous measurement of fouling rate, flue gas filterable soot, flue gas sulfuric acid content, and flue gas sulfur dioxide.

  19. Surface temperature excess in heterogeneous catalysis

    NARCIS (Netherlands)

    Zhu, L.

    2005-01-01

    In this dissertation we study the surface temperature excess in heterogeneous catalysis. For heterogeneous reactions, such as gas-solid catalytic reactions, the reactions take place at the interfaces between the two phases: the gas and the solid catalyst. Large amounts of reaction heat are released

  20. Excessive Positivism in Person-Centered Planning

    Science.gov (United States)

    Holburn, Steve; Cea, Christine D.

    2007-01-01

    This paper illustrates the positivistic nature of person-centered planning (PCP) that is evident in the planning methods employed, the way that individuals with disabilities are described, and in portrayal of the outcomes of PCP. However, a confluence of factors can lead to manifestation of excessive positivism that does not serve PCP…

  1. Can Excess Bilirubin Levels Cause Learning Difficulties?

    Science.gov (United States)

    Pretorius, E.; Naude, H.; Becker, P. J.

    2002-01-01

    Examined learning problems in South African sample of 7- to 14-year-olds whose mothers reported excessively high infant bilirubin shortly after the child's birth. Found that this sample had lowered verbal ability with the majority also showing impaired short-term and long-term memory. Findings suggested that impaired formation of astrocytes…

  2. 30 CFR 56.6902 - Excessive temperatures.

    Science.gov (United States)

    2010-07-01

    ... Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Requirements § 56.6902 Excessive temperatures. (a) Where heat could cause premature detonation, explosive... the initiation of the blast to no more than 12 hours; and (3) Take other special precautions to...

  3. 30 CFR 57.6902 - Excessive temperatures.

    Science.gov (United States)

    2010-07-01

    ... Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Requirements-Surface and Underground § 57.6902 Excessive temperatures. (a) Where heat could cause premature... loading and the initiation of the blast to no more than 12 hours; and (3) Take other special precautions...

  4. Excessive infant crying : definitions determine risk groups

    NARCIS (Netherlands)

    Reijneveld, SA; Brugman, E; Hirasing, RA

    2002-01-01

    We assessed risk groups for excessive infant crying using 10 published definitions, in 3179 children aged 1-6 months (response: 96.5%). Risk groups regarding parental employment, living area, lifestyle, and obstetric history varied by definition. This may explain the existence of conflicting evidence

  5. [Conservative and surgical treatment of convergence excess].

    Science.gov (United States)

    Ehrt, O

    2016-07-01

    Convergence excess is a common finding especially in pediatric strabismus. A detailed diagnostic approach has to start after full correction of any hyperopia measured in cycloplegia. It includes measurements of manifest and latent deviation at near and distance fixation, near deviation after relaxation of accommodation with addition of +3 dpt, assessment of binocular function with and without +3 dpt as well as the accommodation range. This diagnostic approach is important for the classification into three types of convergence excess, which require different therapeutic approaches: 1) hypo-accommodative convergence excess is treated with permanent bifocal glasses, 2) norm-accommodative patients should be treated with bifocals which can be weaned over years, especially in patients with good stereopsis and 3) non-accommodative convergence excess and patients with large distance deviations need a surgical approach. The most effective operations include those which reduce the muscle torque, e. g. bimedial Faden operations or Y‑splitting of the medial rectus muscles.

  6. 7 CFR 955.44 - Excess funds.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess funds. 955.44 Section 955.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... demands payment thereof, in which event such proportionate refund shall be paid. ...

  7. 7 CFR 945.44 - Excess funds.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess funds. 945.44 Section 945.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... following fiscal period, except that if the handler demands payment, such proportionate refund shall be paid...

  9. ORIGIN OF EXCESS (176)Hf IN METEORITES

    DEFF Research Database (Denmark)

    Thrane, Kristine; Connelly, James Norman; Bizzarro, Martin

    2010-01-01

    After considerable controversy regarding the (176)Lu decay constant (lambda(176)Lu), there is now widespread agreement that (1.867 +/- 0.008) x 10(-11) yr(-1) as confirmed by various terrestrial objects and a 4557 Myr meteorite is correct. This leaves the (176)Hf excesses that are correlated with...

  10. Excessive nitrogen and phosphorus in European rivers

    NARCIS (Netherlands)

    Blaas, Harry; Kroeze, Carolien

    2016-01-01

    Rivers export nutrients to coastal waters. Excess nutrient export may result in harmful algal blooms and hypoxia, affecting biodiversity, fisheries, and recreation. The purpose of this study is to quantify for European rivers (1) the extent to which N and P loads exceed levels that minimize the r

  11. The NANOGrav Nine-Year Data Set: Excess Noise in Millisecond Pulsar Arrival Times

    CERN Document Server

    Lam, M T; Chatterjee, S; Arzoumanian, Z; Crowter, K; Demorest, P B; Dolch, T; Ellis, J A; Ferdman, R D; Fonseca, E; Gonzalez, M E; Jones, G; Jones, M L; Levin, L; Madison, D R; McLaughlin, M A; Nice, D J; Pennucci, T T; Ransom, S M; Shannon, R M; Siemens, X; Stairs, I H; Stovall, K; Swiggum, J K; Zhu, W W

    2016-01-01

    Gravitational wave astronomy using a pulsar timing array requires high-quality millisecond pulsars, correctable interstellar propagation delays, and high-precision measurements of pulse times of arrival. Here we identify noise in timing residuals that exceeds that predicted for arrival time estimation for millisecond pulsars observed by the North American Nanohertz Observatory for Gravitational Waves. We characterize the excess noise using variance and structure function analyses. We find that 26 out of 37 pulsars show inconsistencies with a white-noise-only model based on the short timescale analysis of each pulsar and we demonstrate that the excess noise has a red power spectrum for 15 pulsars. We also decompose the excess noise into chromatic (radio-frequency-dependent) and achromatic components. Associating the achromatic red-noise component with spin noise and including additional power-spectrum-based estimates from the literature, we estimate a scaling law in terms of spin parameters (frequency and freq...
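The variance and structure-function analyses mentioned above can be illustrated with a toy residual series. This is a generic sketch, not the NANOGrav pipeline: the residuals are simulated white noise, and the lags are arbitrary.

```python
import numpy as np

# First-order structure function of timing residuals:
#   D(tau) = <(r(t + tau) - r(t))^2>.
# For white residuals of variance sigma^2, D(tau) ~ 2*sigma^2 at all lags;
# excess red noise instead makes D(tau) grow with tau.
rng = np.random.default_rng(1)
resid = rng.normal(0.0, 1.0, 2000)        # hypothetical white residuals

def structure_function(r, lag):
    return np.mean((r[lag:] - r[:-lag]) ** 2)

d_values = {lag: structure_function(resid, lag) for lag in (1, 10, 100)}
print(d_values)                           # each value should be near 2.0
```

Comparing the measured D(tau) curve against the flat white-noise expectation is one way a pulsar can "show inconsistencies with a white-noise-only model."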

  12. 38 CFR 4.46 - Accurate measurement.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  13. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Mean Infiltration-Excess Overland Flow, 2002

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean value for infiltration-excess overland flow as estimated by the watershed model TOPMODEL, compiled for every catchment of...

  14. Does a pneumotach accurately characterize voice function?

    Science.gov (United States)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. Acknowledge support of NIH Grant 2R01DC005642-10A1.

  15. Fast and Provably Accurate Bilateral Filtering.

    Science.gov (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
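A direct implementation of the bilateral filter (the O(S)-per-pixel baseline that fast approximations like the one above target) makes the edge-preserving behavior concrete. The parameter values and test image below are illustrative.

```python
import numpy as np

def bilateral_direct(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct bilateral filter: each output pixel is a weighted average over
    a (2*radius+1)^2 window, with weight = spatial Gaussian * range Gaussian."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_k = np.exp(-(win - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            weights = spatial * range_k
            out[i, j] = (weights * win).sum() / weights.sum()
    return out

# A step edge of height 1 >> sigma_r gets near-zero range weight across the
# edge, so the filter smooths each side but leaves the edge itself intact.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
smoothed = bilateral_direct(img)
```

With `sigma_r=0.1` the range kernel across the unit step is exp(-50), so `smoothed` is essentially identical to `img`: the edge is preserved, which a plain Gaussian blur would not achieve.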

  16. Erasing errors due to alignment ambiguity when estimating positive selection.

    Science.gov (United States)

    Redelings, Benjamin

    2014-08-01

    Current estimates of diversifying positive selection rely on first having an accurate multiple sequence alignment. Simulation studies have shown that under biologically plausible conditions, relying on a single estimate of the alignment from commonly used alignment software can lead to unacceptably high false-positive rates in detecting diversifying positive selection. We present a novel statistical method that eliminates excess false positives resulting from alignment error by jointly estimating the degree of positive selection and the alignment under an evolutionary model. Our model treats both substitutions and insertions/deletions as sequence changes on a tree and allows site heterogeneity in the substitution process. We conduct inference starting from unaligned sequence data by integrating over all alignments. This approach naturally accounts for ambiguous alignments without requiring ambiguously aligned sites to be identified and removed prior to analysis. We take a Bayesian approach and conduct inference using Markov chain Monte Carlo to integrate over all alignments on a fixed evolutionary tree topology. We introduce a Bayesian version of the branch-site test and assess the evidence for positive selection using Bayes factors. We compare two models of differing dimensionality using a simple alternative to reversible-jump methods. We also describe a more accurate method of estimating the Bayes factor using Rao-Blackwellization. We then show using simulated data that jointly estimating the alignment and the presence of positive selection solves the problem with excessive false positives from erroneous alignments and has nearly the same power to detect positive selection as when the true alignment is known. We also show that samples taken from the posterior alignment distribution using the software BAli-Phy have substantially lower alignment error compared with MUSCLE, MAFFT, PRANK, and FSA alignments.

  17. ILLUSION OF EXCESSIVE CONSUMPTION AND ITS EFFECTS

    Directory of Open Access Journals (Sweden)

    MUNGIU-PUPĂZAN MARIANA CLAUDIA

    2015-12-01

    The aim is to explore, explain, and describe this phenomenon for a better understanding of it, as well as the relationship between advertising and the members of the consumer society. This paper presents an analysis of excessive and unsustainable consumption, the evolution of the phenomenon, and ways to combat it. Unfortunately, studies show that the tendency to accumulate more than we need and to consume to excess is one that almost all civilizations have defined and placed dogmatically among the values that children learn early in life. This has been perpetuated since the time when goods and products were not obtained as easily as today. Anti-consumerism has emerged in response to this economic system, though it has not endured in the long term. Over the last two decades we have witnessed the establishment of a new phase of consumer capitalism: the hyperconsumption society.

  18. Excess mortality in giant cell arteritis

    DEFF Research Database (Denmark)

    Bisgård, C; Sloth, H; Keiding, Niels

    1991-01-01

    A 13-year departmental sample of 34 patients with definite (biopsy-verified) giant cell arteritis (GCA) was reviewed. The mortality of this material was compared to sex-, age- and time-specific death rates in the Danish population. The standardized mortality ratio (SMR) was 1.8 (95% confidence… …with respect to SMR, sex distribution or age. In the group of patients with department-diagnosed GCA (definite + probable = 180 patients), the 95% confidence interval for the SMR of the women included 1.0. In all other subgroups there was a significant excess mortality. Excess mortality has been found in two … of seven previous studies on survival in GCA. The prevailing opinion that steroid-treated GCA does not affect the life expectancy of patients is probably not correct…
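The standardized mortality ratio used above has a simple structure: observed deaths in the cohort divided by the deaths expected from sex-, age- and period-specific population rates. The sketch below uses invented numbers, not the study's data, and a simple log-based Poisson interval rather than the exact methods often used (e.g. Byar's approximation).

```python
import math

# Observed deaths in the cohort (illustrative).
observed = 18

# Expected deaths = sum over strata of (person-years * population death rate).
strata = [(120.0, 0.03), (80.0, 0.06), (40.0, 0.10)]   # (person-years, rate)
expected = sum(py * rate for py, rate in strata)

smr = observed / expected

# Approximate 95% CI treating the observed count as Poisson:
# SE of log(SMR) ~ 1/sqrt(observed).
se_log = 1 / math.sqrt(observed)
lo = smr * math.exp(-1.96 * se_log)
hi = smr * math.exp(1.96 * se_log)
print(round(smr, 2), round(lo, 2), round(hi, 2))
```

A subgroup whose interval includes 1.0 (as for the department-diagnosed women above) shows no statistically significant excess mortality; an interval entirely above 1.0 does.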

  19. Quark Seesaw Vectorlike Fermions and Diphoton Excess

    CERN Document Server

    Dev, P S Bhupal; Zhang, Yongchao

    2015-01-01

    We present a possible interpretation of the recent diphoton excess reported by the $\\sqrt s=13$ TeV LHC data in quark seesaw left-right models with vectorlike fermions proposed to solve the strong $CP$ problem without the axion. The gauge singlet real scalar field responsible for the mass of the vectorlike fermions has the right production cross section and diphoton branching ratio to be identifiable with the reported excess at around 750 GeV diphoton invariant mass. Various ways to test this hypothesis as more data accumulates at the LHC are proposed. In particular, we find that for our interpretation to work, there is an upper limit on the right-handed scale $v_R$, which depends on the Yukawa coupling of singlet Higgs field to the vectorlike fermions.

  20. Relationship Between Thermal Tides and Radius Excess

    CERN Document Server

    Socrates, Aristotle

    2013-01-01

    Close-in extrasolar gas giants -- the hot Jupiters -- display departures in radius above the zero-temperature solution, the radius excess, that are anomalously high. The radius excess of hot Jupiters follows a relatively close relation with thermal tidal torques and holds for ~ 4-5 orders of magnitude in a characteristic thermal tidal power in such a way that is consistent with basic theoretical expectations. The relation suggests that thermal tidal torques determine the global thermodynamic and spin state of the hot Jupiters. On empirical grounds, it is shown that theories of hot Jupiter inflation that invoke a constant fraction of the stellar flux to be deposited at great depth are, essentially, falsified.

  1. Excessive Profits of German Defense Contractors

    Science.gov (United States)

    2014-09-01

    revealed similar patterns in both countries. The statistical evidence for excessive profitability is stronger for the measurements return on assets ...types of tangible and intangible things that have an economic value, and there are current assets that can be transformed into cash within a short period...The Rate of Return on Assets (ROA) measures a firm’s performance in using assets to generate net income” (Stickney et al., 2010, p. 245). In

  2. Armodafinil in the treatment of excessive sleepiness

    OpenAIRE

    Rosenberg, Russell

    2010-01-01

    Russell Rosenberg (NeuroTrials Research, Atlanta, Georgia, USA) and Richard Bogan (SleepMed of South Carolina, Columbia, South Carolina, USA). Abstract: Excessive sleepiness (ES) is a widespread condition, commonly the result of a sleep/wake disorder such as obstructive sleep apnea (OSA), shift-work disorder (SWD), or narcolepsy. ES poses significant health and safety concerns in patients. Numerous interventions are available to treat the underlying causes of ES and ES itself, including behavioral...

  3. Armodafinil in the treatment of excessive sleepiness

    OpenAIRE

    Rosenberg, Russell; Bogan, Richard

    2010-01-01

    Excessive sleepiness (ES) is a widespread condition, commonly the result of a sleep/wake disorder such as obstructive sleep apnea (OSA), shift-work disorder (SWD), or narcolepsy. ES poses significant health and safety concerns in patients. Numerous interventions are available to treat the underlying causes of ES and ES itself, including behavioral measures, mechanical devices, and pharmacologic agents. This review explores the evidence supporting the use of armodafinil to treat ES associated...

  4. Contrast induced hyperthyroidism due to iodine excess

    OpenAIRE

    Mushtaq, Usman; Price, Timothy; Laddipeerla, Narsing; Townsend, Amanda; Broadbridge, Vy

    2009-01-01

    Iodine induced hyperthyroidism is a thyrotoxic condition caused by exposure to excessive iodine. Historically this type of hyperthyroidism has been described in areas of iodine deficiency. With advances in medicine, iodine induced hyperthyroidism has been observed following the use of drugs containing iodine—for example, amiodarone, and contrast agents used in radiological imaging. In elderly patients it is frequently difficult to diagnose and control contrast related hyperthyroidism, as most...

  5. Identifying excessive credit growth and leverage

    OpenAIRE

    Alessi, Lucia; Detken, Carsten

    2014-01-01

    This paper aims at providing policymakers with a set of early warning indicators helpful in guiding decisions on when to activate macroprudential tools targeting excessive credit growth and leverage. To robustly select the key indicators we apply the “Random Forest” method, which bootstraps and aggregates a multitude of decision trees. On these identified key indicators we grow a binary classification tree which derives the associated optimal early warning thresholds. By using credit to GDP g...

  6. Phospholipids as Biomarkers for Excessive Alcohol Use

    Science.gov (United States)

    2017-03-01

    Sci Rep 2014;4:3725. Summary of the overall projects during the funding period: 1) Provide supporting data regarding the primary goal of this project...the sphingomyelin and lysophosphatidylcholine as markers): The support from the DOD has allowed us to further expand what we previously proposed...in the grant application to study the whole metabolomics profiles in human serum with excessive alcohol use. Further, we also expand our study

  7. Are Predictive Energy Expenditure Equations in Ventilated Surgery Patients Accurate?

    Science.gov (United States)

    Tignanelli, Christopher J; Andrews, Allan G; Sieloff, Kurt M; Pleva, Melissa R; Reichert, Heidi A; Wooley, Jennifer A; Napolitano, Lena M; Cherry-Bukowiec, Jill R

    2017-01-01

    While indirect calorimetry (IC) is the gold standard used to calculate specific calorie needs in the critically ill, predictive equations are frequently utilized at many institutions for various reasons. Prior studies suggest these equations frequently misjudge actual resting energy expenditure (REE) in medical and mixed intensive care unit (ICU) patients; however, their utility for surgical ICU (SICU) patients has not been fully evaluated. Therefore, the objective of this study was to compare the REE measured by IC with REE calculated using specific calorie goals or predictive equations for nutritional support in ventilated adult SICU patients. A retrospective review of prospectively collected data was performed on all adults (n = 419, 18-91 years) mechanically ventilated for >24 hours, with an Fio2 ≤ 60%, who met IC screening criteria. Caloric needs were estimated using the Harris-Benedict equations (HBEs), and 20, 25, and 30 kcal/kg/d with actual (ABW), adjusted (ADJ), and ideal (IBW) body weights. The REE was measured using IC. The estimated REE was considered accurate when within ±10% of the measured REE by IC. The HBE, 20, 25, and 30 kcal/kg/d estimates of REE were found to be inaccurate regardless of age, gender, or weight. The HBE and 20 kcal/kg/d underestimated REE, while 25 and 30 kcal/kg/d overestimated REE. Of the methods studied, those found to most often accurately estimate REE were the HBE using ABW, which was accurate 35% of the time, and 25 kcal/kg/d ADJ, which was accurate 34% of the time. This difference was not statistically significant. Using the HBE, 20, 25, or 30 kcal/kg/d to estimate daily caloric requirements in critically ill surgical patients is inaccurate compared to REE measured by IC. In SICU patients with nutrition requirements essential to recovery, IC measurement should be performed to guide clinicians in determining goal caloric requirements.
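The predictive estimates compared in the study are simple formulas. Below, the classic Harris-Benedict equations and a kcal/kg/d rule are checked against the study's ±10% accuracy criterion; the patient values and the IC measurement are invented for illustration.

```python
def harris_benedict(sex, weight_kg, height_cm, age_yr):
    """Original Harris-Benedict equations (kcal/day)."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def within_10pct(estimate, measured):
    # The study counted an estimate as "accurate" when within +/-10% of IC.
    return abs(estimate - measured) <= 0.10 * measured

measured_ree = 1900.0                     # hypothetical IC measurement
hbe = harris_benedict("male", 80.0, 178.0, 55.0)
per_kg_25 = 25 * 80.0                     # 25 kcal/kg/d on actual body weight

print(hbe, within_10pct(hbe, measured_ree))
print(per_kg_25, within_10pct(per_kg_25, measured_ree))
```

For this hypothetical patient the HBE gives about 1685 kcal/d (outside the ±10% band, i.e. an underestimate, matching the study's finding for HBE), while 25 kcal/kg/d gives 2000 kcal/d (inside the band).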

  8. [Adaptation of thyroid function to excess iodine].

    Science.gov (United States)

    Aurengo, Andre; Leenhardt, Laurence; Aurengo, Helyett

    2002-10-26

    NORMALLY: The production of thyroid hormones is normally stable, despite iodine supplies that may vary widely, and even with a sudden excess of iodine. The metabolism of iodine is characterised by adapted thyroid uptake, the requirements varying with the age and physiological status of the individual (pregnancy, breastfeeding), and by insufficient supplies in several areas of France. IN THE CASE OF EXCESS: The mechanisms that permit the thyroid to adapt to a sudden or chronic excess of iodine are immature in the newborn and sometimes deficient in adults, and may lead to iodine-induced dysthyroidism. Thanks to the recent progress made in thyroid physiology, these mechanisms are now better known. PATHOLOGICAL IMPACT: Iodine-induced hyperthyroidisms in a healthy or pathological thyroid are frequent. They are predominantly related to amiodarone. Iodine-related hypothyroidism frequently appears in cases of pre-existing thyroid disease (asymptomatic autoimmune thyroiditis, for example). It is frequent in the newborn, notably in the premature. The iodine prophylaxis organised in Poland following the Chernobyl accident led to very few pathological consequences in adults or children.

  9. Earnings Quality Measures and Excess Returns.

    Science.gov (United States)

    Perotti, Pietro; Wagenhofer, Alfred

    2014-06-01

    This paper examines how commonly used earnings quality measures fulfill a key objective of financial reporting, i.e., improving decision usefulness for investors. We propose a stock-price-based measure for assessing the quality of earnings quality measures. We predict that firms with higher earnings quality will be less mispriced than other firms. Mispricing is measured by the difference of the mean absolute excess returns of portfolios formed on high and low values of a measure. We examine persistence, predictability, two measures of smoothness, abnormal accruals, accruals quality, earnings response coefficient and value relevance. For a large sample of US non-financial firms over the period 1988-2007, we show that all measures except for smoothness are negatively associated with absolute excess returns, suggesting that smoothness is generally a favorable attribute of earnings. Accruals measures generate the largest spread in absolute excess returns, followed by smoothness and market-based measures. These results lend support to the widespread use of accruals measures as overall measures of earnings quality in the literature.

  10. The entropy excess and moment of inertia excess ratio with inclusion of statistical pairing fluctuations

    Science.gov (United States)

    Razavi, R.; Dehghani, V.

    2014-03-01

    The entropy excess of 163Dy compared to 162Dy as a function of nuclear temperature has been investigated using the mean-value Bardeen-Cooper-Schrieffer (BCS) method, based on application of the isothermal probability distribution function to take statistical fluctuations into account. Then, the spin cut-off excess ratio (moment of inertia excess ratio) introduced by Razavi [Phys. Rev. C88 (2013) 014316] for the proton and neutron systems has been obtained and compared with the corresponding BCS-model results. The results show that the overall agreement between the BCS model and the mean-value BCS method is satisfactory, and that the mean-value BCS method reduces fluctuations and washes out singularities. However, the expected constant value of the entropy excess is not reproduced by the mean-value BCS method.

  11. Convergence Insufficiency/Divergence Insufficiency Convergence Excess/Divergence Excess: Some Facts and Fictions

    OpenAIRE

    Edward Khawam; Bachir Abiad; Alaa Boughannam; Joanna Saade; Ramzi Alameddine

    2015-01-01

    Great discrepancies are often encountered between the distance fixation and the near-fixation esodeviations and exodeviations. They are all attributed to either anomalies of the AC/A ratio or anomalies of the fusional convergence or divergence amplitudes. We report a case with pseudoconvergence insufficiency and another one with pseudoaccommodative convergence excess. In both cases, conv./div. excess and insufficiency were erroneously attributed to anomalies of the AC/A ratio or to anomalies ...

  12. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    Directory of Open Access Journals (Sweden)

    Zhiquan Gao

    2015-09-01

    Full Text Available Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.

  13. Partitioning of excess mortality in population-based cancer patient survival studies using flexible parametric survival models

    Directory of Open Access Journals (Sweden)

    Eloranta Sandra

    2012-06-01

    Full Text Available Abstract Background Relative survival is commonly used for studying survival of cancer patients as it captures both the direct and indirect contribution of a cancer diagnosis on mortality by comparing the observed survival of the patients to the expected survival in a comparable cancer-free population. However, existing methods do not allow estimation of the impact of isolated conditions (e.g., excess cardiovascular mortality) on the total excess mortality. For this purpose we extend flexible parametric survival models for relative survival, which use restricted cubic splines for the baseline cumulative excess hazard and for any time-dependent effects. Methods In the extended model we partition the excess mortality associated with a diagnosis of cancer through estimating a separate baseline excess hazard function for the outcomes under investigation. This is done by incorporating mutually exclusive background mortality rates, stratified by the underlying causes of death reported in the Swedish population, and by introducing cause of death as a time-dependent effect in the extended model. This approach thereby enables modeling of temporal trends in, e.g., excess cardiovascular mortality and remaining cancer excess mortality simultaneously. Furthermore, we illustrate how the results from the proposed model can be used to derive crude probabilities of death due to the component parts, i.e., probabilities estimated in the presence of competing causes of death. Results The method is illustrated with examples where the total excess mortality experienced by patients diagnosed with breast cancer is partitioned into excess cardiovascular mortality and remaining cancer excess mortality. Conclusions The proposed method can be used to simultaneously study disease patterns and temporal trends for various causes of cancer-consequent deaths. Such information should be of interest for patients and clinicians as one way of improving prognosis after cancer is
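    The partitioning described above rests on the standard excess-hazard identity of relative survival: the observed hazard is the expected (population) hazard plus an excess hazard attributable to the diagnosis, so relative survival equals the survival implied by the excess hazard alone. A toy constant-hazard illustration (all rates below are hypothetical, not from the Swedish data):

    ```python
    import math

    # Excess-hazard decomposition: h_obs(t) = h_exp(t) + h_excess(t),
    # so relative survival R(t) = S_obs(t) / S_exp(t).
    expected_rate = 0.010   # background (population) mortality per year
    excess_rate = 0.030     # cancer-consequent excess mortality per year
    t = 5.0                 # years since diagnosis

    s_exp = math.exp(-expected_rate * t)
    s_obs = math.exp(-(expected_rate + excess_rate) * t)
    relative_survival = s_obs / s_exp   # equals exp(-excess_rate * t)
    ```

    With constant hazards the expected component cancels exactly; the flexible parametric models above generalize this by letting the (partitioned) excess hazards vary smoothly with time via splines.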

  14. Should Excessive Worry Be Required for a Diagnosis of Generalized Anxiety Disorder? Results from the US National Comorbidity Survey Replication

    Science.gov (United States)

    Ruscio, Ayelet Meron; Lane, Michael; Roy-Byrne, Peter; Stang, Paul E.; Stein, Dan J.; Wittchen, Hans-Ulrich; Kessler, Ronald C.

    2007-01-01

    Background Excessive worry is required by DSM-IV, but not ICD-10, for a diagnosis of generalized anxiety disorder (GAD). No large-scale epidemiological study has ever examined the implications of this requirement for estimates of prevalence, severity, or correlates of GAD. Methods Data were analyzed from the US National Comorbidity Survey Replication, a nationally representative, face-to-face survey of adults in the US household population that was fielded in 2001–2003. DSM-IV GAD was assessed with Version 3.0 of the WHO Composite International Diagnostic Interview. Non-excessive worriers meeting all other DSM-IV criteria for GAD were compared with respondents who met full GAD criteria as well as with other survey respondents to consider the implications of removing the excessiveness requirement. Results The estimated lifetime prevalence of GAD increases by approximately 40% when the excessiveness requirement is removed. Excessive GAD begins earlier in life, has a more chronic course, and is associated with greater symptom severity and psychiatric comorbidity than non-excessive GAD. However, non-excessive cases nonetheless evidence substantial persistence and impairment of GAD as well as significantly elevated comorbidity compared to respondents without GAD. Non-excessive cases also have socio-demographic characteristics and familial aggregation of GAD comparable to excessive cases. Conclusions Although individuals who meet all criteria for GAD other than excessiveness have a somewhat milder presentation than those with excessive worry, their syndromes are sufficiently similar to those with excessive worry to warrant a GAD diagnosis. PMID:16300690

  15. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Recently, strain and strain rate imaging have proved superior to classical motion estimation methods for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained using a combination of the center of mass and endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique to handle different motions and orientations. Evaluation performed on simulated, phantom (a contractile rubber balloon) and real sequences shows that this technique is more accurate and faster than previous methods.

  16. Excess mid-IR emission in Cataclysmic Variables

    CERN Document Server

    Dubus, G; Kern, B; Taam, R E; Spruit, H C

    2004-01-01

    We present a search for excess mid-IR emission due to circumbinary material in the orbital plane of cataclysmic variables (CVs). Our motivation stems from the fact that the strong braking exerted by a circumbinary (CB) disc on the binary system could explain several puzzles in our current understanding of CV evolution. Since theoretical estimates predict that the emission from a CB disc can dominate the spectral energy distribution (SED) of the system at wavelengths > 5 microns, we obtained simultaneous visible to mid-IR SEDs for eight systems. We report detections of SS Cyg at 11.7 microns and AE Aqr at 17.6 microns, both in excess of the contribution from the secondary star. In AE Aqr, the IR likely originates from synchrotron-emitting clouds propelled by the white dwarf. In SS Cyg, we argue that the observed mid-IR variability is difficult to reconcile with simple models of CB discs and we consider free-free emission from a wind. In the other systems, our mid-IR upper limits place strong constraints on the...

  17. Excess mortality among the unmarried: a case study of Japan.

    Science.gov (United States)

    Goldman, N; Hu, Y

    1993-02-01

    Recent research has demonstrated that mortality patterns by marital status in Japan are different from corresponding patterns in other industrialized countries. Most notably, the magnitude of the excess mortality experienced by single Japanese has been staggering. For example, estimates of life expectancy for the mid-1900s indicate that single Japanese men and women had life expectancies between 15 and 20 years lower than their married counterparts. In addition, gender differences among single Japanese have been smaller than elsewhere, while those among divorced persons have been unanticipatedly large; and, the excess mortality of the Japanese single population has been decreasing over the past few decades in contrast to generally increasing differentials elsewhere. In this paper, we use a variety of data sources to explore several explanations for these unique mortality patterns in Japan. Undeniably, the traditional Japanese system of arranged marriages makes the process of selecting a spouse a significant factor. Evidence from anthropological studies and attitudinal surveys indicates that marriage is likely to have been and probably continues to be more selective with regard to underlying health characteristics in Japan than in other industrialized countries. However, causal explanations related to the importance of marriage and the family in Japanese society may also be responsible for the relatively high mortality experienced by singles and by divorced men.

  18. Changing guards: time to move beyond Body Mass Index for population monitoring of excess adiposity

    OpenAIRE

    Tanamas, Stephanie K.; Lean, Michael E. J.; Combet, Emilie; Vlassopoulos, Antonios; Zimmet, Paul Z.; Peeters, Anna

    2016-01-01

    With the obesity epidemic, and the effects of aging populations, human phenotypes have changed over two generations, possibly more dramatically than in other species previously. As obesity is an important and growing hazard for population health, we recommend a systematic evaluation of the optimal measure(s) for population-level excess body fat. Ideal measure(s) for monitoring body composition and obesity should be simple, as accurate and sensitive as possible, and provide good categorisation...

  19. CAIXA: a catalogue of AGN in the XMM-Newton archive III. Excess Variance Analysis

    CERN Document Server

    Ponti, Gabriele; Bianchi, Stefano; Guainazzi, Matteo; Matt, Giorgio; Uttley, Phil; Fonseca Bonilla, Nuria

    2011-01-01

    We report on the results of the first XMM systematic "excess variance" study of all the radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM for more than 10 ks in pointed observations, the largest sample used so far to study AGN X-ray variability on time scales shorter than a day. We compute the excess variance for all AGN, on different time-scales (10, 20, 40 and 80 ks) and in different energy bands (0.3-0.7, 0.7-2 and 2-10 keV). We observe a highly significant and tight (~0.7 dex) correlation between excess variance and MBH. The subsample of reverberation-mapped AGN shows an even smaller scatter (~0.45 dex), comparable to the one induced by the MBH uncertainties. This implies that X-ray variability can be used as an accurate tool to measure MBH, and that this method is more accurate than the ones based on single-epoch optical spectra. The excess variance vs. accretion rate dependence is weaker than expected based on the PSD break frequency scaling, suggesting that both...
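    The "excess variance" statistic used in such studies has a standard definition: the light-curve variance in excess of the measurement noise, normalized by the squared mean count rate. A minimal sketch under that definition (the light curve and errors below are hypothetical, not CAIXA data):

    ```python
    import numpy as np

    def normalized_excess_variance(flux, flux_err):
        """Sample variance of a light curve minus the mean squared
        measurement error, in units of the squared mean flux."""
        flux = np.asarray(flux, dtype=float)
        flux_err = np.asarray(flux_err, dtype=float)
        mu = flux.mean()
        s2 = flux.var(ddof=1)               # sample variance of the rates
        mean_err2 = np.mean(flux_err ** 2)  # average noise contribution
        return (s2 - mean_err2) / mu ** 2

    # Hypothetical 10-bin light curve (count rates and 1-sigma errors)
    rates = [5.1, 4.8, 5.6, 5.9, 4.7, 5.2, 6.1, 4.9, 5.4, 5.3]
    errors = [0.2] * 10
    sigma_xs2 = normalized_excess_variance(rates, errors)
    ```

    A negative value is possible when intrinsic variability lies below the noise level; variability surveys typically treat such measurements as upper limits.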

  20. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    Science.gov (United States)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements, while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  1. A multiple more accurate Hardy-Littlewood-Polya inequality

    Directory of Open Access Journals (Sweden)

    Qiliang Huang

    2012-11-01

    Full Text Available By introducing multi-parameters and conjugate exponents and using Euler-Maclaurin’s summation formula, we estimate the weight coefficient and prove a multiple more accurate Hardy-Littlewood-Polya (H-L-P inequality, which is an extension of some earlier published results. We also prove that the constant factor in the new inequality is the best possible, and obtain its equivalent forms.

  2. A fast and accurate FPGA based QRS detection system.

    Science.gov (United States)

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach.
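    The median-based thresholding rule the abstract describes can be sketched as follows; the scale factor applied to the median and the sample amplitudes are hypothetical illustrations, not values from the paper:

    ```python
    from collections import deque

    def make_threshold_updater(history_len=8, scale=0.7):
        """Adaptive QRS threshold: a fraction of the median of the last
        `history_len` detected peak amplitudes (scale factor hypothetical)."""
        peaks = deque(maxlen=history_len)  # oldest peak drops out automatically

        def update(peak_amplitude):
            peaks.append(peak_amplitude)
            ordered = sorted(peaks)
            n = len(ordered)
            median = (ordered[n // 2] if n % 2 else
                      (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
            return scale * median  # threshold for the next detection cycle

        return update

    update = make_threshold_updater()
    for amp in [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.95, 1.05]:
        threshold = update(amp)
    ```

    Using the median rather than the mean makes the threshold robust to a single spuriously large or small peak, which suits a fixed-point FPGA pipeline well since an 8-element sort is cheap in hardware.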

  3. Optimizing cell arrays for accurate functional genomics

    Directory of Open Access Journals (Sweden)

    Fengler Sven

    2012-07-01

    Full Text Available Abstract Background Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them, it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding, rather than migration among neighboring spots, was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination depends on specific cell properties. Conclusions Previously published methodological work has focused on achieving high transfection rates in densely packed CA. Here, we focused on an equally important parameter: the interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.

  4. Important Nearby Galaxies without Accurate Distances

    Science.gov (United States)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  5. Subcorneal hematomas in excessive video game play.

    Science.gov (United States)

    Lennox, Maria; Rizzo, Jason; Lennox, Luke; Rothman, Ilene

    2016-01-01

    We report a case of subcorneal hematomas caused by excessive video game play in a 19-year-old man. The hematomas occurred in a setting of thrombocytopenia secondary to induction chemotherapy for acute myeloid leukemia. It was concluded that thrombocytopenia subsequent to prior friction from heavy use of a video game controller allowed for traumatic subcorneal hemorrhage of the hands. Using our case as a springboard, we summarize other reports with video game associated pathologies in the medical literature. Overall, cognizance of the popularity of video games and related pathologies can be an asset for dermatologists who evaluate pediatric patients.

  6. Propylene Glycol Poisoning From Excess Whiskey Ingestion

    Science.gov (United States)

    Ku, Kevin; Sue, Gloria R.

    2015-01-01

    In this report, we describe a case of high anion gap metabolic acidosis with a significant osmolal gap attributed to the ingestion of liquor containing propylene glycol. Recently, several reports have characterized severe lactic acidosis occurring in the setting of iatrogenic unintentional overdosing of medications that use propylene glycol as a diluent, including lorazepam and diazepam. To date, no studies have explored potential effects of excess propylene glycol in the setting of alcohol intoxication. Our patient endorsed drinking large volumes of cinnamon flavored whiskey, which was likely Fireball Cinnamon Whisky. To our knowledge, this is the first case of propylene glycol toxicity from an intentional ingestion of liquor containing propylene glycol. PMID:26904700

  7. Simple laboratory determination of excess oligosacchariduria.

    Science.gov (United States)

    Sewell, A C

    1981-02-01

    I describe a simple set of procedures for the screening of patients' urine to detect oligosaccharide-storage diseases. Urines from patients with mucolipidosis I, mannosidosis, fucosidosis, aspartylglycosaminuria, and type VI glycogen-storage disease can be distinguished by thin-layer chromatography. Patients with beta-galactosidase deficiency can be detected by use of a combination of ion-exchange and thin-layer chromatography. Excess sialyloligosaccharide excretion is detected by using gel filtration and a quantitative assay for neuraminic acid. The advantages of the system are detection of virtually all known disorders in which oligosaccharides are over-excreted, production of characteristic patterns, and small sample requirement.

  8. Excessive Internet use: implications for sexual behavior

    OpenAIRE

    Griffiths, M

    2000-01-01

    The Internet appears to have become an ever-increasing part of many areas of people's day-to-day lives. One area that deserves further examination surrounds sexual behavior and excessive Internet usage. It has been alleged by some academics that social pathologies are beginning to surface in cyberspace and have been referred to as “technological addictions.” Such research may have implications and insights into sexuality and sexual behavior. Therefore, this article examines the concept of “I...

  9. Excessive prices as abuse of dominance?

    DEFF Research Database (Denmark)

    la Cour, Lisbeth; Møllgaard, Peter

    2007-01-01

    In previous research, we found that the sole Danish producer of cement holds a dominant position in the Danish market for (grey) cement. We are able to identify an inelastic long-run demand relation that would seem to permit the exercise of market power. We aim to establish whether the dominant firm abused its position by charging excessive prices. We also test whether tightening of the Danish competition act has altered the pricing behaviour on the market. We discuss our results in the light of a Danish competition case against the dominant cement producer that was abandoned by the authority...

  10. [Excessive spending by misuse of clinical laboratory].

    Science.gov (United States)

    Benítez-Arvizu, Gamaliel; Novelo-Garza, Bárbara; Mendoza-Valdez, Antonia Lorena; Galván-Cervantes, Jorge; Morales-Rojas, Alejandro

    2016-01-01

    Seventy-five percent or more of a diagnosis comes from a proper medical history along with an excellent physical examination. This leaves to the clinical laboratory the function of supporting the findings, determining prognosis, classifying diseases, monitoring diseases and, in a minority of cases, establishing the diagnosis. In recent years there has been a global phenomenon in which the allocation of resources to health care has grown excessively; the Instituto Mexicano del Seguro Social is no exception, with an increase of 29% from 2009 to 2011; therefore, it is necessary to implement containment and reduction measures without compromising the quality of patient care.

  11. The Excess Radio Background and Fast Radio Transients

    CERN Document Server

    Kehayias, John; Weiler, Thomas J

    2015-01-01

    In the last few years ARCADE 2, combined with older experiments, has detected an additional radio background, measured as a temperature and ranging in frequency from 22 MHz to 10 GHz, not accounted for by known radio sources and the cosmic microwave background. One type of source which has not been considered in the radio background is that of fast transients (those with event times much less than the observing time). We present a simple estimate, and a more detailed calculation, for the contribution of radio transients to the diffuse background. As a timely example, we estimate the contribution from the recently-discovered fast radio bursts (FRBs). Although their contribution is likely 6 or 7 orders of magnitude too small (though there are large uncertainties in FRB parameters) to account for the ARCADE 2 excess, our development is general and so can be applied to any fast transient sources, discovered or yet to be discovered. We estimate parameter values necessary for transient sources to noticeably contrib...

  12. Connecting the LHC diphoton excess to the Galactic center gamma-ray excess

    CERN Document Server

    Huang, Xian-Jun; Zhou, Yu-Feng

    2016-01-01

    The recent LHC Run-2 data have shown a possible excess in diphoton events, suggesting the existence of a new resonance $\\phi$ with mass $M\\sim 750$~GeV. If $\\phi$ plays the role of a portal particle connecting the Standard Model and the invisible dark sector, the diphoton excess should be correlated with another photon excess, namely, the excess in the diffuse gamma rays towards the Galactic center, which can be interpreted by the annihilation of dark matter (DM). We investigate the necessary conditions for a consistent explanation for the two photon excesses, especially the requirement on the width-to-mass ratio $\\Gamma/M$ and $\\phi$ decay channels, in a collection of DM models where the DM particle can be scalar, fermionic and vector, and $\\phi$ can be generated through $s$-channel $gg$ fusion or $q\\bar q$ annihilation. We show that the minimally required $\\Gamma/M$ is determined by a single parameter proportional to $(m_{\\chi}/M)^{n}$, where the integer $n$ depends on the nature of the DM particle. We fi...

  13. Real-time total system error estimation:Modeling and application in required navigation performance

    Institute of Scientific and Technical Information of China (English)

    Fu Li; Zhang Jun; Li Rui

    2014-01-01

    In required navigation performance (RNP), total system error (TSE) is estimated to provide a timely warning in the presence of an excessive error. In this paper, by analyzing the underlying formation mechanism, the TSE estimation is modeled as the estimation fusion of a fixed bias and a Gaussian random variable. To address the challenge of high computational load induced by the accurate numerical method, two efficient methods are proposed for real-time application, called the circle tangent ellipse method (CTEM) and the line tangent ellipse method (LTEM). Compared with the accurate numerical method and the traditional scalar quantity summation method (SQSM), the computational load and accuracy of these four methods are extensively analyzed. The theoretical and experimental results both show that the computing time of the LTEM is approximately equal to that of the SQSM, while it is only about 1/30 and 1/6 of that of the numerical method and the CTEM, respectively. Moreover, the estimation result of the LTEM is parallel with that of the numerical method, but is more accurate than those of the SQSM and the CTEM. It is illustrated that the LTEM is quite appropriate for real-time TSE estimation in RNP applications.
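    The abstract models TSE as the fusion of a fixed bias and a Gaussian random variable, which suggests a simple alerting check in the spirit of scalar summation; the k-sigma factor and the numbers below are hypothetical illustrations, not the paper's CTEM/LTEM algorithms:

    ```python
    def tse_alert(bias, sigma, containment_limit, k=2.0):
        """Crude TSE check: a fixed bias magnitude plus k standard
        deviations of the Gaussian component (k and limits hypothetical).
        Returns True when the estimated TSE exceeds the containment limit."""
        tse_estimate = abs(bias) + k * sigma
        return tse_estimate > containment_limit

    # Illustrative RNP-1 style check: 1 NM containment, values in NM
    alert = tse_alert(bias=0.3, sigma=0.25, containment_limit=1.0)
    ```

    The scalar summation here is conservative (it adds worst-case contributions linearly), which is exactly the accuracy-versus-speed trade-off the paper's tangent-ellipse methods are designed to improve on.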

  14. Z-peaked excess in goldstini scenarios

    CERN Document Server

    Liew, Seng Pei; Mawatari, Kentarou; Sakurai, Kazuki; Vereecken, Matthias

    2015-01-01

    We study a possible explanation of a 3.0 $\\sigma$ excess recently reported by the ATLAS Collaboration in events with Z-peaked same-flavour opposite-sign lepton pair, jets and large missing transverse momentum in the context of gauge-mediated SUSY breaking with more than one hidden sector, the so-called goldstini scenario. In a certain parameter space, the gluino two-body decay chain $\\tilde g\\to g\\tilde\\chi^0_{1,2}\\to gZ\\tilde G'$ becomes dominant, where $\\tilde\\chi^0_{1,2}$ and $\\tilde G'$ are the Higgsino-like neutralino and the massive pseudo-goldstino, respectively, and gluino pair production can contribute to the signal. We find that a mass spectrum such as $m_{\\tilde g}\\sim 900$ GeV, $m_{\\tilde\\chi^0_{1,2}}\\sim 700$ GeV and $m_{\\tilde G'}\\sim 600$ GeV demonstrates the rate and the distributions of the excess, without conflicting with the stringent constraints from jets plus missing energy analyses and with the CMS constraint on the identical final state.

  15. Z-peaked excess in goldstini scenarios

    Directory of Open Access Journals (Sweden)

    Seng Pei Liew

    2015-11-01

    Full Text Available We study a possible explanation of a 3.0 σ excess recently reported by the ATLAS Collaboration in events with Z-peaked same-flavour opposite-sign lepton pair, jets and large missing transverse momentum in the context of gauge-mediated SUSY breaking with more than one hidden sector, the so-called goldstini scenario. In a certain parameter space, the gluino two-body decay chain $\\tilde g\\to g\\tilde\\chi^0_{1,2}\\to gZ\\tilde G'$ becomes dominant, where $\\tilde\\chi^0_{1,2}$ and $\\tilde G'$ are the Higgsino-like neutralino and the massive pseudo-goldstino, respectively, and gluino pair production can contribute to the signal. We find that a mass spectrum such as $m_{\\tilde g}\\sim 1000$ GeV, $m_{\\tilde\\chi^0_{1,2}}\\sim 800$ GeV and $m_{\\tilde G'}\\sim 600$ GeV demonstrates the rate and the distributions of the excess, without conflicting with the stringent constraints from jets plus missing energy analyses and with the CMS constraint on the identical final state.

  16. Mapping interfacial excess in atom probe data

    Energy Technology Data Exchange (ETDEWEB)

    Felfer, Peter, E-mail: peter.felfer@sydney.edu.au [School of Aerospace Mechanical and Mechatronic Engineering, The University of Sydney (Australia); Australian Centre for Microscopy and Microanalysis, The University of Sydney (Australia); Scherrer, Barbara [Australian Centre for Microscopy and Microanalysis, The University of Sydney (Australia); Eidgenössische Technische Hochschule Zürich (Switzerland); Demeulemeester, Jelle [Imec vzw, Kapeldreef 75, Heverlee 3001 (Belgium); Vandervorst, Wilfried [Imec vzw, Kapeldreef 75, Heverlee 3001 (Belgium); Instituut voor Kern- en Stralingsfysica, KU Leuven, Celestijnenlaan 200D, B-3001 Leuven (Belgium); Cairney, Julie M. [School of Aerospace Mechanical and Mechatronic Engineering, The University of Sydney (Australia); Australian Centre for Microscopy and Microanalysis, The University of Sydney (Australia)

    2015-12-15

    Using modern wide-angle atom probes, it is possible to acquire atomic-scale 3D data containing 1000s of nm² of interfaces. It is therefore possible to probe the distribution of segregated species across these interfaces. Here, we present techniques that allow the production of models for interfacial excess (IE) mapping and discuss the underlying considerations and sampling statistics. We also show how the same principles can be used to achieve thickness mapping of thin films. We demonstrate the effectiveness on example applications, including the analysis of segregation to a phase boundary in stainless steel, segregation to a metal–ceramic interface and the assessment of thickness variations of the gate oxide in a fin-FET. - Highlights: • Using computational geometry, interfacial excess can be mapped for various features in APT. • Suitable analysis models can be created by combining manual modelling and mesh generation algorithms. • Thin film thickness can be mapped with high accuracy using this technique.

  17. Surgery for residual convergence excess esotropia.

    Science.gov (United States)

    Patel, Himanshu I; Dawson, Emma; Lee, John

    2011-12-01

The outcome of bilateral medial rectus posterior fixation sutures +/- central tenotomy was assessed as a secondary procedure for residual convergence excess esotropia in 11 patients. Ten had previously undergone bilateral medial rectus recessions. One had recess/resect surgery on the deviating eye. The average preoperative near angle was 30 prism diopters, with a range of 16 to 45 prism diopters. Eight patients underwent bilateral medial rectus posterior fixation sutures with central tenotomy. Two had bilateral medial rectus posterior fixation sutures only, and one had a bilateral medial rectus posterior fixation suture, a lateral rectus resection, and an inferior oblique disinsertion. The postoperative near angle ranged from 4 to 30 prism diopters, with a mean of 12 prism diopters. Five patients demonstrated some stereopsis preoperatively, all needing bifocals. Postoperatively, nine patients demonstrated an improvement in stereopsis, none needing bifocals. Two showed smaller near angles and better control without bifocals. Final stereopsis ranged from 30 seconds of arc to 800 seconds of arc. We feel that bilateral medial rectus posterior fixation sutures with or without central tenotomy is a viable secondary procedure for residual convergence excess esotropia.

  18. Moderate excess of pyruvate augments osteoclastogenesis

    Directory of Open Access Journals (Sweden)

    Jenna E. Fong

    2013-03-01

Cell differentiation leads to adaptive changes in energy metabolism. Conversely, hyperglycemia induces malfunction of many body systems, including bone, suggesting that energy metabolism reciprocally affects cell differentiation. We investigated how the differentiation of bone-resorbing osteoclasts, large polykaryons formed through fusion and growth of cells of monocytic origin, is affected by excess of the energy substrate pyruvate and how energy metabolism changes during osteoclast differentiation. Surprisingly, small increases in pyruvate (1–2 mM above basal levels) augmented osteoclastogenesis in vitro and in vivo, while larger increases were not effective in vitro. Osteoclast differentiation increased cell mitochondrial activity and ATP levels, which were further augmented in energy-rich conditions. Conversely, the inhibition of respiration significantly reduced osteoclast number and size. AMP-activated protein kinase (AMPK) acts as a metabolic sensor, which is inhibited in energy-rich conditions. We found that osteoclast differentiation was associated with an increase in AMPK levels and a change in AMPK isoform composition. Increased osteoclast size induced by pyruvate (1 mM above basal levels) was prevented in the presence of the AMPK activator 5-amino-4-imidazole carboxamide ribonucleotide (AICAR). In keeping, inhibition of AMPK using dorsomorphin or siRNA to AMPKγ increased osteoclast size in control cultures to the level observed in the presence of pyruvate. Thus, we have found that a moderate excess of pyruvate enhances osteoclastogenesis, and that AMPK acts to tailor osteoclastogenesis to a cell's bioenergetic capacity.

  19. Extragalactic Gamma Ray Excess from Coma Supercluster Direction

    Indian Academy of Sciences (India)

    Pantea Davoudifar; S. Jalil Fatemi

    2011-09-01

The origin of the extragalactic diffuse gamma ray background is not accurately known, in part because many models must be considered either to compute the galactic diffuse gamma ray intensity or to account for the contribution of other extragalactic structures while surveying a specific portion of the sky. A more precise analysis of EGRET data, however, makes it possible to estimate the diffuse gamma ray flux in the Coma supercluster (i.e., Coma/A1367 supercluster) direction with a value of F(> 30 MeV) ≃ 1.9 × 10^-6 cm^-2 s^-1, which is considered to be an upper limit for the diffuse gamma ray flux due to the Coma supercluster. The related total intensity (on average) is calculated to be ∼ 5% of the actual diffuse extragalactic background. The calculated intensity makes it possible to constrain the origin of the extragalactic diffuse gamma ray background.

  20. Molecular simulation of excess isotherm and excess enthalpy change in gas-phase adsorption.

    Science.gov (United States)

    Do, D D; Do, H D; Nicholson, D

    2009-01-29

    We present a new approach to calculating excess isotherm and differential enthalpy of adsorption on surfaces or in confined spaces by the Monte Carlo molecular simulation method. The approach is very general and, most importantly, is unambiguous in its application to any configuration of solid structure (crystalline, graphite layer or disordered porous glass), to any type of fluid (simple or complex molecule), and to any operating conditions (subcritical or supercritical). The behavior of the adsorbed phase is studied using the partial molar energy of the simulation box. However, to characterize adsorption for comparison with experimental data, the isotherm is best described by the excess amount, and the enthalpy of adsorption is defined as the change in the total enthalpy of the simulation box with the change in the excess amount, keeping the total number (gas + adsorbed phases) constant. The excess quantities (capacity and energy) require a choice of a reference gaseous phase, which is defined as the adsorptive gas phase occupying the accessible volume and having a density equal to the bulk gas density. The accessible volume is defined as the mean volume space accessible to the center of mass of the adsorbate under consideration. With this choice, the excess isotherm passes through a maximum but always remains positive. This is in stark contrast to the literature where helium void volume is used (which is always greater than the accessible volume) and the resulting excess can be negative. Our definition of enthalpy change is equivalent to the difference between the partial molar enthalpy of the gas phase and the partial molar enthalpy of the adsorbed phase. There is no need to assume ideal gas or negligible molar volume of the adsorbed phase as is traditionally done in the literature. We illustrate this new approach with adsorption of argon, nitrogen, and carbon dioxide under subcritical and supercritical conditions.
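The excess amount defined in this abstract is the ensemble-average number of adsorptive molecules in the simulation box minus what the accessible volume would hold at bulk gas density. A one-line sketch of that definition (function and argument names are illustrative):

```python
def excess_amount(avg_total_molecules, bulk_density, accessible_volume):
    """Excess adsorbed amount: ensemble-average number of molecules in the
    simulation box minus the amount the accessible volume (defined via the
    adsorbate's centre of mass) would hold at the bulk gas density."""
    return avg_total_molecules - bulk_density * accessible_volume
```

With the accessible volume as reference, this quantity passes through a maximum with pressure but, as the abstract notes, always remains positive; with a helium void volume it can go negative.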

  1. Short inter-pregnancy intervals, parity, excessive pregnancy weight gain and risk of maternal obesity.

    Science.gov (United States)

    Davis, Esa M; Babineau, Denise C; Wang, Xuelei; Zyzanski, Stephen; Abrams, Barbara; Bodnar, Lisa M; Horwitz, Ralph I

    2014-04-01

To investigate the relationship among parity, length of the inter-pregnancy intervals and excessive pregnancy weight gain in the first pregnancy and the risk of obesity. Using a prospective cohort study of 3,422 non-obese, non-pregnant US women aged 14-22 years at baseline, adjusted Cox models were used to estimate the association among parity, inter-pregnancy intervals, and excessive pregnancy weight gain in the first pregnancy and the relative hazard rate (HR) of obesity. Compared to nulliparous women, primiparous women with excessive pregnancy weight gain in the first pregnancy had a HR of obesity of 1.79 (95% CI 1.40, 2.29); no significant difference was seen between primiparous women without excessive pregnancy weight gain in the first pregnancy and nulliparous women. Among women with the same pregnancy weight gain in the first pregnancy and the same inter-pregnancy interval pattern (between 12 and 18 months or ≥ 18 months), the HR of obesity increased 2.43-fold (95% CI 1.21, 4.89; p = 0.01) for every additional short inter-pregnancy interval. Among women with the same parity and inter-pregnancy interval pattern, women with excessive pregnancy weight gain in the first pregnancy had an HR of obesity 2.41 times higher (95% CI 1.81, 3.21) than women without. Parity did not increase obesity risk unless the primiparous women had excessive pregnancy weight gain in the first pregnancy, in which case their risk of obesity was greater. Multiparous women with the same excessive pregnancy weight gain in the first pregnancy and at least one additional short inter-pregnancy interval had a significant risk of obesity after childbirth. Perinatal interventions that prevent excessive pregnancy weight gain in the first pregnancy or lengthen the inter-pregnancy interval are necessary for reducing maternal obesity.

  2. Evaluation of effective dose and excess lifetime cancer risk from ...

    African Journals Online (AJOL)

    Evaluation of effective dose and excess lifetime cancer risk from indoor and outdoor gamma dose rate of university of Port Harcourt Teaching Hospital, ... In addition, the excess lifetime cancer risk (ELCR) calculated for indoor exposure ranges ...
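The excess lifetime cancer risk in such gamma-dose surveys is conventionally estimated as ELCR = AEDE × DL × RF, where AEDE is the annual effective dose equivalent, DL the average duration of life (customarily 70 years) and RF the fatal cancer risk factor (0.05 Sv⁻¹ for the public, per ICRP). A sketch under those commonly used constants (the function name is illustrative):

```python
def excess_lifetime_cancer_risk(annual_dose_msv, life_expectancy_y=70.0,
                                risk_factor_per_sv=0.05):
    """ELCR = AEDE x DL x RF.

    annual_dose_msv    : annual effective dose equivalent in mSv/y
    life_expectancy_y  : average duration of life (70 y is customary)
    risk_factor_per_sv : fatal cancer risk factor, 0.05 Sv^-1 (ICRP, public)
    """
    # convert mSv -> Sv, then multiply by exposure duration and risk factor
    return (annual_dose_msv / 1000.0) * life_expectancy_y * risk_factor_per_sv
```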

  3. Excessive Alcohol Use and Risks to Women's Health

    Science.gov (United States)

Excessive Alcohol Use and Risks to Women's Health. Although men ...

  4. Excessive Alcohol Use and Risks to Men's Health

    Science.gov (United States)

Excessive Alcohol Use and Risks to Men's Health. Men are ...

  5. Cool WISPs for stellar cooling excesses

    Energy Technology Data Exchange (ETDEWEB)

    Giannotti, Maurizio [Barry Univ., Miami Shores, FL (United States). Physical Sciences; Irastorza, Igor [Zaragoza Univ. (Spain). Dept. de Fisica Teorica; Redondo, Javier [Zaragoza Univ. (Spain). Dept. de Fisica Teorica; Max-Planck-Institut fuer Physik, Muenchen (Germany); Ringwald, Andreas [DESY Hamburg (Germany). Theory Group

    2015-12-15

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a preference for a mild non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP represents the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO.

  6. Spectrophotometric Study of Galaxies with UV Excess

    Science.gov (United States)

    Kazarian, M. A.; Karapetian, E. L.

    2004-01-01

    Results from a spectrophotometric study of 21 galaxies with UV excess are presented. The half widths (FWHM) and equivalent widths of observed spectrum lines of these galaxies, as well as the relative intensities of the emission lines observed in the spectrum of the galaxy Kaz243, are determined. It is conjectured that the latter galaxy has the properties of an Sy2 type galaxy. The electron densities and masses of the gaseous components are found for 15 galaxies, along with the masses of 8 galaxies for which the ratio M/L has been calculated. It is shown that the spectral structures of these galaxies do not depend on whether they are members of physical systems or are isolated.

  7. New galaxies with ultraviolet excess. I

    Energy Technology Data Exchange (ETDEWEB)

    Kazarian, M.A.

    1979-07-01

    A list is given of 136 galaxies with ultraviolet excess found with the 40-in. Schmidt telescope of the Byurakan Observatory with a 1.5-deg objective prism. Of these, 58 were observed at the primary focus of the 2.6-m telescope of the Byurakan Observatory, and 12 at the primary focus of the 6-m telescope of the Special Astronomical Observatory of the USSR Academy of Sciences. These observations and Palomar Sky Survey prints were used for a morphological description of the galaxies. Descriptions are given of the spectra of 17 galaxies obtained with the 6-m telescope of the Special Astronomical Observatory, the 2.6-m telescope of the Byurakan Observatory, and 90-, 107-, and 200-in. telescopes in the United States.

  8. Propylene Glycol Poisoning From Excess Whiskey Ingestion

    Directory of Open Access Journals (Sweden)

    Courtney A. Cunningham MD

    2015-09-01

    Full Text Available In this report, we describe a case of high anion gap metabolic acidosis with a significant osmolal gap attributed to the ingestion of liquor containing propylene glycol. Recently, several reports have characterized severe lactic acidosis occurring in the setting of iatrogenic unintentional overdosing of medications that use propylene glycol as a diluent, including lorazepam and diazepam. To date, no studies have explored potential effects of excess propylene glycol in the setting of alcohol intoxication. Our patient endorsed drinking large volumes of cinnamon flavored whiskey, which was likely Fireball Cinnamon Whisky. To our knowledge, this is the first case of propylene glycol toxicity from an intentional ingestion of liquor containing propylene glycol.

  9. What controls deuterium excess in global precipitation?

    Directory of Open Access Journals (Sweden)

    S. Pfahl

    2013-08-01

Full Text Available The deuterium excess (d) of precipitation is widely used in the reconstruction of past climatic changes from ice cores. However, its most common interpretation as moisture source temperature cannot directly be inferred from present-day water isotope observations. Here, we use a new empirical relation between d and near-surface relative humidity (RH) together with reanalysis data to globally predict d of surface evaporation from the ocean. The very good quantitative agreement of the predicted hemispherically averaged seasonal cycle with observed d in precipitation indicates that moisture source relative humidity, and not sea surface temperature, is the main driver of d variability on seasonal time scales. There is no convincing evidence that RH might be less important for long-term palaeoclimatic d changes compared to moisture source temperature variations. Ice core d data may thus have to be reinterpreted, focusing on climatic influences on relative humidity during evaporation, in particular related to atmospheric circulation changes.
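The quantity d discussed in this record is Dansgaard's deuterium excess, defined from the two stable water isotope ratios as d = δD − 8·δ¹⁸O (all in per mil). A trivial helper for completeness:

```python
def deuterium_excess(delta_d_permil, delta_18o_permil):
    """Dansgaard's deuterium excess: d = dD - 8 * d18O (all in per mil).

    The slope of 8 is the global meteoric water line; d measures the
    deviation from it, set largely at the moisture source."""
    return delta_d_permil - 8.0 * delta_18o_permil
```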

  10. On dilatons and the LHC diphoton excess

    Science.gov (United States)

    Megías, Eugenio; Pujolàs, Oriol; Quirós, Mariano

    2016-05-01

    We study soft wall models that can embed the Standard Model and a naturally light dilaton. Exploiting the full capabilities of these models we identify the parameter space that allows to pass Electroweak Precision Tests with a moderate Kaluza-Klein scale, around 2 TeV. We analyze the coupling of the dilaton with Standard Model (SM) fields in the bulk, and discuss two applications: i) Models with a light dilaton as the first particle beyond the SM pass quite easily all observational tests even with a dilaton lighter than the Higgs. However the possibility of a 125 GeV dilaton as a Higgs impostor is essentially disfavored; ii) We show how to extend the soft wall models to realize a 750 GeV dilaton that could explain the recently reported diphoton excess at the LHC.

  11. Cool WISPs for stellar cooling excesses

    CERN Document Server

    Giannotti, Maurizio; Redondo, Javier; Ringwald, Andreas

    2015-01-01

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a preference for a mild non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP represents the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO.

  12. Di-photon excess illuminates dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Backović, Mihailo [Center for Cosmology, Particle Physics and Phenomenology - CP3,Universite Catholique de Louvain, Louvain-la-neuve (Belgium); Mariotti, Alberto [Theoretische Natuurkunde and IIHE/ELEM, Vrije Universiteit Brussel,Pleinlaan 2, B-1050 Brussels (Belgium); International Solvay Institutes,Pleinlaan 2, B-1050 Brussels (Belgium); Redigolo, Diego [Laboratoire de Physique Théorique et Hautes Energies, CNRS UMR 7589,Universiteé Pierre et Marie Curie, 4 place Jussieu, F-75005, Paris (France)

    2016-03-22

    We propose a simplified model of dark matter with a scalar mediator to accommodate the di-photon excess recently observed by the ATLAS and CMS collaborations. Decays of the resonance into dark matter can easily account for a relatively large width of the scalar resonance, while the magnitude of the total width combined with the constraint on dark matter relic density leads to sharp predictions on the parameters of the Dark Sector. Under the assumption of a rather large width, the model predicts a signal consistent with ∼300 GeV dark matter particle and ∼750 GeV scalar mediator in channels with large missing energy. This prediction is not yet severely bounded by LHC Run I searches and will be accessible at the LHC Run II in the jet plus missing energy channel with more luminosity. Our analysis also considers astro-physical constraints, pointing out that future direct detection experiments will be sensitive to this scenario.

  13. Excess compressibility in binary liquid mixtures.

    Science.gov (United States)

    Aliotta, F; Gapiński, J; Pochylski, M; Ponterio, R C; Saija, F; Salvato, G

    2007-06-14

    Brillouin scattering experiments have been carried out on some mixtures of molecular liquids. From the measurement of the hypersonic velocities we have evaluated the adiabatic compressibility as a function of the volume fraction. We show how the quadratic form of the excess compressibility dependence on the solute volume fraction can be derived by simple statistical effects and does not imply any interaction among the components of the system other than excluded volume effects. This idea is supported by the comparison of the experimental results with a well-established prototype model, consisting of a binary mixture of hard spheres with a nonadditive interaction potential. This naive model turns out to be able to produce a very wide spectrum of structural and thermodynamic features depending on values of its parameters. An attempt has made to understand what kind of structural information can be gained through the analysis of the volume fraction dependence of the compressibility.
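The chain from Brillouin data to excess compressibility is short: the measured hypersonic velocity v and density ρ give the adiabatic compressibility κ_s = 1/(ρv²), and the excess is the deviation from ideal volume-fraction mixing. A sketch of both steps (function names are illustrative; the paper's own analysis is more involved):

```python
def adiabatic_compressibility(density, sound_velocity):
    """kappa_s = 1 / (rho * v^2); Pa^-1 for SI inputs (kg/m^3, m/s)."""
    return 1.0 / (density * sound_velocity ** 2)

def excess_compressibility(kappa_mix, kappa_1, kappa_2, phi_1):
    """Deviation of the mixture's compressibility from ideal
    volume-fraction mixing of the two pure components."""
    return kappa_mix - (phi_1 * kappa_1 + (1.0 - phi_1) * kappa_2)
```

The abstract's point is that a quadratic dependence of this excess on volume fraction can already arise from excluded-volume statistics alone.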

  14. Faint Infrared-Excess Field Galaxies FROGs

    CERN Document Server

    Moustakas, L A; Zepf, S E; Bunker, A J

    1997-01-01

Deep near-infrared and optical imaging surveys in the field reveal a curious population of galaxies that are infrared-bright (I-K>4), yet with relatively blue optical colors (V-I<2). Their surface density at K~20 is high enough that, if placed at z>1 as our models suggest, their space densities are about one-tenth of phi-*. The colors of these ``faint red outlier galaxies'' (fROGs) may derive from exceedingly old underlying stellar populations, a dust-embedded starburst or AGN, or a combination thereof. Determining the nature of these fROGs, and their relation with the I-K>6 ``extremely red objects,'' has implications for our understanding of the processes that give rise to infrared-excess galaxies in general. We report on an ongoing study of several targets with HST & Keck imaging and Keck/LRIS multislit spectroscopy.

  15. Desaturation of excess intramyocellular triacylglycerol in obesity

    DEFF Research Database (Denmark)

    Haugaard, S B; Madsbad, S; Mu, Huiling;

    2010-01-01

OBJECTIVE: Excess intramyocellular triacylglycerol (IMTG), found especially in obese women, is slowly metabolized and, therefore, prone to longer exposure to intracellular desaturases. Accordingly, it was hypothesized that IMTG content correlates inversely with IMTG fatty acid (FA) saturation in sedentary subjects. In addition, it was validated if IMTG palmitic acid is associated with insulin resistance as suggested earlier. DESIGN: Cross-sectional human study. SUBJECTS: In skeletal muscle biopsies, which were obtained from sedentary subjects (34 women, age 48+/-2 years (27 obese including 7 type 2 diabetes (T2DM), body mass index (BMI)=35.5+/-0.8 kg m(-2)) and 25 men, age 49+/-2 years (20 obese including 6 T2DM, BMI=35.8+/-0.8 kg m(-2))), IMTG FA composition was determined by gas-liquid chromatography after separation from phospholipids by thin-layer chromatography. RESULTS: Independently of gender...

  16. Excess plutonium disposition using ALWR technology

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, A. (ed.); Buckner, M.R.; Radder, J.A.; Angelos, J.G.; Inhaber, H.

    1993-02-01

The Office of Nuclear Energy of the Department of Energy chartered the Plutonium Disposition Task Force in August 1992. The Task Force was created to assess the range of practicable means of disposition of excess weapons-grade plutonium. Within the Task Force, working groups were formed to consider: (1) storage, (2) disposal, and (3) fission options for this disposition, and a separate group to evaluate nonproliferation concerns of each of the alternatives. As a member of the Fission Working Group, the Savannah River Technology Center acted as a sponsor for light water reactor (LWR) technology. The information contained in this report details the submittal that was made to the Fission Working Group of the technical assessment of LWR technology for plutonium disposition. The following aspects were considered: (1) proliferation issues, (2) technical feasibility, (3) technical availability, (4) economics, (5) regulatory issues, and (6) political acceptance.

  17. Invisible excess of sense in social interaction.

    Science.gov (United States)

    Koubová, Alice

    2014-01-01

    The question of visibility and invisibility in social understanding is examined here. First, the phenomenological account of expressive phenomena and key ideas of the participatory sense-making theory are presented with regard to the issue of visibility. These accounts plead for the principal visibility of agents in interaction. Although participatory sense-making does not completely rule out the existence of opacity and invisible aspects of agents in interaction, it assumes the capacity of agents to integrate disruptions, opacity and misunderstandings in mutual modulation. Invisibility is classified as the dialectical counterpart of visibility, i.e., as a lack of sense whereby the dynamics of perpetual asking, of coping with each other and of improvements in interpretation are brought into play. By means of empirical exemplification this article aims at demonstrating aspects of invisibility in social interaction which complement the enactive interpretation. Without falling back into Cartesianism, it shows through dramaturgical analysis of a practice called "(Inter)acting with the inner partner" that social interaction includes elements of opacity and invisibility whose role is performative. This means that opacity is neither an obstacle to be overcome with more precise understanding nor a lack of meaning, but rather an excess of sense, a "hiddenness" of something real that has an "active power" (Merleau-Ponty). In this way it contributes to on-going social understanding as a hidden potentiality that naturally enriches, amplifies and in part constitutes human participation in social interactions. It is also shown here that this invisible excess of sense already functions on the level of self-relationship due to the essential self-opacity and self-alterity of each agent of social interaction. The analysis consequently raises two issues: the question of the enactive ethical stance toward the alterity of the other and the question of the autonomy of the self

  18. Vitamin paradox in obesity: Deficiency or excess?

    Science.gov (United States)

    Zhou, Shi-Sheng; Li, Da; Chen, Na-Na; Zhou, Yiming

    2015-08-25

    Since synthetic vitamins were used to fortify food and as supplements in the late 1930s, vitamin intake has significantly increased. This has been accompanied by an increased prevalence of obesity, a condition associated with diabetes, hypertension, cardiovascular disease, asthma and cancer. Paradoxically, obesity is often associated with low levels of fasting serum vitamins, such as folate and vitamin D. Recent studies on folic acid fortification have revealed another paradoxical phenomenon: obesity exhibits low fasting serum but high erythrocyte folate concentrations, with high levels of serum folate oxidation products. High erythrocyte folate status is known to reflect long-term excess folic acid intake, while increased folate oxidation products suggest an increased folate degradation because obesity shows an increased activity of cytochrome P450 2E1, a monooxygenase enzyme that can use folic acid as a substrate. There is also evidence that obesity increases niacin degradation, manifested by increased activity/expression of niacin-degrading enzymes and high levels of niacin metabolites. Moreover, obesity most commonly occurs in those with a low excretory reserve capacity (e.g., due to low birth weight/preterm birth) and/or a low sweat gland activity (black race and physical inactivity). These lines of evidence raise the possibility that low fasting serum vitamin status in obesity may be a compensatory response to chronic excess vitamin intake, rather than vitamin deficiency, and that obesity could be one of the manifestations of chronic vitamin poisoning. In this article, we discuss vitamin paradox in obesity from the perspective of vitamin homeostasis.

  19. 46 CFR 45.65 - Excess sheer limitations.

    Science.gov (United States)

    2010-10-01

[Garbled worksheet; recoverable content follows.] Aft sheer: the ordinate differences at the perpendicular, L/6, L/3 and midships stations are summed and divided by 8 to give the aft excess or deficiency; forward sheer is computed the same way from the forward-half stations (FP, L/6, L/3, midships). Footnote 1: L in standard sheer = L or 500, whichever is less. ... less than 0.1 L before and abaft amidships, the decrease must be reduced by linear interpolation. (c...

  20. The intrinsic values and color excesses of (B-V) for 115 F-K supergiants

    Science.gov (United States)

    Kelsall, T.

    1972-01-01

Color excesses in B-V are determined indirectly from a study of Stromgren's b-y color for a sample of F0 - K5 supergiants. The resulting E(B-V)'s are estimated to have an expected precision of ±0.05. With the calculated color excesses and the observed values of B-V given in various catalogs, the run of B-V with spectral type is obtained. This B-V/(spectral type) relationship is compared with those found previously by other investigators.
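The color excess used throughout this record is simply the observed color minus the intrinsic one, E(B−V) = (B−V)_obs − (B−V)₀; with the standard diffuse-ISM ratio R_V ≈ 3.1 it also fixes the visual extinction A_V = R_V·E(B−V). A sketch (function names are illustrative):

```python
def color_excess(bv_observed, bv_intrinsic):
    """E(B-V): observed (B-V) color minus intrinsic (B-V)_0."""
    return bv_observed - bv_intrinsic

def visual_extinction(ebv, r_v=3.1):
    """A_V = R_V * E(B-V); R_V = 3.1 is the standard diffuse-ISM value."""
    return r_v * ebv
```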

  1. 24 CFR 320.8 - Excess Yield Securities.

    Science.gov (United States)

    2010-04-01

    ... MORTGAGE-BACKED SECURITIES Pass-Through Type Securities § 320.8 Excess Yield Securities. (a) Definition. Excess Yield Securities are securities backed by the excess servicing income relating to mortgages underlying previously issued Ginnie Mae mortgage-backed securities. (b) GNMA guaranty. The Association...

  2. Estimating Cosmological Parameter Covariance

    CERN Document Server

    Taylor, Andy

    2014-01-01

    We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
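The bias this abstract discusses for sampled data covariances can be illustrated with the standard correction for the inverse of a Wishart-distributed sample covariance (the Hartlap factor): for n_s Gaussian realizations and n_d data points, ⟨Ĉ⁻¹⟩ = C⁻¹ (n_s − 1)/(n_s − n_d − 2), so the naive precision matrix must be multiplied by (n_s − n_d − 2)/(n_s − 1). A sketch of that correction (not the paper's own parameter-covariance estimator):

```python
import numpy as np

def unbiased_precision(sample_cov, n_sims):
    """De-bias the inverse of a sample covariance estimated from n_sims
    Gaussian realizations by the Hartlap factor (n_s - n_d - 2)/(n_s - 1)."""
    n_data = sample_cov.shape[0]
    if n_sims <= n_data + 2:
        raise ValueError("need n_sims > n_data + 2 for a finite mean inverse")
    factor = (n_sims - n_data - 2) / (n_sims - 1)
    return factor * np.linalg.inv(sample_cov)
```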

  3. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance.

  4. Understanding the Code: keeping accurate records.

    Science.gov (United States)

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met.

  5. The LHC diphoton excess as a W-ball

    CERN Document Server

    Arbuzov, B A

    2016-01-01

We consider a possibility of the 750 GeV diphoton excess at the LHC to correspond to a heavy $WW$ zero spin resonance. The resonance appears due to the would-be anomalous triple interaction of the weak bosons, which is defined by the well-known coupling constant $\lambda$. The $\gamma\gamma\,\,750\,GeV$ anomaly may correspond to a weak isotopic spin 0 pseudoscalar state. We obtain estimates for the effect, which qualitatively agree with ATLAS data. Effects are predicted in a production of $W^+ W^-, (Z,\gamma)(Z,\gamma)$ via resonance $X_{PS}$ with $M_{PS} \simeq 750\,GeV$, which could be reliably checked at the upgraded LHC at $\sqrt{s}\,=\,13\,TeV$. In the framework of an approach to the spontaneous generation of the triple anomalous interaction its coupling constant is estimated to be $\lambda = -\,0.020\pm 0.005$, in agreement with existing restrictions. A specific prediction of the hypothesis is the significant effect in the decay channel $X_{PS} \to \gamma\,l^+\,l^-\,(l = e,\,\mu)$, whose branching ratio occurs t...

  6. A Comprehensive Census of Nearby Infrared Excess Stars

    Science.gov (United States)

    Cotten, Tara H.; Song, Inseok

    2016-07-01

    The conclusion of the Wide-Field Infrared Survey Explorer (WISE) mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and exploit all available data for future missions such as the James Webb Space Telescope. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and all-sky WISE (AllWISE) catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3σ or 5σ significance of excess in the mid- and far-infrared. Through procedures including spectral energy distribution fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 “Prime” infrared excess stars, of which 74 are new sources of excess, and >1200 are “Reserved” stars, of which 950 are new sources of excess. The main catalog of infrared excess stars are nearby, bright, and either demonstrate excess in more than one passband or have infrared spectroscopy confirming the infrared excess. This study identifies stars that display a spectral energy distribution suggestive of a secondary or post-protoplanetary generation of dust, and they are ideal targets for future optical and infrared imaging observations. The final catalogs of stars summarize the past work using infrared excess to detect dust disks, and with the most extensive compilation of infrared excess stars (˜1750) to date, we investigate various relationships among stellar and disk parameters.
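The "significance of excess" used to select these stars is, in its simplest form, the observed-minus-photospheric flux divided by the combined uncertainty, with sources above 3σ or 5σ retained. A sketch of that selection statistic (names are illustrative; the paper defines it per spectral-type division):

```python
def excess_significance(flux_obs, flux_phot, err_obs, err_phot):
    """Sigma-significance of an infrared excess: observed minus predicted
    photospheric flux, over the quadrature-summed 1-sigma uncertainties."""
    return (flux_obs - flux_phot) / (err_obs ** 2 + err_phot ** 2) ** 0.5
```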

  7. Excessive daytime sleepiness

    Directory of Open Access Journals (Sweden)

    Lia Rita Azeredo Bittencourt

    2005-05-01

    Full Text Available Sleepiness is a biological function, defined as an increased probability of falling asleep. Excessive sleepiness (ES), or hypersomnia, refers to an increased propensity to sleep with a subjective compulsion to sleep, involuntary naps, and sleep attacks when sleep is inappropriate. The main causes of excessive sleepiness are chronic sleep deprivation (insufficient sleep), obstructive sleep apnea-hypopnea syndrome (OSAHS), narcolepsy, restless legs syndrome/periodic limb movements (RLS/PLM), circadian rhythm disorders, use of drugs and medications, and idiopathic hypersomnia. The main consequences are impaired performance in studies and at work, in family and social relationships, neuropsychological and cognitive alterations, and increased risk of accidents. Treatment of excessive sleepiness should target its specific causes: increased sleep time and sleep hygiene in voluntary sleep deprivation; CPAP (continuous positive airway pressure) in obstructive sleep apnea-hypopnea syndrome; exercise and dopaminergic agents in restless legs syndrome/periodic limb movements; phototherapy and melatonin in circadian rhythm disorders; withdrawal of drugs that cause excessive sleepiness; and use of wakefulness-promoting agents.

  8. Posterior fixation suture and convergence excess esotropia.

    Science.gov (United States)

    Steffen; Auffarth; Kolling

    1998-09-01

    The present study investigates the results of Cuppers' 'Fadenoperation' in patients with non-accommodative convergence excess esotropia. Particular attention is given to postoperative eye alignment at distance fixation. Group 1 (n=96) included patients with a 'normal' convergence excess. The manifest near angles (mean ET 16.73° ± 6.33°, range 4°-33°) were roughly twice the size of the distance angles (mean ET 6.50° ± 3.62°, range 0°-14°). These patients were treated with a bilateral fadenoperation of the medial recti without additional eye muscle surgery. Three months after surgery, the mean postoperative angles were XT 0.5° ± 3.3° (range XT 11°-ET 5°) for distance fixation, and ET 2.7° ± 3.6° (range XT 5°-ET 14°) for near fixation, respectively. Postoperative convergent angles at near fixation >ET 10° were present in two patients (1.9%). Group 2 (n=21) included patients with a mean preoperative distance angle of ET 9.2° ± 3.7° (range 6°-16°) and a mean preoperative near angle of ET 23.4° ± 3.1° (range 16°-31°). These patients were operated on with a bilateral fadenoperation of the medial recti and a simultaneous recession of one or both medial rectus muscles. Mean postoperative angles were XT 0.5° ± 4.6° (range XT 12°-ET 7°) for distance fixation and ET 1.4° ± 4.5° (range XT 8°-ET 13°) for near fixation, respectively. In this group, 2 patients (10.6%) had a postoperative exotropia >XT 5° at distance fixation, and two patients had residual esotropia >ET 10° at near fixation. Group 3 (n=17) included patients with a pronounced non-accommodative convergence excess. Near angle values (mean ET 17.8° ± 5.3°, range ET 7°-26°) were several times higher than the distance

  9. Mechanisms for Reduced Excess Sludge Production in the Cannibal Process.

    Science.gov (United States)

    Labelle, Marc-André; Dold, Peter L; Comeau, Yves

    2015-08-01

    Reducing excess sludge production is increasingly attractive as a result of rising costs and constraints with respect to sludge treatment and disposal. One technology whose mechanisms remain poorly understood is the Cannibal process, for which very low sludge yields have been reported. The objective of this work was to use modeling to characterize excess sludge production at a full-scale Cannibal facility, which provides a long sludge retention time and removes trash and grit by physical processes. The facility was characterized using its historical data, discussions with the staff, and a sampling campaign conducted to prepare a solids inventory and an overall mass balance. At the evaluated sludge retention time of 400 days, the daily loss of suspended solids to the effluent plus the waste activated sludge solids contributed approximately as much as the solids wasted daily as trash and grit from the solids separation module. The overall sludge production was estimated to be 0.14 g total suspended solids produced/g chemical oxygen demand removed. The essential functions of the Cannibal process for the reduction of sludge production appear to be to remove trash and grit from the sludge by the physical processes of microscreening and hydrocycloning, respectively, and to provide a long sludge retention time, which allows the slow degradation of the "unbiodegradable" influent particulate organics (XU,Inf) and the endogenous residue (XE). The high energy demand of 1.6 kWh/m³ of treated wastewater at the studied facility limits the niche of the Cannibal process to small- to medium-sized facilities in which sludge disposal costs are high but electricity costs are low.
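The reported yield is a simple mass-balance ratio of solids produced to substrate removed. A minimal sketch (the mass figures below are hypothetical; only the reported 0.14 g TSS/g COD yield comes from the abstract):

```python
# Observed sludge yield: grams of total suspended solids (TSS) produced
# per gram of chemical oxygen demand (COD) removed. The 140/1000 figures
# are illustrative placeholders chosen to reproduce the reported yield.
def observed_yield(tss_produced_g, cod_removed_g):
    return tss_produced_g / cod_removed_g

print(round(observed_yield(140.0, 1000.0), 2))  # 0.14 g TSS / g COD
```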

  10. Retinoic Acid Excess Impairs Amelogenesis Inducing Enamel Defects

    Science.gov (United States)

    Morkmued, Supawich; Laugel-Haushalter, Virginie; Mathieu, Eric; Schuhbaur, Brigitte; Hemmerlé, Joseph; Dollé, Pascal; Bloch-Zupan, Agnès; Niederreither, Karen

    2017-01-01

    Abnormalities of enamel matrix proteins deposition, mineralization, or degradation during tooth development are responsible for a spectrum of either genetic diseases termed Amelogenesis imperfecta or acquired enamel defects. To assess if environmental/nutritional factors can exacerbate enamel defects, we investigated the role of the active form of vitamin A, retinoic acid (RA). Robust expression of RA-degrading enzymes Cyp26b1 and Cyp26c1 in developing murine teeth suggested RA excess would reduce tooth hard tissue mineralization, adversely affecting enamel. We employed a protocol where RA was supplied to pregnant mice as a food supplement, at a concentration estimated to result in moderate elevations in serum RA levels. This supplementation led to severe enamel defects in adult mice born from pregnant dams, with most severe alterations observed for treatments from embryonic day (E)12.5 to E16.5. We identified the enamel matrix proteins enamelin (Enam), ameloblastin (Ambn), and odontogenic ameloblast-associated protein (Odam) as target genes affected by excess RA, exhibiting mRNA reductions of over 20-fold in lower incisors at E16.5. RA treatments also affected bone formation, reducing mineralization. Accordingly, craniofacial ossification was drastically reduced after 2 days of treatment (E14.5). Massive RNA-sequencing (RNA-seq) was performed on E14.5 and E16.5 lower incisors. Reductions in Runx2 (a key transcriptional regulator of bone and enamel differentiation) and its targets were observed at E14.5 in RA-exposed embryos. RNA-seq analysis further indicated that bone growth factors, extracellular matrix, and calcium homeostasis were perturbed. Genes mutated in human AI (ENAM, AMBN, AMELX, AMTN, KLK4) were reduced in expression at E16.5. Our observations support a model in which elevated RA signaling at fetal stages affects dental cell lineages. Thereafter enamel protein production is impaired, leading to permanent enamel alterations. PMID:28111553

  11. Misperceived pre-pregnancy body weight status predicts excessive gestational weight gain: findings from a US cohort study

    Directory of Open Access Journals (Sweden)

    Rifas-Shiman Sheryl L

    2008-12-01

    Full Text Available Abstract Background Excessive gestational weight gain promotes poor maternal and child health outcomes. Weight misperception is associated with weight gain in non-pregnant women, but no data exist during pregnancy. The purpose of this study was to examine the association of misperceived pre-pregnancy body weight status with excessive gestational weight gain. Methods At study enrollment, participants in Project Viva reported weight, height, and perceived body weight status by questionnaire. Our study sample comprised 1537 women who had either normal or overweight/obese pre-pregnancy BMI. We created 2 categories of pre-pregnancy body weight status misperception: normal weight women who identified themselves as overweight ('overassessors') and overweight/obese women who identified themselves as average or underweight ('underassessors'). Women who correctly perceived their body weight status were classified as either normal weight or overweight/obese accurate assessors. We performed multivariable logistic regression to determine the odds of excessive gestational weight gain according to 1990 Institute of Medicine guidelines. Results Of the 1029 women with normal pre-pregnancy BMI, 898 (87%) accurately perceived and 131 (13%) overassessed their weight status. 508 women were overweight/obese, of whom 438 (86%) accurately perceived and 70 (14%) underassessed their pre-pregnancy weight status. By the end of pregnancy, 823 women (54%) gained excessively. Compared with normal weight accurate assessors, the adjusted odds of excessive gestational weight gain were 2.0 (95% confidence interval [CI]: 1.3, 3.0) in normal weight overassessors, 2.9 (95% CI: 2.2, 3.9) in overweight/obese accurate assessors, and 7.6 (95% CI: 3.4, 17.0) in overweight/obese underassessors. Conclusion Misperceived pre-pregnancy body weight status was associated with excessive gestational weight gain among both normal weight and overweight/obese women, with the greatest likelihood of excessive
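The comparison behind these results is an odds ratio: the odds of excessive gain in an exposure group divided by the odds in the reference group. A minimal illustration of a crude (unadjusted) odds ratio from a 2x2 table — the counts below are hypothetical, and the study's own figures are adjusted ORs from multivariable logistic regression, which this sketch does not reproduce:

```python
# Crude odds ratio from a 2x2 table:
#   OR = (exposed with outcome / exposed without) /
#        (reference with outcome / reference without)
def odds_ratio(exposed_yes, exposed_no, ref_yes, ref_no):
    return (exposed_yes / exposed_no) / (ref_yes / ref_no)

# Hypothetical counts: 80/51 overassessors vs 450/448 accurate assessors
print(round(odds_ratio(80, 51, 450, 448), 2))  # 1.56
```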

  12. On the Fluctuation Induced Excess Conductivity in Stainless Steel Sheathed MgB2 Tapes

    Directory of Open Access Journals (Sweden)

    Suchitra Rajput

    2013-01-01

    Full Text Available We report analyses of fluctuation-induced excess conductivity in the resistivity-temperature behavior of in situ prepared MgB2 tapes. Scaling functions for critical fluctuations are employed to investigate the excess conductivity of these tapes around the transition. Two scaling models for excess conductivity in the absence of a magnetic field, namely the Aslamazov-Larkin model and the Lawrence-Doniach model, have been employed for the study. Fitting the experimental resistivity-temperature data with these models indicates the three-dimensional nature of conduction of the carriers, as opposed to the 2D character exhibited by the HTSCs. The coherence length amplitude estimated from the fitted model is ~21 Å.
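The dimensionality discrimination described here rests on the critical exponent of the excess conductivity: the 3D Aslamazov-Larkin form predicts Δσ ∝ ε^(-1/2) in the reduced temperature ε = (T - Tc)/Tc, whereas the 2D form gives ε^(-1). A minimal sketch of recovering that exponent from a log-log fit, using synthetic data (all numbers are illustrative, not the paper's measurements):

```python
import numpy as np

# Synthetic 3D Aslamazov-Larkin-like excess conductivity above Tc.
Tc = 39.0                         # K, typical MgB2 transition temperature
T = np.linspace(39.5, 45.0, 50)   # temperatures above the transition
eps = (T - Tc) / Tc               # reduced temperature

A = 1.0e4                         # arbitrary amplitude
delta_sigma = A * eps ** -0.5     # Δσ ∝ ε^(-1/2) (3D AL form)

# Recover the critical exponent as the slope of log(Δσ) vs log(ε):
# -0.5 indicates 3D fluctuations, -1.0 would indicate 2D.
slope, _ = np.polyfit(np.log(eps), np.log(delta_sigma), 1)
print(round(slope, 3))  # -0.5
```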

  13. What controls deuterium excess in global precipitation?

    Directory of Open Access Journals (Sweden)

    S. Pfahl

    2014-04-01

    Full Text Available The deuterium excess (d of precipitation is widely used in the reconstruction of past climatic changes from ice cores. However, its most common interpretation as moisture source temperature cannot directly be inferred from present-day water isotope observations. Here, we use a new empirical relation between d and near-surface relative humidity (RH together with reanalysis data to globally predict d of surface evaporation from the ocean. The very good quantitative agreement of the predicted hemispherically averaged seasonal cycle with observed d in precipitation indicates that moisture source relative humidity, and not sea surface temperature, is the main driver of d variability on seasonal timescales. Furthermore, we review arguments for an interpretation of long-term palaeoclimatic d changes in terms of moisture source temperature, and we conclude that there remains no sufficient evidence that would justify neglecting the influence of RH on such palaeoclimatic d variations. Hence, we suggest that either the interpretation of d variations in palaeorecords should be adapted to reflect climatic influences on RH during evaporation, in particular atmospheric circulation changes, or new arguments for an interpretation in terms of moisture source temperature will have to be provided based on future research.
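For reference, the deuterium excess itself is the standard Dansgaard combination of the two stable-isotope ratios, d = δD - 8·δ¹⁸O (both in permil). A trivial sketch with illustrative values:

```python
# Deuterium excess as conventionally defined: d = dD - 8 * d18O,
# with dD and d18O in permil relative to VSMOW. Inputs are illustrative.
def deuterium_excess(delta_D, delta_18O):
    return delta_D - 8.0 * delta_18O

print(deuterium_excess(-70.0, -10.0))  # 10.0 permil
```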

  14. Optical excess of dim isolated neutron stars

    Science.gov (United States)

    Ertan, Ü.; Çalışkan, Ş.; Alpar, M. A.

    2017-09-01

    The optical excess in the spectra of dim isolated neutron stars (XDINs) is a significant fraction of their rotational energy loss rate. This is strikingly different from the situation in isolated radio pulsars. We investigate this problem in the framework of the fallback disc model. The optical spectra can be powered by magnetic stresses on the innermost disc matter, as the energy dissipated is emitted as blackbody radiation mainly from the inner rim of the disc. In the fallback disc model, XDINs are the sources evolving in the propeller phase with similar torque mechanisms. In this model, the ratio of the total magnetic work that heats up the inner disc matter is expected to be similar for different XDINs. Optical luminosities that are calculated consistently with the optical spectra and the theoretical constraints on the inner disc radii give very similar ratios of the optical luminosity to the rotational energy loss rate for all these sources. These ratios indicate that a significant fraction of the magnetic torque heats up the disc matter while the remaining fraction expels disc matter from the system. For XDINs, the contribution of heating by X-ray irradiation to the optical luminosity is negligible in comparison with the magnetic heating. The correlation we expect between the optical luminosities and the rotational energy loss rates of XDINs can be a property of the systems with low X-ray luminosities, in particular those in the propeller phase.

  15. Armodafinil in the treatment of excessive sleepiness.

    Science.gov (United States)

    Rosenberg, Russell; Bogan, Richard

    2010-01-01

    Excessive sleepiness (ES) is a widespread condition, commonly the result of a sleep/wake disorder such as obstructive sleep apnea (OSA), shift-work disorder (SWD), or narcolepsy. ES poses significant health and safety concerns in patients. Numerous interventions are available to treat the underlying causes of ES and ES itself, including behavioral measures, mechanical devices, and pharmacologic agents. This review explores the evidence supporting the use of armodafinil to treat ES associated with OSA, SWD, and narcolepsy. Armodafinil is an oral non-amphetamine wake-promoting agent, the R-isomer of racemic modafinil. Armodafinil and modafinil share many clinical and pharmacologic properties and are distinct from central nervous system stimulants; however, the mechanisms of action of modafinil and armodafinil are poorly characterized. Compared with modafinil, the wake-promoting effects of armodafinil persist later in the day. It is for this reason that armodafinil may be a particularly appropriate therapy for patients with persistent ES due to OSA, SWD, or narcolepsy.

  16. Vergence adaptation in subjects with convergence excess.

    Science.gov (United States)

    Nilsson, Maria; Brautaset, Rune L

    2011-03-01

    The main purpose of this study was to evaluate the vergence adaptive ability in subjects diagnosed with convergence excess (CE) phoria (i.e., subjects with an esophoric shift from distance to near but without an intermittent tropia at near). Vergence adaptation was measured at far and near with both base-in and base-out prisms using a "flashed" Maddox rod technique in 20 control subjects and 16 subjects with CE. In addition, accommodative adaptation and the stimulus AC/A and CA/C cross-links were measured. The AC/A and CA/C ratios were found to be high and low, respectively, and accommodative adaptation was found to be reduced in CE subjects as compared with the controls (P<0.005), all as predicted by the present theory. However, vergence adaptive ability was found to be reduced in the CE subjects at both distance and near and in response to both base-in and base-out prisms (P=0.002). This finding is not in accordance with and is difficult to reconcile with the present theory of CE.

  17. The Neurometabolic Fingerprint of Excessive Alcohol Drinking

    Science.gov (United States)

    Meinhardt, Marcus W; Sévin, Daniel C; Klee, Manuela L; Dieter, Sandra; Sauer, Uwe; Sommer, Wolfgang H

    2015-01-01

    'Omics' techniques are widely used to identify novel mechanisms underlying brain function and pathology. Here we applied a novel metabolomics approach to further ascertain the role of frontostriatal brain regions for the expression of addiction-like behaviors in rat models of alcoholism. Rats were made alcohol dependent via chronic intermittent alcohol vapor exposure. Following a 3-week abstinence period, rats had continuous access to alcohol in a two-bottle, free-choice paradigm for 7 weeks. Nontargeted flow injection time-of-flight mass spectrometry was used to assess global metabolic profiles of two cortical (prelimbic and infralimbic) and two striatal (accumbens core and shell) brain regions. Alcohol consumption produces pronounced global effects on neurometabolomic profiles, leading to a clear separation of metabolic phenotypes between treatment groups. Further comparisons of regional tissue levels of various metabolites, most notably dopamine and Met-enkephalin, allow the extrapolation of alcohol consumption history. Finally, a high-drinking metabolic fingerprint was identified, indicating a distinct alteration of central energy metabolism in the accumbens shell of excessively drinking rats that could indicate a so far unrecognized pathophysiological mechanism in alcohol addiction. In conclusion, global metabolic profiling from distinct brain regions by mass spectrometry identifies profiles reflective of an animal's drinking history and provides a versatile tool to further investigate pathophysiological mechanisms in alcohol dependence. PMID:25418809

  18. Quirky Explanations for the Diphoton Excess

    CERN Document Server

    Curtin, David

    2015-01-01

    We propose two simple quirk models to explain the recently reported 750 GeV diphoton excesses at ATLAS and CMS. It is already well known that a real singlet scalar $\phi$ with Yukawa couplings $\phi \bar X X$ to vector-like fermions $X$ with mass $m_X > m_\phi/2$ can easily explain the observed signal, provided $X$ carries both SM color and electric charge. We instead consider first the possibility that the pair production of a fermion, charged under both SM gauge groups and a confining $SU(3)_v$ gauge group, is responsible. If pair produced, it forms a quirky bound state, which promptly annihilates into gluons, photons, or v-gluons. This has the advantage of being able to explain a sizable width for the diphoton resonance, but is already in some tension with existing displaced searches and dijet resonance bounds. We therefore propose a hybrid Quirk-Scalar model, in which the fermion of the simple $\phi \bar X X$ toy model is charged under the additional $SU(3)_v$ confining gauge group. Constraints on the new ...

  19. Di-photon excess at LHC and the gamma ray excess at the Galactic Centre

    Energy Technology Data Exchange (ETDEWEB)

    Hektor, Andi [National Institute of Chemical Physics and Biophysics,Rävala pst. 10, 10143 Tallinn (Estonia); Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala pst. 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia)

    2016-07-25

    Motivated by the recent indications for a 750 GeV resonance in the di-photon final state at the LHC, in this work we analyse the compatibility of the excess with the broad photon excess detected at the Galactic Centre. Intriguingly, by analysing the parameter space of an effective model where a 750 GeV pseudoscalar particle mediates the interaction between the Standard Model and a scalar dark sector, we prove the compatibility of the two signals. We show, however, that the LHC mono-jet searches and the Fermi LAT measurements strongly limit the viable parameter space. We comment on the possible impact of cosmic antiproton flux measurements by the AMS-02 experiment.

  20. Convergence Insufficiency/Divergence Insufficiency Convergence Excess/Divergence Excess: Some Facts and Fictions.

    Science.gov (United States)

    Khawam, Edward; Abiad, Bachir; Boughannam, Alaa; Saade, Joanna; Alameddine, Ramzi

    2015-01-01

    Great discrepancies are often encountered between the distance fixation and the near-fixation esodeviations and exodeviations. They are all attributed to either anomalies of the AC/A ratio or anomalies of the fusional convergence or divergence amplitudes. We report a case with pseudoconvergence insufficiency and another one with pseudoaccommodative convergence excess. In both cases, conv./div. excess and insufficiency were erroneously attributed to anomalies of the AC/A ratio or to anomalies of the fusional amplitudes. Our purpose is to show that numerous factors, other than anomalies in the AC/A ratio or anomalies in the fusional conv. or divergence amplitudes, can contaminate either the distance or the near deviations. This results in significant discrepancies between the distance and the near deviations despite a normal AC/A ratio and normal fusional amplitudes, leading to erroneous diagnoses and inappropriate treatment models.

  1. Convergence Insufficiency/Divergence Insufficiency Convergence Excess/Divergence Excess: Some Facts and Fictions

    Directory of Open Access Journals (Sweden)

    Edward Khawam

    2015-01-01

    Full Text Available Great discrepancies are often encountered between the distance fixation and the near-fixation esodeviations and exodeviations. They are all attributed to either anomalies of the AC/A ratio or anomalies of the fusional convergence or divergence amplitudes. We report a case with pseudoconvergence insufficiency and another one with pseudoaccommodative convergence excess. In both cases, conv./div. excess and insufficiency were erroneously attributed to anomalies of the AC/A ratio or to anomalies of the fusional amplitudes. Our purpose is to show that numerous factors, other than anomalies in the AC/A ratio or anomalies in the fusional conv. or divergence amplitudes, can contaminate either the distance or the near deviations. This results in significant discrepancies between the distance and the near deviations despite a normal AC/A ratio and normal fusional amplitudes, leading to erroneous diagnoses and inappropriate treatment models.

  2. Spectrum of excess mortality due to carbapenem-resistant Klebsiella pneumoniae infections.

    Science.gov (United States)

    Hauck, C; Cober, E; Richter, S S; Perez, F; Salata, R A; Kalayjian, R C; Watkins, R R; Scalera, N M; Doi, Y; Kaye, K S; Evans, S; Fowler, V G; Bonomo, R A; van Duin, D

    2016-06-01

    Patients infected or colonized with carbapenem-resistant Klebsiella pneumoniae (CRKp) are often chronically and acutely ill, which results in substantial mortality unrelated to infection. Therefore, estimating excess mortality due to CRKp infections is challenging. The Consortium on Resistance against Carbapenems in K. pneumoniae (CRACKLE) is a prospective multicenter study. Here, patients in CRACKLE were evaluated at the time of their first CRKp bloodstream infection (BSI), pneumonia or urinary tract infection (UTI). A control cohort of patients with CRKp urinary colonization without CRKp infection was constructed. Excess hospital mortality was defined as mortality in cases after subtracting mortality in controls. In addition, the adjusted hazard ratios (aHR) for time-to-hospital-mortality at 30 days associated with infection compared with colonization were calculated in Cox proportional hazard models. In the study period, 260 patients with CRKp infections were included in the BSI (90 patients), pneumonia (49 patients) and UTI (121 patients) groups, who were compared with 223 controls. All-cause hospital mortality in controls was 12%. Excess hospital mortality was 27% in both patients with BSI and those with pneumonia. Excess hospital mortality was not observed in patients with UTI. In multivariable analyses, BSI and pneumonia compared with controls were associated with an aHR of 2.59 (95% CI 1.52-4.50). CRKp pneumonia is associated with the highest excess hospital mortality. Patients with BSI have slightly lower excess hospital mortality rates, whereas excess hospital mortality was not observed in hospitalized patients with UTI.
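The excess-mortality definition used here is a straightforward subtraction of control mortality from case mortality. A minimal sketch (the 12% control figure is from the abstract; the 39% case figure is the value implied by the reported 27% excess, shown for illustration):

```python
# Excess hospital mortality = crude mortality among infected cases
# minus mortality among colonized (uninfected) controls.
def excess_mortality(case_rate, control_rate):
    return case_rate - control_rate

control = 0.12    # all-cause hospital mortality in controls (abstract)
bsi_cases = 0.39  # implied case mortality consistent with 27% excess
print(round(excess_mortality(bsi_cases, control), 2))  # 0.27
```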

  3. Submillimeter to centimeter excess emission from the Magellanic Clouds. II. On the nature of the excess

    CERN Document Server

    Bot, Caroline; Paradis, Déborah; Bernard, Jean-Philippe; Lagache, Guilaine; Israel, Frank P; Wall, William F

    2010-01-01

    Dust emission at submm to cm wavelengths is often simply the Rayleigh-Jeans tail of dust particles at thermal equilibrium and is used as a cold mass tracer in various environments including nearby galaxies. However, well-sampled spectral energy distributions of the nearby, star-forming Magellanic Clouds have a pronounced (sub-)millimeter excess (Israel et al., 2010). This study attempts to confirm the existence of such a millimeter excess above expected dust, free-free and synchrotron emission and to explore different possibilities for its origin. We model NIR to radio spectral energy distributions of the Magellanic Clouds with dust, free-free and synchrotron emission. A millimeter excess emission is confirmed above these components and its spectral shape and intensity are analysed in light of different scenarios: very cold dust, Cosmic Microwave Background (CMB) fluctuations, a change of the dust spectral index and spinning dust emission. We show that very cold dust or CMB fluctuations are very unlikely expl...

  4. Child mortality estimation: consistency of under-five mortality rate estimates using full birth histories and summary birth histories.

    Directory of Open Access Journals (Sweden)

    Romesh Silva

    Full Text Available Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but don't add substantial additional insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well.
However, the observed inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations of its strong assumptions about recent mortality

  5. On the incidence of \\textit{WISE} infrared excess among solar analog, twin and sibling stars

    CERN Document Server

    Costa, Antônio D; Leão, Izan C; Lima, José E; da Silva, Danielly Freire; de Freitas, Daniel B; De Medeiros, José R

    2016-01-01

    This study presents a search for IR excess in the 3.4, 4.6, 12 and 22 $\mu$m bands in a sample of 216 targets, composed of solar sibling, twin and analog stars observed by the \textit{WISE} mission. In general, an infrared excess suggests the existence of warm dust around a star. We detected 12 $\mu$m and/or 22 $\mu$m excesses at the 3$\sigma$ level of confidence in five solar analog stars, corresponding to a frequency of 4.1% of the entire sample of solar analogs analyzed, and in one out of 29 solar sibling candidates, confirming previous studies. The estimation of the dust properties shows that the sources with infrared excesses possess circumstellar material with temperatures that, within the uncertainties, are similar to that of the material found in the asteroid belt in our solar system. No photospheric flux excess was identified at the W1 (3.4 $\mu$m) and W2 (4.6 $\mu$m) \textit{WISE} bands, indicating that, in the majority of stars of the present sample, no detectable dust is generated. Interesting...
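A detection criterion of the kind described — flagging sources whose observed flux exceeds the predicted photospheric flux by more than 3σ — can be sketched as follows. The function name and all numbers are illustrative assumptions, not the paper's actual pipeline:

```python
import math

# Significance of an infrared excess: the difference between observed
# and predicted photospheric flux, in units of the combined uncertainty.
def excess_significance(f_obs, f_phot, sigma_obs, sigma_phot):
    sigma = math.hypot(sigma_obs, sigma_phot)  # add errors in quadrature
    return (f_obs - f_phot) / sigma

# Illustrative fluxes (arbitrary units): a ~4-sigma excess candidate.
sig = excess_significance(f_obs=13.0, f_phot=9.0, sigma_obs=0.6, sigma_phot=0.8)
print(sig > 3.0)  # True: flagged as a candidate excess source
```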

  6. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Full Text Available Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  8. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacture (LOM) applications. An improvement in contour accuracy is acquired by the introduction of a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error and thus reduce the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine, and satisfactory results are acquired.

  9. Factors influencing excessive daytime sleepiness in adolescents

    Directory of Open Access Journals (Sweden)

    Thiago de Souza Vilela

    2016-04-01

    Full Text Available Abstract Objective: Sleep deprivation in adolescents has lately become a health issue that tends to increase with higher stress prevalence, extenuating routines, and new technological devices that impair adolescents' bedtime. Therefore, this study aimed to assess the excessive sleepiness frequency and the factors that might be associated to it in this population. Methods: The cross-sectional study analyzed 531 adolescents aged 10-18 years from two private schools and one public school. Five questionnaires were applied: the Cleveland Adolescent Sleepiness Questionnaire; the Sleep Disturbance Scale for Children; the Brazilian Economic Classification Criteria; the General Health and Sexual Maturation Questionnaire; and the Physical Activity Questionnaire. The statistical analyses were based on comparisons between schools and sleepiness and non-sleepiness groups, using linear correlation and logistic regression. Results: Sleep deprivation was present in 39% of the adolescents; sleep deficit was higher in private school adolescents (p < 0.001), and there was a positive correlation between age and sleep deficit (p < 0.001; r = 0.337). Logistic regression showed that older age (p = 0.002; PR: 1.21 [CI: 1.07-1.36]) and higher score level for sleep hyperhidrosis in the sleep disturbance scale (p = 0.02; PR: 1.16 [CI: 1.02-1.32]) were risk factors for worse degree of sleepiness. Conclusions: Sleep deficit appears to be a reality among adolescents; the results suggest a higher prevalence in students from private schools. Sleep deprivation is associated with older age in adolescents and possible presence of sleep disorders, such as sleep hyperhidrosis.

  10. Implication of zinc excess on soil health.

    Science.gov (United States)

    Wyszkowska, Jadwiga; Boros-Lajszner, Edyta; Borowik, Agata; Baćmaga, Małgorzata; Kucharski, Jan; Tomkiel, Monika

    2016-01-01

    This study was undertaken to evaluate zinc's influence on the resistance of organotrophic bacteria, actinomyces, fungi, dehydrogenases, catalase and urease. The experiment was conducted in a greenhouse of the University of Warmia and Mazury (UWM) in Olsztyn, Poland. Each plastic pot was filled with 3 kg of sandy loam with pH(KCl) of 7.0. The experimental variables were: zinc applied to soil at six doses: 100, 300, 600, 1,200, 2,400 and 4,800 mg of Zn(2+) kg(-1) in the form of ZnCl2 (zinc chloride), and species of plant: oat (Avena sativa L.) cv. Chwat and white mustard (Sinapis alba) cv. Rota. Soil without the addition of zinc served as the control. During the growing season, soil samples were subjected to microbiological analyses on experimental days 25 and 50 to determine the abundance of organotrophic bacteria, actinomyces and fungi, and the activity of dehydrogenases, catalase and urease, which provided a basis for determining the soil resistance index (RS). The physicochemical properties of soil were determined after harvest. The results of this study indicate that excessive concentrations of zinc have an adverse impact on microbial growth and the activity of soil enzymes. The resistance of organotrophic bacteria, actinomyces, fungi, dehydrogenases, catalase and urease decreased with an increase in the degree of soil contamination with zinc. Dehydrogenases were most sensitive and urease was least sensitive to soil contamination with zinc. Zinc also exerted an adverse influence on the physicochemical properties of soil and plant development. The growth of oat and white mustard plants was almost completely inhibited in response to the highest zinc doses of 2,400 and 4,800 mg Zn(2+) kg(-1).
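The abstract does not state the formula behind the soil resistance index (RS). A widely used formulation, which may or may not be the one applied in this study, is Orwin and Wardle's (2004) index comparing a disturbed value against its control:

```python
def resistance_index(c0, p0):
    """Orwin & Wardle (2004) resistance index: RS = 1 - 2|D0| / (C0 + |D0|),
    where D0 = C0 - P0, C0 is the control value and P0 the value in the
    disturbed (here, zinc-treated) soil. RS = 1 means fully resistant;
    values approaching -1 mean a strong disturbance effect."""
    d0 = abs(c0 - p0)
    return 1.0 - 2.0 * d0 / (c0 + d0)
```

With this definition, an enzyme activity halved by zinc (P0 = C0/2) yields RS = 1/3, while an unaffected one yields RS = 1, matching the qualitative ordering the abstract reports (dehydrogenases most sensitive, urease least).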

  11. Sub-millimeter to centimeter excess emission from the Magellanic Clouds. I. Global spectral energy distribution

    CERN Document Server

    Israel, F P; Raban, D; Reach, W T; Bot, C; Oonk, J B R; Ysard, N; Bernard, J P

    2010-01-01

    In order to reconstruct the global SEDs of the Magellanic Clouds over eight decades in spectral range, we combined literature flux densities representing the entire LMC and SMC respectively, and complemented these with maps extracted from the WMAP and COBE databases covering the missing 23--90 GHz (13--3.2 mm) range and the poorly sampled 1.25--250 THz (240--1.25 micron) range. We have discovered a pronounced excess of emission from both Magellanic Clouds, but especially the SMC, at millimeter and sub-millimeter wavelengths. We also determined accurate thermal radio fluxes and very low global extinctions for both LMC and SMC. Possible explanations are briefly considered but as long as the nature of the excess emission is unknown, the total dust masses and gas-to-dust ratios of the Magellanic Clouds cannot reliably be determined.

  12. Accurately determining log and bark volumes of saw logs using high-resolution laser scan data

    Science.gov (United States)

    R. Edward Thomas; Neal D. Bennett

    2014-01-01

    Accurately determining the volume of logs and bark is crucial to estimating the total expected value recovery from a log. Knowing the correct size and volume of a log helps to determine which processing method, if any, should be used on a given log. However, applying volume estimation methods consistently can be difficult. Errors in log measurement and oddly shaped...

  13. Accurate Switched-Voltage voltage averaging circuit

    OpenAIRE

    金光, 一幸; 松本, 寛樹

    2006-01-01

    Abstract: This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. Performance of this circuit is verified by PSpice simulations.

  14. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency rendering.

  16. Clinical and polysomnographic characteristics of excessive daytime sleepiness in children.

    Science.gov (United States)

    Lee, Jiwon; Na, Geonyoub; Joo, Eun Yeon; Lee, Munhyang; Lee, Jeehun

    2017-08-18

    This study aimed to delineate the clinical and polysomnography (PSG) characteristics of sleep disorders in children with excessive daytime sleepiness (EDS). Between February 2002 and June 2015, 622 pediatric patients with EDS were evaluated with overnight PSG and the Multiple Sleep Latency Test at the Samsung Medical Center. The medical records; questionnaire responses about depression, sleepiness, and sleep habits; and sleep study data of 133 patients without obstructive sleep apnea (OSA) were reviewed retrospectively. The patients (63 girls, 70 boys) slept for an average of 7 h 30 min and 8 h 44 min on weekdays and weekends, respectively. The mean Epworth Sleepiness Scale score was 11.01 ± 4.09 and did not differ significantly among sleep disorders. Among the 102 patients who completed the depression questionnaire, 53 showed depressive feelings, which were moderate or severe in 39, with no significant differences among specific sleep disorders. Thirty-four patients exhibited normal PSG results. Seventeen of them were found not to have any sleep disorder, and the others were diagnosed with delayed sleep phase disorder (DSPD). Narcolepsy (n = 78) was the most common disorder, followed by DSPD (n = 17) and idiopathic hypersomnia (n = 12). Pediatric patients with EDS had various sleep disorders and some did not have any sleep disorder despite EDS. More than half the patients with EDS showed depressive feelings affecting their daily lives. For pediatric patients with EDS, a systematic diagnostic approach including questionnaires for sleep habits and emotion and PSG is essential for accurate diagnosis and treatment.

  17. Excess Weapons Plutonium Immobilization in Russia

    Energy Technology Data Exchange (ETDEWEB)

    Jardine, L.; Borisov, G.B.

    2000-04-15

    The joint goal of the Russian work is to establish a full-scale plutonium immobilization facility at a Russian industrial site by 2005. To achieve this requires that the necessary engineering and technical basis be developed in these Russian projects and the needed Russian approvals be obtained to conduct industrial-scale immobilization of plutonium-containing materials at a Russian industrial site by the 2005 date. This meeting and future work will provide the basis for joint decisions. Supporting R&D projects are being carried out at Russian Institutes that directly support the technical needs of Russian industrial sites to immobilize plutonium-containing materials. Special R&D on plutonium materials is also being carried out to support excess weapons disposition in Russia and the US, including nonproliferation studies of plutonium recovery from immobilization forms and accelerated radiation damage studies of the US-specified plutonium ceramic for immobilizing plutonium. This intriguing and extraordinary cooperation on certain aspects of the weapons plutonium problem is now progressing well and much work with plutonium has been completed in the past two years. Because much excellent and unique scientific and engineering technical work has now been completed in Russia in many aspects of plutonium immobilization, this meeting in St. Petersburg was both timely and necessary to summarize, review, and discuss these efforts among those who performed the actual work. The results of this meeting will help the US and Russia jointly define the future direction of the Russian plutonium immobilization program, and make it an even stronger and more integrated Russian program. The two objectives for the meeting were to: (1) Bring together the Russian organizations, experts, and managers performing the work into one place for four days to review and discuss their work with each other; and (2) Publish a meeting summary and a proceedings to compile reports of all the excellent

  18. Excess entropy production in quantum system: Quantum master equation approach

    OpenAIRE

    Nakajima, Satoshi; Tokura, Yasuhiro

    2016-01-01

    For open systems described by the quantum master equation (QME), we investigate the excess entropy production under quasistatic operations between nonequilibrium steady states. The average entropy production is composed of the time integral of the instantaneous steady entropy production rate and the excess entropy production. We define average entropy production rate using the average energy and particle currents, which are calculated by using the full counting statistics with QME. The excess...

  19. 46 CFR 154.550 - Excess flow valve: Bypass.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Excess flow valve: Bypass. 154.550 Section 154.550... and Process Piping Systems § 154.550 Excess flow valve: Bypass. If the excess flow valve allowed under § 154.532(b) has a bypass, the bypass must be of 1.0 mm (0.0394 in.) or less in diameter. Cargo Hose...

  20. [Spectroscopy technique and ruminant methane emissions accurate inspecting].

    Science.gov (United States)

    Shang, Zhan-Huan; Guo, Xu-Sheng; Long, Rui-Jun

    2009-03-01

    The increase in atmospheric CH4 concentration directly causes climate change through the radiation process and, by altering many atmospheric chemical processes, causes climate change indirectly as well. The rapid growth of atmospheric methane has gained the attention of governments and scientists. All countries now treat reducing greenhouse gas emissions as an important task in dealing with global climate change, but monitoring of methane concentrations, in particular precision monitoring, is needed to provide a scientific basis for emission reduction measures. So far, CH4 gas emissions of different animal production systems have received extensive research. The methane emissions by ruminants reported in the literature are only estimates: various factors affect methane production in ruminants, many variables are associated with the techniques for measuring methane production, and the techniques developed so far are unable to accurately determine the dynamics of methane emission by ruminants; there is therefore an urgent need to develop an accurate method for this purpose. Currently, spectroscopy is a relatively accurate and reliable approach. Various spectroscopic techniques, such as modified infrared spectroscopy methane measuring systems and laser and near-infrared sensing systems, are able to determine the dynamic methane emission of both domestic and grazing ruminants. Spectroscopy is therefore an important methane measuring technique, and contributes to proposing methane reduction methods.

  1. The tangential velocity excess of the Milky Way satellites

    Science.gov (United States)

    Cautun, Marius; Frenk, Carlos S.

    2017-06-01

    We estimate the systemic orbital kinematics of the Milky Way classical satellites and compare them with predictions from the Λ cold dark matter (ΛCDM) model derived from a semi-analytical galaxy formation model applied to high-resolution cosmological N-body simulations. We find that the Galactic satellite system is atypical of ΛCDM systems. The subset of 10 Galactic satellites with proper motion measurements has a velocity anisotropy, β = -2.2 ± 0.4, which lies in the 2.9 per cent tail of the ΛCDM distribution. Individually, the Milky Way satellites have radial velocities that are lower than expected for their proper motions, with 9 out of the 10 having at most 20 per cent of their orbital kinetic energy invested in radial motion. Such extreme values are expected in only 1.5 per cent of ΛCDM satellites systems. In the standard cosmological model, this tangential motion excess is unrelated to the existence of a Galactic 'disc of satellites'. We present theoretical predictions for larger satellite samples that may become available as more proper motion measurements are obtained.
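The two statistics quoted above can be sketched directly from their standard definitions (assumed here; the paper's exact estimator may differ, e.g. in how velocity dispersions are computed):

```python
def radial_energy_fraction(v_r, v_t):
    """Fraction of orbital kinetic energy invested in radial motion,
    given radial and tangential velocity components."""
    return v_r ** 2 / (v_r ** 2 + v_t ** 2)

def velocity_anisotropy(v_r_list, v_t_list):
    """beta = 1 - sigma_t^2 / (2 * sigma_r^2), with dispersions taken here as
    mean squared speeds; beta < 0 indicates a tangential-motion excess."""
    sr2 = sum(v ** 2 for v in v_r_list) / len(v_r_list)
    st2 = sum(v ** 2 for v in v_t_list) / len(v_t_list)
    return 1.0 - st2 / (2.0 * sr2)
```

Under these definitions, a satellite with at most 20 per cent of its kinetic energy in radial motion has v_r² ≤ v_t²/4, and a system dominated by such orbits drives beta strongly negative, as in the quoted beta = -2.2 ± 0.4.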

  2. Analysis of factors associated with excess weight in school children

    Science.gov (United States)

    Pinto, Renata Paulino; Nunes, Altacílio Aparecido; de Mello, Luane Marques

    2016-01-01

    Abstract Objective: To determine the prevalence of overweight and obesity in schoolchildren aged 10 to 16 years and its association with dietary and behavioral factors. Methods: Cross-sectional study that evaluated 505 adolescents using a structured questionnaire and anthropometric data. The data were analyzed with Student's t-test for independent samples and the Mann-Whitney test to compare means and medians, respectively, and the chi-square test for proportions. The prevalence ratio (PR) and the 95% confidence interval were used to estimate the degree of association between variables. Logistic regression was employed to adjust the estimates for confounding factors. A significance level of 5% was considered for all analyses. Results: Excess weight was observed in 30.9% of the schoolchildren: 18.2% overweight and 12.7% obesity. There was no association between weight alterations and dietary/behavioral habits in the bivariate and multivariate analyses. However, associations were observed in relation to gender. Daily consumption of sweets [PR=0.75 (0.64-0.88)] and soft drinks [PR=0.82 (0.70-0.97)] was less frequent among boys; having lunch daily was slightly more often reported by boys [PR=1.11 (1.02-1.22)]. Physical activity practice (≥3 times/week) was more often mentioned by boys, and the association measures disclosed two-fold more physical activity in this group [PR=2.04 (1.56-2.67)] when compared to girls. Approximately 30% of boys and 40% of girls stated they did not perform activities requiring energy expenditure during free periods, with boys being 32% less idle than girls [PR=0.68 (0.60-0.76)]. Conclusions: A high prevalence of both overweight and obesity was observed, as well as unhealthy habits in the study population, regardless of the presence of weight alterations. Health promotion strategies in schools should be encouraged, in order to promote healthy habits and behaviors among all students. PMID:27321919
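Prevalence ratios with 95% confidence intervals, as quoted above, are conventionally computed with the log-transform method. A sketch with hypothetical counts (the study's raw cell counts are not given in the abstract):

```python
import math

def prevalence_ratio(a, n1, b, n2, z=1.96):
    """PR = (a/n1) / (b/n2) for exposed and unexposed groups, with a
    95% CI from the standard log-transform standard error."""
    p1, p2 = a / n1, b / n2
    pr = p1 / p2
    se_log = math.sqrt((1 - p1) / a + (1 - p2) / b)
    lo = math.exp(math.log(pr) - z * se_log)
    hi = math.exp(math.log(pr) + z * se_log)
    return pr, lo, hi
```

A PR whose interval excludes 1 (like the 2.04 [1.56-2.67] for physical activity) is read as a statistically significant association at the 5% level.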

  3. Premium subsidies for health insurance: excessive coverage vs. adverse selection.

    Science.gov (United States)

    Selden, T M

    1999-12-01

    The tax subsidy for employment-related health insurance can lead to excessive coverage and excessive spending on medical care. Yet, the potential also exists for adverse selection to result in the opposite problem-insufficient coverage and underconsumption of medical care. This paper uses the model of Rothschild and Stiglitz (R-S) to show that a simple linear premium subsidy can correct market failure due to adverse selection. The optimal linear subsidy balances welfare losses from excessive coverage against welfare gains from reduced adverse selection. Indeed, a capped premium subsidy may mitigate adverse selection without creating incentives for excessive coverage.

  4. Excessive erythrocytosis, chronic mountain sickness, and serum cobalt levels.

    Science.gov (United States)

    Jefferson, J Ashley; Escudero, Elizabeth; Hurtado, Maria-Elena; Pando, Jacqueline; Tapia, Rosario; Swenson, Erik R; Prchal, Josef; Schreiner, George F; Schoene, Robert B; Hurtado, Abdias; Johnson, Richard J

    2002-02-01

    In a subset of high-altitude dwellers, the appropriate erythrocytotic response becomes excessive and can result in chronic mountain sickness. We studied men with (study group) and without excessive erythrocytosis (packed-cell volume >65%) living in Cerro de Pasco, Peru (altitude 4300 m), and compared them with controls living in Lima, Peru (at sea-level). Toxic serum cobalt concentrations were detected in 11 of 21 (52%) study participants with excessive erythrocytosis, but were undetectable in high altitude or sea-level controls. In the mining community of Cerro de Pasco, cobalt toxicity might be an important contributor to excessive erythrocytosis.

  5. A Comprehensive Census of Nearby Infrared Excess Stars

    CERN Document Server

    Cotten, Tara H

    2016-01-01

    The conclusion of the WISE mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and exploit all available data for future missions such as JWST. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and AllWISE catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3$\\sigma$ or 5$\\sigma$ significance of excess in the mid- and far-infrared. Through procedures including SED fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false-positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 `Prime' infrared excess stars and $\\geq$1200 `Reserved' stars. The main catalog of infrared excess stars are nearby, b...
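The 3σ/5σ selection described here amounts to comparing the observed flux against the predicted photospheric flux in units of their combined uncertainty. A minimal sketch; the symbol choices are assumptions, not the authors' exact significance definition:

```python
import math

def excess_significance(f_obs, sigma_obs, f_phot, sigma_phot):
    """Significance of an infrared excess: observed flux minus predicted
    photospheric flux, in units of the quadrature-combined uncertainty."""
    return (f_obs - f_phot) / math.hypot(sigma_obs, sigma_phot)
```

A star passes a 3σ cut when `excess_significance(...) > 3`; tightening to 5σ trades completeness for a lower false-positive rate, which is why the catalog distinguishes the two thresholds.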

  6. Elevated influenza-related excess mortality in South African elderly individuals, 1998-2005.

    Science.gov (United States)

    Cohen, Cheryl; Simonsen, Lone; Kang, Jong-Won; Miller, Mark; McAnerney, Jo; Blumberg, Lucille; Schoub, Barry; Madhi, Shabir A; Viboud, Cécile

    2010-12-15

    Although essential to guide control measures, published estimates of influenza-related seasonal mortality for low- and middle-income countries are few. We aimed to compare influenza-related mortality among individuals aged ≥65 years in South Africa and the United States. We estimated influenza-related excess mortality due to all causes, pneumonia and influenza, and other influenza-associated diagnoses from monthly age-specific mortality data for 1998-2005 using a Serfling regression model. We controlled for between-country differences in population age structure and nondemographic factors (baseline mortality and coding practices) by generating age-standardized estimates and by estimating the percentage excess mortality attributable to influenza. Age-standardized excess mortality rates were higher in South Africa than in the United States: 545 versus 133 deaths per 100,000 population for all causes, and likewise for pneumonia and influenza (P=.03). Standardization for nondemographic factors decreased but did not eliminate between-country differences; for example, the mean percentage of winter deaths attributable to influenza was 16% in South Africa and 6% in the United States. For diabetes, age-standardized excess death rates were 4-8-fold greater in South Africa than in the United States, and the percentage increase in winter deaths attributable to influenza was 2-4-fold higher. These data suggest that the impact of seasonal influenza on mortality among elderly individuals may be substantially higher in an African setting, compared with the United States, and highlight the potential for influenza vaccination programs to decrease mortality.
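A Serfling-type model fits a baseline with a trend and annual harmonics to non-epidemic months, and counts deaths above that baseline during epidemic months as excess. A synthetic sketch of the idea (the authors' exact model terms and epidemic-month definition are not given in the abstract):

```python
import numpy as np

def serfling_baseline(deaths, epidemic):
    """Fit baseline = b0 + b1*t + b2*sin(2*pi*t/12) + b3*cos(2*pi*t/12) on
    non-epidemic months only, then evaluate it for every month."""
    t = np.arange(len(deaths), dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
    y = np.asarray(deaths, dtype=float)
    coef, *_ = np.linalg.lstsq(X[~epidemic], y[~epidemic], rcond=None)
    return X @ coef

# Synthetic 4-year monthly series: trend + seasonality + a fixed winter epidemic.
t = np.arange(48)
epidemic = np.isin(t % 12, [11, 0, 1])                  # Dec-Feb "influenza" months
deaths = (1000 + 2.0 * t + 80 * np.cos(2 * np.pi * t / 12)
          + np.where(epidemic, 150.0, 0.0))
excess = (deaths - serfling_baseline(deaths, epidemic))[epidemic]
```

Excluding the epidemic months from the fit is what makes the harmonic baseline an estimate of mortality "in the absence of influenza"; the winter deaths above it are the excess attributed to influenza.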

  7. An Analytic Method for Measuring Accurate Fundamental Frequency Components

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Soon Ryul; Park Jong Keun [Seoul National University, Seoul(Korea); Kang, Sang Hee [Myongji University, Seoul (Korea)

    2002-04-01

    This paper proposes an analytic method for measuring the accurate fundamental frequency component of a fault current signal distorted with a DC-offset, a characteristic frequency component, and harmonics. The proposed algorithm is composed of four stages: sine filter, linear filter, Prony's method, and measurement. The sine filter and the linear filter eliminate harmonics and the fundamental frequency component, respectively. Then Prony's method is used to estimate the parameters of the DC-offset and the characteristic frequency component. Finally, the fundamental frequency component is measured by compensating the sine-filtered signal with the estimated parameters. The performance evaluation of the proposed method is presented for a-phase to ground faults on a 345 kV 200 km overhead transmission line. The EMTP is used to generate fault current signals under different fault locations and fault inception angles. It is shown that the analytic method accurately measures the fundamental frequency component regardless of the characteristic frequency component as well as the DC-offset.(author). 19 refs., 4 figs., 4 tabs.
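The final stage, measuring the fundamental once the disturbances are estimated, can be illustrated with a least-squares fit of the fundamental phasor plus a DC term. This is a deliberate simplification: the paper handles a *decaying* DC offset and a characteristic frequency component via dedicated filters and Prony's method, whereas the sketch below assumes the DC offset is constant over the window:

```python
import numpy as np

def fundamental_phasor(samples, fs, f0=60.0):
    """Least-squares fit of a*cos(w t) + b*sin(w t) + dc to the samples;
    returns (amplitude, phase, dc) of the fundamental component."""
    t = np.arange(len(samples)) / fs
    X = np.column_stack([np.cos(2 * np.pi * f0 * t),
                         np.sin(2 * np.pi * f0 * t),
                         np.ones_like(t)])
    (a, b, dc), *_ = np.linalg.lstsq(X, np.asarray(samples, float), rcond=None)
    return np.hypot(a, b), np.arctan2(-b, a), dc
```

For a signal `A*cos(w t + phi) + DC`, the fit recovers A, phi and DC exactly; the value of the paper's extra machinery is precisely that real fault currents violate the constant-DC assumption made here.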

  8. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Mean Infiltration-Excess Overland Flow, 2002

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean value for infiltration-excess overland flow as estimated by the watershed model TOPMODEL, compiled for every MRB_E2RF1...

  9. Clinical Impact of Antimicrobial Resistance in European Hospitals : Excess Mortality and Length of Hospital Stay Related to Methicillin-Resistant Staphylococcus aureus Bloodstream Infections

    NARCIS (Netherlands)

    de Kraker, Marlieke E. A.; Wolkewitz, Martin; Davey, Peter G.; Grundmann, Hajo

    2011-01-01

    Antimicrobial resistance is threatening the successful management of nosocomial infections worldwide. Despite the therapeutic limitations imposed by methicillin-resistant Staphylococcus aureus (MRSA), its clinical impact is still debated. The objective of this study was to estimate the excess mortality.

  10. Excess enthalpy, density, and speed of sound determination for the ternary mixture (methyl tert-butyl ether + 1-butanol + n-hexane)

    Energy Technology Data Exchange (ETDEWEB)

    Mascato, Eva [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain); Mariano, Alejandra [Laboratorio de Fisicoquimica, Departamento de Quimica, Facultad de Ingenieria, Universidad Nacional del Comahue, 8300 Neuquen (Argentina); Pineiro, Manuel M. [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain)], E-mail: mmpineiro@uvigo.es; Legido, Jose Luis [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain); Paz Andrade, M.I. [Departamento de Fisica Aplicada, Facultade de Fisica, Universidade de Santiago de Compostela, E-15706 Santiago de Compostela (Spain)

    2007-09-15

    Density, ρ, and speed of sound, u, from T = 288.15 K to T = 308.15 K, and excess molar enthalpies, h^E, at T = 298.15 K, have been measured over the entire composition range for (methyl tert-butyl ether + 1-butanol + n-hexane). In addition, excess molar volumes, V^E, and excess isentropic compressibilities, κ_s^E, were calculated from experimental data. Finally, experimental excess enthalpy results are compared with the estimations obtained by applying the group-contribution models of UNIFAC (in the versions of Dang and Tassios, Larsen et al., and Gmehling et al.) and DISQUAC.
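The excess molar volume follows from the measured mixture density and the pure-component molar volumes. A sketch for a binary pair; the property values used in the usage note below are illustrative assumptions, not the paper's data:

```python
def excess_molar_volume(x1, M1, rho1, M2, rho2, rho_mix):
    """V_E = V_m(measured) - (x1*V_1 + x2*V_2), with molar masses M in g/mol
    and densities rho in g/cm^3; the result is in cm^3/mol."""
    x2 = 1.0 - x1
    v_mix = (x1 * M1 + x2 * M2) / rho_mix        # measured molar volume
    v_ideal = x1 * M1 / rho1 + x2 * M2 / rho2    # ideal-mixing molar volume
    return v_mix - v_ideal
```

A mixture denser than the ideal-mixing prediction gives a negative V^E (volume contraction on mixing), and vice versa; the same subtraction pattern against an ideal reference underlies the excess enthalpy and excess compressibility reported above.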

  11. Excess chemical potential of small solutes across water--membrane and water--hexane interfaces

    Science.gov (United States)

    Pohorille, A.; Wilson, M. A.

    1996-01-01

    The excess chemical potentials of five small, structurally related solutes, CH4, CH3F, CH2F2, CHF3, and CF4, across the water-glycerol 1-monooleate bilayer and water-hexane interfaces were calculated at 300, 310, and 340 K using the particle insertion method. The excess chemical potentials of nonpolar molecules (CH4 and CF4) decrease monotonically or nearly monotonically from water to a nonpolar phase. In contrast, for molecules that possess permanent dipole moments (CH3F, CH2F2, and CHF3), the excess chemical potentials exhibit an interfacial minimum that arises from superposition of two monotonically and oppositely changing contributions: electrostatic and nonelectrostatic. The nonelectrostatic term, dominated by the reversible work of creating a cavity that accommodates the solute, decreases, whereas the electrostatic term increases across the interface from water to the membrane interior. In water, the dependence of this term on the dipole moment is accurately described by second order perturbation theory. To achieve the same accuracy at the interface, third order terms must also be included. In the interfacial region, the molecular structure of the solvent influences both the excess chemical potential and solute orientations. The excess chemical potential across the interface increases with temperature, but this effect is rather small. Our analysis indicates that a broad range of small, moderately polar molecules should be surface active at the water-membrane and water-oil interfaces. The biological and medical significance of this result, especially in relation to the mechanism of anesthetic action, is discussed.
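The particle insertion (Widom) method referenced here estimates μ_ex = -kT ln⟨exp(-ΔU/kT)⟩ by averaging the Boltzmann factor of trial insertions over stored solvent configurations. A minimal sketch with a user-supplied insertion-energy callback; the interaction model and configuration handling are left abstract rather than reproducing the authors' setup:

```python
import math
import random

def widom_mu_excess(n_frames, insertion_energy, kT=1.0, n_insert=500, seed=0):
    """Widom particle insertion: mu_ex = -kT * ln< exp(-dU/kT) >, where dU is
    the energy of inserting a test solute at a random position in a stored
    solvent configuration (identified here only by its frame index)."""
    rng = random.Random(seed)
    boltzmann_sum = 0.0
    for frame in range(n_frames):
        for _ in range(n_insert):
            dU = insertion_energy(frame, rng)   # user-supplied interaction model
            boltzmann_sum += math.exp(-dU / kT)
    return -kT * math.log(boltzmann_sum / (n_frames * n_insert))
```

In the ideal-gas limit (ΔU = 0 for every insertion) the estimator returns μ_ex = 0, and a uniform insertion penalty ΔU = c returns exactly c, which is the sanity check usually run before applying the method to a structured interface.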

  12. Estimation of physical parameters in induction motors

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1994-01-01

    Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors.

  13. Accurate colorimetric feedback for RGB LED clusters

    Science.gov (United States)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
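Holding chromaticity to within ±0.003 Δuv implies a distance metric in a uniform chromaticity space. Assuming CIE 1960 (u, v) coordinates (the abstract does not state which convention is meant), the distance between two CIE 1931 (x, y) chromaticities is:

```python
import math

def xy_to_uv(x, y):
    """CIE 1931 (x, y) -> CIE 1960 UCS (u, v)."""
    d = 12 * y - 2 * x + 3
    return 4 * x / d, 6 * y / d

def delta_uv(xy1, xy2):
    """Euclidean chromaticity distance in (u, v) space."""
    u1, v1 = xy_to_uv(*xy1)
    u2, v2 = xy_to_uv(*xy2)
    return math.hypot(u1 - u2, u2 * 0 + v1 - v2)
```

A feedback loop of the kind described would adjust the R, G, B drive currents until `delta_uv(measured, target)` falls below the 0.003 tolerance; for scale, the D65-to-D50 white-point shift is roughly 0.017 in these units.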

  14. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  15. Synthesizing Accurate Floating-Point Formulas

    OpenAIRE

    Ioualalen, Arnault; Martel, Matthieu

    2013-01-01

    Many critical embedded systems perform floating-point computations yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep...

  16. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks while retaining good performance.

  17. Accurate Control of Josephson Phase Qubits

    Science.gov (United States)

    2016-04-14

    K. Kraus, States, Effects, and Operations: Fundamental Notions of Quantum Theory, Lecture Notes in Physics, Vol. 190 (Springer-Verlag)... Physical Review B 68, 224518 (2003). Accurate control of Josephson phase qubits. Matthias Steffen (1,2), John M. Martinis (3), and Isaac L. Chuang (1). (1) Center for Bits and Atoms and Department of Physics, MIT, Cambridge, Massachusetts 02139, USA; (2) Solid State and Photonics Laboratory, Stanford University

  18. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  19. Accurate guitar tuning by cochlear implant musicians.

    Science.gov (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  20. Accurate integration of forced and damped oscillators

    OpenAIRE

    García Alonso, Fernando Luis; Cortés Molina, Mónica; Villacampa, Yolanda; Reyes Perales, José Antonio

    2016-01-01

    The new methods accurately integrate forced and damped oscillators. A family of analytical functions is introduced known as T-functions which are dependent on three parameters. The solution is expressed as a series of T-functions calculating their coefficients by means of recurrences which involve the perturbation function. In the T-functions series method the perturbation parameter is the factor in the local truncation error. Furthermore, this method is zero-stable and convergent. An applica...

  1. How dusty is alpha Centauri? Excess or non-excess over the infrared photospheres of main-sequence stars

    CERN Document Server

    Wiegert, J; Thébault, P; Olofsson, G; Mora, A; Bryden, G; Marshall, J P; Eiroa, C; Montesinos, B; Ardila, D; Augereau, J C; Aran, A Bayo; Danchi, W C; del Burgo, C; Ertel, S; Fridlund, M C W; Hajigholi, M; Krivov, A V; Pilbratt, G L; Roberge, A; White, G J

    2014-01-01

    [Abridged] Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby binary α Centauri (aCen) have higher-than-solar metallicities, which is thought to promote giant planet formation. We aim to determine the level of emission from debris in the aCen system. Having already detected the temperature minimum, Tmin, of aCenA, we here attempt to do so also for the companion aCenB. Using the aCen stars as templates, we study possible effects Tmin may have on the detectability of unresolved dust discs around other stars. We use Herschel and APEX photometry to determine the stellar spectral energy distributions. In addition, we use APEX for spectral line mapping to study the complex background around aCen seen in the photometric images. Models of stellar atmospheres and discs are used to estimate the amount of debris around these stars. For solar-type stars, a fractional dust luminosity f_d ≈ 2e-7 could account for SEDs that do not exhibit the Tmin-effect. Slight excesses ...

  2. A Unified Theory of Rainfall Extremes, Rainfall Excesses, and IDF Curves

    Science.gov (United States)

    Veneziano, D.; Yoon, S.

    2012-04-01

    Extreme rainfall events are a key component of hydrologic risk management and design. Yet, a consistent mathematical theory of such extremes remains elusive. This study aims at laying new statistical foundations for such a theory. The quantities of interest are the distribution of the annual maximum, the distribution of the excess above a high threshold z, and the intensity-duration-frequency (IDF) curves. Traditionally, the modeling of annual maxima and excesses is based on extreme value (EV) and extreme excess (EE) theories. These theories establish that the maximum of n iid variables is attracted as n →∞ to a generalized extreme value (GEV) distribution with a certain index k and the distribution of the excess is attracted as z →∞ to a generalized Pareto distribution with the same index. The empirical value of k tends to decrease as the averaging duration d increases. To a first approximation, the IDF intensities scale with d and the return period T . Explanations for this approximate scaling behavior and theoretical predictions of the scaling exponents have emerged over the past few years. This theoretical work has been largely independent of that on the annual maxima and the excesses. Deviations from exact scaling include a tendency of the IDF curves to converge as d and T increase. To bring conceptual clarity and explain the above observations, we analyze the extremes of stationary multifractal measures, which provide good representations of rainfall within storms. These extremes follow from large deviation theory rather than EV/EE theory. A unified framework emerges that (a) encompasses annual maxima, excesses and IDF values without relying on EV or EE asymptotics, (b) predicts the index k and the IDF scaling exponents, (c) explains the dependence of k on d and the deviations from exact scaling of the IDF curves, and (d) explains why the empirical estimates of k tend to be positive (in the Frechet range) while, based on frequently assumed marginal
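
    As a small numerical illustration of the threshold-excess machinery discussed above (the classical EV/EE route, not the paper's multifractal/large-deviation derivation), a Hill-type estimator recovers a positive tail index k from excesses over a high threshold in synthetic heavy-tailed data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "daily rainfall": 50 years x 365 days.
# numpy's pareto draws Lomax variates with tail exponent 3, i.e. k = 1/3.
daily = rng.pareto(3.0, size=(50, 365)) * 10.0

annual_max = daily.max(axis=1)  # the annual-maximum series used by EV theory

# Hill estimator over the top order statistics: for a Frechet-type tail,
# k ~ mean(log x_i - log u) for observations x_i above a high threshold u.
flat = np.sort(daily.ravel())
top = flat[-500:]
u = flat[-501]
k_hat = float(np.mean(np.log(top) - np.log(u)))
print(k_hat)  # positive, i.e. in the Frechet range
```

    A full IDF analysis would repeat such tail estimation across averaging durations d, which is where the duration dependence of k discussed above enters.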

  3. Trends in the prevalence of excess dietary sodium intake - United States, 2003-2010.

    Science.gov (United States)

    2013-12-20

    Excess sodium intake can lead to hypertension, the primary risk factor for cardiovascular disease, which is the leading cause of U.S. deaths. Monitoring the prevalence of excess sodium intake is essential to provide the evidence for public health interventions and to track reductions in sodium intake, yet few reports exist. Reducing population sodium intake is a national priority, and monitoring the amount of sodium consumed adjusted for energy intake (sodium density or sodium in milligrams divided by calories) has been recommended because a higher sodium intake is generally accompanied by a higher calorie intake from food. To describe the most recent estimates and trends in excess sodium intake, CDC analyzed 2003-2010 data from the National Health and Nutrition Examination Survey (NHANES) of 34,916 participants aged ≥1 year. During 2007-2010, the prevalence of excess sodium intake, defined as intake above the Institute of Medicine tolerable upper intake levels (1,500 mg/day at ages 1-3 years; 1,900 mg at 4-8 years; 2,200 mg at 9-13 years; and 2,300 mg at ≥14 years) (3), ranged by age group from 79.1% to 95.4%. Small declines in the prevalence of excess sodium intake occurred during 2003-2010 in children aged 1-13 years, but not in adolescents or adults. Mean sodium intake declined slightly among persons aged ≥1 year, whereas sodium density did not. Despite slight declines in some groups, the majority of the U.S. population aged ≥1 year consumes excess sodium.
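
    The age-specific screening described above reduces to a comparison of each person's intake against the IOM tolerable upper intake level for their age, plus the sodium-density calculation. The sketch below uses the thresholds quoted in the report; the individual intakes are made up.

```python
# Classify reported sodium intake against the IOM tolerable upper intake
# level for the person's age, and compute sodium density (mg per kcal).

def sodium_upper_limit_mg(age_years):
    """IOM tolerable upper intake levels cited in the report."""
    if age_years < 1:
        raise ValueError("levels defined for ages >= 1 year")
    if age_years <= 3:
        return 1500
    if age_years <= 8:
        return 1900
    if age_years <= 13:
        return 2200
    return 2300


def sodium_density(sodium_mg, energy_kcal):
    """Sodium density: milligrams of sodium per kilocalorie consumed."""
    return sodium_mg / energy_kcal


# Hypothetical individuals: (age in years, sodium mg/day, energy kcal/day)
people = [(35, 3600, 2100), (6, 2300, 1600), (11, 2000, 1800)]
excess = [na > sodium_upper_limit_mg(age) for age, na, kcal in people]
prevalence = 100.0 * sum(excess) / len(excess)
print(prevalence)  # share of this toy sample above their age-specific limit
```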

  4. Accurate finite element modeling of acoustic waves

    Science.gov (United States)

    Idesman, A.; Pham, D.

    2014-07-01

    In the paper we suggest an accurate finite element approach for the modeling of acoustic waves under a suddenly applied load. We consider the standard linear elements and the linear elements with reduced dispersion for the space discretization as well as the explicit central-difference method for time integration. The analytical study of the numerical dispersion shows that the most accurate results can be obtained with the time increments close to the stability limit. However, even in this case and the use of the linear elements with reduced dispersion, mesh refinement leads to divergent numerical results for acoustic waves under a suddenly applied load. This is explained by large spurious high-frequency oscillations. For the quantification and the suppression of spurious oscillations, we have modified and applied a two-stage time-integration technique that includes the stage of basic computations and the filtering stage. This technique allows accurate convergent results at mesh refinement as well as significantly reduces the numerical anisotropy of solutions. We should mention that the approach suggested is very general and can be equally applied to any loading as well as for any space-discretization technique and any explicit or implicit time-integration method.
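
    The explicit central-difference stepping near the stability limit can be illustrated with a minimal 1D analogue (linear elements reduce to this stencil in 1D). This is an illustration only; the authors' two-stage filtering of spurious high-frequency oscillations is not reproduced here.

```python
# Explicit central-difference time integration of the 1D wave equation,
# with the time increment chosen just below the stability (Courant) limit
# dt <= h/c, as the analysis above recommends.
import numpy as np

c, L, n = 1.0, 1.0, 200           # wave speed, domain length, elements
h = L / n
dt = 0.99 * h / c                 # near the stability limit
x = np.linspace(0.0, L, n + 1)

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial pulse
u = u_prev.copy()                           # zero initial velocity
for _ in range(300):
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    u_next = 2.0 * u - u_prev + (c * dt / h) ** 2 * lap
    u_next[0] = u_next[-1] = 0.0            # fixed ends
    u_prev, u = u, u_next

print(float(np.max(np.abs(u))))  # remains bounded => stable stepping
```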

  5. Estimation of Secondary Skin Cancer Risk Due To Electron Contamination in 18-MV LINAC-Based Prostate Radiotherapy

    Directory of Open Access Journals (Sweden)

    Seyed Mostafa Ghavami

    2016-12-01

    Full Text Available Introduction: Accurate estimation of the skin-absorbed dose in external radiation therapy is essential for estimating the probability of secondary carcinogenesis induction. Materials and Methods: Electron contamination in prostate radiotherapy was investigated using Monte Carlo (MC) code calculations. In addition, the field-size dependence of the skin dose was assessed. Excess cancer risk induced by electron contamination was determined for the skin, surface dose, and prostate dose-volume histogram (DVH) using MC calculations and analytical methods. Results: MC calculations indicated that up to 80% of the total electron contamination fluence was produced in the linear accelerator. At 5 mm below the skin surface, the surface dose was estimated at 6%, 13%, 27%, and 38% for 5×5 cm2, 10×10 cm2, 20×20 cm2, and 40×40 cm2 field sizes, respectively. The relative dose at Dmax was calculated at 0.92% and 5.42% of the maximum dose for 5×5 cm2 and 40×40 cm2 field sizes, respectively. Excess absolute skin cancer risk was estimated at 2.96×10−4 (PY)−1 for a total dose of 72 Gy. Differences in prostate and skin DVHs were 1.01% and 1.38%, respectively. Conclusion: According to the results of this study, non-negligible doses are absorbed by the skin from contaminant electrons, which is associated with an excess risk of cancer induction.

  6. Excess of {sup 236}U in the northwest Mediterranean Sea

    Energy Technology Data Exchange (ETDEWEB)

    Chamizo, E., E-mail: echamizo@us.es [Centro Nacional de Aceleradores, Universidad de Sevilla, Consejo Superior de Investigaciones Científicas, Junta de Andalucía, Thomas Alva Edison 7, 41092 Seville (Spain); López-Lora, M., E-mail: mlopezlora@us.es [Centro Nacional de Aceleradores, Universidad de Sevilla, Consejo Superior de Investigaciones Científicas, Junta de Andalucía, Thomas Alva Edison 7, 41092 Seville (Spain); Bressac, M., E-mail: matthieu.bressac@utas.edu.au [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco); Institute for Marine and Antarctic Studies, University of Tasmania, Hobart, TAS (Australia); Levy, I., E-mail: I.N.Levy@iaea.org [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco); Pham, M.K., E-mail: M.Pham@iaea.org [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco)

    2016-09-15

    In this work, we present the first {sup 236}U results in the northwestern Mediterranean. {sup 236}U is studied in a seawater column sampled at DYFAMED (Dynamics of Atmospheric Fluxes in the Mediterranean Sea) station (Ligurian Sea, 43°25′N, 07°52′E). The obtained {sup 236}U/{sup 238}U atom ratios in the dissolved phase, ranging from about 2 × 10{sup −9} at 100 m depth to about 1.5 × 10{sup −9} at 2350 m depth, indicate that anthropogenic {sup 236}U dominates the whole seawater column. The corresponding deep-water column inventory (12.6 ng/m{sup 2} or 32.1 × 10{sup 12} atoms/m{sup 2}) exceeds by a factor of 2.5 the expected one for global fallout at similar latitudes (5 ng/m{sup 2} or 13 × 10{sup 12} atoms/m{sup 2}), evidencing the influence of local or regional {sup 236}U sources in the western Mediterranean basin. On the other hand, the input of {sup 236}U associated with Saharan dust outbreaks is evaluated. Based on the study of atmospheric particles collected in Monaco during different Saharan dust intrusions, an additional annual {sup 236}U deposition of about 0.2 pg/m{sup 2} is estimated. The obtained results in the corresponding suspended solids collected at DYFAMED station indicate that about 64% of that {sup 236}U stays in solution in seawater. Overall, this source accounts for about 0.1% of the {sup 236}U inventory excess observed at DYFAMED station. The influence of the so-called Chernobyl fallout and the radioactive effluents produced by the different nuclear installations located in the Mediterranean basin might explain the inventory gap; however, further studies are necessary to come to a conclusion about its origin. - Highlights: • First {sup 236}U results in the northwest Mediterranean Sea are reported. • Anthropogenic {sup 236}U dominates the whole seawater column at DYFAMED station. • {sup 236}U deep-water column inventory exceeds by a factor of 2.5 the global fallout one. • Saharan dust intrusions are responsible for an annual
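
    The quoted factor-of-2.5 excess follows directly from the inventory figures above; as an arithmetic check:

```python
# Arithmetic check of the inventory figures quoted above.
measured_ng = 12.6        # ng/m^2, deep-water column inventory at DYFAMED
fallout_ng = 5.0          # ng/m^2, expected from global fallout
measured_at = 32.1e12     # atoms/m^2, same inventory in atoms
fallout_at = 13e12        # atoms/m^2, global-fallout expectation

ratio_mass = measured_ng / fallout_ng
ratio_atoms = measured_at / fallout_at
print(ratio_mass, ratio_atoms)  # both close to the stated factor of 2.5
```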

  7. Analysis of Empirical Software Effort Estimation Models

    CERN Document Server

    Basha, Saleem

    2010-01-01

    Reliable effort estimation remains an ongoing challenge for software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects, and generalization from such limited experience is an inherently under-constrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction and, as the term indicates, a prediction never becomes an actual. This work follows the basics of empirical software effort estimation models. The goal of this paper is to study empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...

  8. Criminal Liability of Managers for Excessive Risk-Taking?

    NARCIS (Netherlands)

    Tosza, S.T.

    2016-01-01

    The aim of the thesis was to analyse and evaluate the criminalisation of excessively risky decisions taken by managers of limited liability companies. The potentially disastrous consequences of excessive risk-taking were powerfully highlighted by the most recent financial crunch, although its

  9. Conversion of Excess Coal Gas to Dimethyl Ether in Steel Works

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    With the technical progress of the metallurgical industry, more excess gas will be produced in steel works. The feasibility of producing dimethyl ether (DME) by gas synthesis was discussed, focusing on marketing, energy balance, process design, economic evaluation, and environmental protection. DME was considered to be a new way to utilize excess coal gas in steel works.

  10. Aerophagia : Excessive Air Swallowing Demonstrated by Esophageal Impedance Monitoring

    NARCIS (Netherlands)

    Hemmink, Gerrit J. M.; Weusten, Bas L. A. M.; Bredenoord, Albert J.; Timmer, Robin; Smout, Andre J. P. M.

    2009-01-01

    BACKGROUND & AIMS: Patients with aerophagia suffer from the presence of an excessive volume of intestinal gas, which is thought to result from excessive air ingestion. However, this has not been shown thus far. The aim of this study was therefore to assess swallowing and air swallowing frequencies i

  11. Teachers' Knowledge of Anxiety and Identification of Excessive Anxiety in

    Science.gov (United States)

    Headley, Clea; Campbell, Marilyn A.

    2013-01-01

    This study examined primary school teachers' knowledge of anxiety and excessive anxiety symptoms in children. Three hundred and fifteen primary school teachers completed a questionnaire exploring their definitions of anxiety and the indications they associated with excessive anxiety in primary school children. Results showed that teachers had an…

  12. On Infrared Excesses Associated With Li-Rich K Giants

    CERN Document Server

    Rebull, Luisa M; Gibbs, John C; Deeb, J Elin; Larsen, Estefania; Black, David V; Altepeter, Shailyn; Bucksbee, Ethan; Cashen, Sarah; Clarke, Matthew; Datta, Ashwin; Hodgson, Emily; Lince, Megan

    2015-01-01

    Infrared (IR) excesses around K-type red giants (RGs) have previously been discovered using IRAS data, and past studies have suggested a link between RGs with overabundant Li and IR excesses, implying the ejection of circumstellar shells or disks. We revisit the question of IR excesses around RGs using higher spatial resolution IR data, primarily from WISE. Our goal was to elucidate the link between three unusual RG properties: fast rotation, enriched Li, and IR excess. We have 316 targets thought to be K giants, about 40% of which we take to be Li-rich. In 24 cases with previous detections of IR excess at low spatial resolution, we believe that source confusion is playing a role, in that either (a) the source that is bright in the optical is not responsible for the IR flux, or (b) there is more than one source responsible for the IR flux as measured in IRAS. We looked for IR excesses in the remaining sources, identifying 28 that have significant IR excesses by ~20 um (with possible excesses for 2 additional ...

  13. Hydration of proteins: excess partial volumes of water and proteins.

    Science.gov (United States)

    Sirotkin, Vladimir A; Komissarov, Igor A; Khadiullina, Aigul V

    2012-04-05

    High precision densitometry was applied to study the hydration of proteins. The hydration process was analyzed by the simultaneous monitoring of the excess partial volumes of water and the proteins in the entire range of water content. Five unrelated proteins (lysozyme, chymotrypsinogen A, ovalbumin, human serum albumin, and β-lactoglobulin) were used as models. The obtained data were compared with the excess partial enthalpies of water and the proteins. It was shown that the excess partial quantities are very sensitive to the changes in the state of water and proteins. At the lowest water weight fractions (w(1)), the changes of the excess functions can mainly be attributed to water addition. A transition from the glassy to the flexible state of the proteins is accompanied by significant changes in the excess partial quantities of water and the proteins. This transition appears at a water weight fraction of 0.06 when charged groups of proteins are covered. Excess partial quantities reach their fully hydrated values at w(1) > 0.5 when coverage of both polar and weakly interacting surface elements is complete. At the highest water contents, water addition has no significant effect on the excess quantities. At w(1) > 0.5, changes in the excess functions can solely be attributed to changes in the state of the proteins.

  14. Criminal Liability of Managers for Excessive Risk-Taking?

    NARCIS (Netherlands)

    Tosza, S.T.

    2016-01-01

    The aim of the thesis was to analyse and evaluate the criminalisation of excessively risky decisions taken by managers of limited liability companies. The potentially disastrous consequences of excessive risk-taking were powerfully highlighted by the most recent financial crunch, although its danger

  16. 12 CFR 740.3 - Advertising of excess insurance.

    Science.gov (United States)

    2010-01-01

    ACCURACY OF ADVERTISING AND NOTICE OF INSURED STATUS, § 740.3 Advertising of excess insurance. Any advertising that mentions share or savings account insurance provided by a party other than the NCUA...

  17. 24 μm excesses of hot WDs - Evidence of dust disks?

    Energy Technology Data Exchange (ETDEWEB)

    Bilikova, Jana; Chu, Y-H; Gruendl, Robert [Astronomy Department, University of Illinois, 1002 W. Green St., Urbana, IL 61801 (United States); Su, Kate [Steward Observatory, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States); Rauch, Thomas [Institute for Astronomy and Astrophysics, Kepler Center for Astro and Particle Physics, Eberhard Karls University, Tuebingen (Germany); Marco, Orsola De [American Museum of Natural History, Department of Astrophysics, Central Park West at 79th St., New York, NY 10024 (United States); Volk, Kevin, E-mail: jbiliko2@astro.uiuc.ed [Gemini Observatory, Northern Operations Center, 670 N. A'ohoku Place, Hilo, HI 96720 (United States)

    2009-06-01

    Spitzer Space Telescope observations of the Helix Nebula's hot (T{sub eff} ≈ 110,000 K) central star revealed mid-IR excess emission consistent with continuum emission from a dust disk located at 35-150 AU from the central white dwarf (WD), and the dust is most likely produced by collisions among Kuiper Belt-like objects (Su et al. 2007). To determine how common such dust disks are, we have carried out a Spitzer 24 μm survey of 72 hot WDs, and detected at least 7 WDs that exhibit clear IR excess, all of them still surrounded by planetary nebulae (PNe). Inspired by the prevalence of the PN environment for hot WDs showing IR excesses, we have surveyed the Spitzer archive for more central stars of PNe (CSPNs) with IR excesses; the search yields four cases in which CSPNs show excesses at 3.6-8.0 μm, and one additional case of 24 μm excess. We present the results of these two searches for dust-disk candidates, and discuss scenarios other than KBO collisions that need to be considered in explaining the observed near- and/or mid-IR excess emission. These scenarios include unresolved companions, binary post-AGB evolution, and unresolved compact nebulosity. We describe planned follow-up observations aiming to help us distinguish between different origins of the observed IR excesses.

  18. 26 CFR 1.162-8 - Treatment of excessive compensation.

    Science.gov (United States)

    2010-04-01

    § 1.162-8 Treatment of excessive compensation. The income tax liability of the recipient in respect of an amount ostensibly paid to him as compensation, but not allowed to be deducted as such by the payor,...

  19. 30 CFR 75.323 - Actions for excessive methane.

    Science.gov (United States)

    2010-07-01

    § 75.323 Actions for excessive methane. (a) Location of tests. Tests for methane concentrations under this section shall be made... (1) When 1.0 percent or more methane is present in a working place or an intake air course,...

  20. 41 CFR 101-27.103 - Acquisition of excess property.

    Science.gov (United States)

    2010-07-01

    Stock Replenishment, § 101-27.103 Acquisition of excess property. Except for inventories...

  1. A Practical Approach For Excess Bandwidth Distribution for EPONs

    KAUST Repository

    Elrasad, Amr

    2014-03-09

    This paper introduces a novel approach called Delayed Excess Scheduling (DES), which practically reuses the excess bandwidth in EPON systems. DES is suitable for industrial deployment as it requires no timing constraints and achieves better performance than previously reported schemes.

  2. The Role of Alcohol Advertising in Excessive and Hazardous Drinking.

    Science.gov (United States)

    Atkin, Charles K.; And Others

    1983-01-01

    Examined the influence of advertising on excessive and dangerous drinking in a survey of 1,200 adolescents and young adults who were shown advertisements depicting excessive consumption themes. Results indicated that advertising stimulates consumption levels, which leads to heavy drinking and drinking in dangerous situations. (JAC)

  3. ON INFRARED EXCESSES ASSOCIATED WITH Li-RICH K GIANTS

    Energy Technology Data Exchange (ETDEWEB)

    Rebull, Luisa M. [Spitzer Science Center (SSC) and Infrared Science Archive (IRSA), Infrared Processing and Analysis Center - IPAC, 1200 E. California Blvd., California Institute of Technology, Pasadena, CA 91125 (United States); Carlberg, Joleen K. [NASA Goddard Space Flight Center, Code 667, Greenbelt, MD 20771 (United States); Gibbs, John C.; Cashen, Sarah; Datta, Ashwin; Hodgson, Emily; Lince, Megan [Glencoe High School, 2700 NW Glencoe Rd., Hillsboro, OR 97124 (United States); Deeb, J. Elin [Bear Creek High School, 9800 W. Dartmouth Pl., Lakewood, CO 80227 (United States); Larsen, Estefania; Altepeter, Shailyn; Bucksbee, Ethan; Clarke, Matthew [Millard South High School, 14905 Q St., Omaha, NE 68137 (United States); Black, David V., E-mail: rebull@ipac.caltech.edu [Walden School of Liberal Arts, 4230 N. University Ave., Provo, UT 84604 (United States)

    2015-10-15

    Infrared (IR) excesses around K-type red giants (RGs) have previously been discovered using Infrared Astronomy Satellite (IRAS) data, and past studies have suggested a link between RGs with overabundant Li and IR excesses, implying the ejection of circumstellar shells or disks. We revisit the question of IR excesses around RGs using higher spatial resolution IR data, primarily from the Wide-field Infrared Survey Explorer. Our goal was to elucidate the link between three unusual RG properties: fast rotation, enriched Li, and IR excess. Our sample of RGs includes those with previous IR detections, a sample with well-defined rotation and Li abundance measurements with no previous IR measurements, and a large sample of RGs asserted to be Li-rich in the literature; we have 316 targets thought to be K giants, about 40% of which we take to be Li-rich. In 24 cases with previous detections of IR excess at low spatial resolution, we believe that source confusion is playing a role, in that either (a) the source that is bright in the optical is not responsible for the IR flux, or (b) there is more than one source responsible for the IR flux as measured in IRAS. We looked for IR excesses in the remaining sources, identifying 28 that have significant IR excesses by ∼20 μm (with possible excesses for 2 additional sources). There appears to be an intriguing correlation in that the largest IR excesses are all in Li-rich K giants, though very few Li-rich K giants have IR excesses (large or small). These largest IR excesses also tend to be found in the fastest rotators. There is no correlation of IR excess with the carbon isotopic ratio, {sup 12}C/{sup 13}C. IR excesses by 20 μm, though relatively rare, are at least twice as common among our sample of Li-rich K giants. If dust shell production is a common by-product of Li enrichment mechanisms, these observations suggest that the IR excess stage is very short-lived, which is supported by theoretical calculations. Conversely, the

  4. On Infrared Excesses Associated with Li-Rich K Giants

    Science.gov (United States)

    Rebull, Luisa M.; Carlberg, Joleen K.; Gibbs, John C.; Deeb, J. Elin; Larsen, Estefania; Black, David V.; Altepeter, Shailyn; Bucksbee, Ethan; Cashen, Sarah; Clarke, Matthew; Datta, Ashwin; Hodgson, Emily; Lince, Megan

    2015-01-01

    Infrared (IR) excesses around K-type red giants (RGs) have previously been discovered using Infrared Astronomy Satellite (IRAS) data, and past studies have suggested a link between RGs with overabundant lithium and IR excesses, implying the ejection of circumstellar shells or disks. We revisit the question of IR excesses around RGs using higher spatial resolution IR data, primarily from the Wide-field Infrared Survey Explorer. Our goal was to elucidate the link between three unusual RG properties: fast rotation, enriched lithium, and IR excess. Our sample of RGs includes those with previous IR detections, a sample with well-defined rotation and lithium abundance measurements with no previous IR measurements, and a large sample of RGs asserted to be lithium-rich in the literature; we have 316 targets thought to be K giants, about 40% of which we take to be Li-rich. In 24 cases with previous detections of IR excess at low spatial resolution, we believe that source confusion is playing a role, in that either (a) the source that is bright in the optical is not responsible for the IR flux, or (b) there is more than one source responsible for the IR flux as measured in IRAS. We looked for IR excesses in the remaining sources, identifying 28 that have significant IR excesses by approximately 20 micrometers (with possible excesses for 2 additional sources). There appears to be an intriguing correlation in that the largest IR excesses are all in Li-rich K giants, though very few lithium-rich K giants have IR excesses (large or small). These largest IR excesses also tend to be found in the fastest rotators. There is no correlation of IR excess with the carbon isotopic ratio, 12C/13C. IR excesses by 20 micrometers, though relatively rare, are at least twice as common among our sample of lithium-rich K giants. 
If dust shell production is a common by-product of Li enrichment mechanisms, these observations suggest that the IR excess stage is very short-lived, which is supported

  5. Mechanisms linking excess adiposity and carcinogenesis promotion

    Directory of Open Access Journals (Sweden)

    Ana I. Pérez-Hernández

    2014-05-01

    Full Text Available Obesity constitutes one of the most important metabolic diseases, being associated with insulin resistance development and increased cardiovascular risk. The association between obesity and cancer has also been well established for several tumor types, such as breast cancer in postmenopausal women, colorectal cancer, and prostate cancer. Cancer is the first cause of death in developed countries and the second in developing countries, with high incidence rates around the world. Furthermore, it has been estimated that 15-20% of all cancer deaths may be attributable to obesity. Tumor growth is regulated by interactions between tumor cells and their tissue microenvironment. In this sense, obesity may lead to cancer development through dysfunctional adipose tissue and altered signaling pathways. In this review, three main pathways relating obesity to cancer development are examined: (i) inflammatory changes leading to macrophage polarization and an altered adipokine profile; (ii) insulin resistance development; and (iii) adipose tissue hypoxia. Since obesity and cancer present a high prevalence, the association between these conditions is of great public health significance, and studies showing the mechanisms by which obesity leads to cancer development and progression are needed to improve the prevention and management of these diseases.

  6. Heterotrisomy recurrence risk: a practical maternal age-dependent approach for excess trisomy 21 risk calculation after a previous autosomal trisomy.

    Science.gov (United States)

    Grande, Maribel; Stergiotou, Iosifina; Borobio, Virginia; Sabrià, Joan; Soler, Anna; Borrell, Antoni

    2017-07-01

    A new maternal age-dependent method to estimate absolute excess risks of trisomy 21, either after a previous trisomy 21 (homotrisomy) or after another trisomy (heterotrisomy), is proposed; the excess is to be added to the risk estimated by conventional screening methods. The excess risk at term for a subsequent trisomy 21 was calculated from the midtrimester risks reported by Morris et al., decreasing from 0.49% at age 20 to 0.01% at age 46 at the index pregnancy. The excess risk after a previous uncommon trisomy was derived from data reported by Warburton et al., decreasing from 0.37% at age 20 to 0.01% at age 50.
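The two age-dependent excess-risk curves can be sketched as a simple linear interpolation between the endpoints quoted above (an illustrative assumption; the paper derives the full curves from the Morris et al. and Warburton et al. data):

```python
def excess_risk(age, age_lo, risk_lo, age_hi, risk_hi):
    """Linearly interpolate the excess trisomy 21 risk (%) between two reported ages."""
    age = max(age_lo, min(age, age_hi))  # clamp to the reported age range
    frac = (age - age_lo) / (age_hi - age_lo)
    return risk_lo + frac * (risk_hi - risk_lo)

def adjusted_risk(screening_risk_pct, age, previous="homotrisomy"):
    """Add the maternal-age-dependent excess risk to the conventional screening risk (%)."""
    if previous == "homotrisomy":
        # previous trisomy 21: 0.49% at age 20 -> 0.01% at age 46
        extra = excess_risk(age, 20, 0.49, 46, 0.01)
    else:
        # previous uncommon trisomy: 0.37% at age 20 -> 0.01% at age 50
        extra = excess_risk(age, 20, 0.37, 50, 0.01)
    return screening_risk_pct + extra
```

The function names and the linear shape are assumptions for illustration only; the endpoint values are those reported in the abstract.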

  7. Cardioprotective aspirin users and their excess risk of upper gastrointestinal complications.

    Science.gov (United States)

    Hernández-Díaz, Sonia; García Rodríguez, Luis A

    2006-09-20

    To balance the cardiovascular benefits from low-dose aspirin against the gastrointestinal harm caused, studies have considered the coronary heart disease risk for each individual but not their gastrointestinal risk profile. We characterized the gastrointestinal risk profile of low-dose aspirin users in real clinical practice, and estimated the excess risk of upper gastrointestinal complications attributable to aspirin among patients with different gastrointestinal risk profiles. To characterize aspirin users in terms of major gastrointestinal risk factors (i.e., advanced age, male sex, prior ulcer history and use of non-steroidal anti-inflammatory drugs), we used The General Practice Research Database in the United Kingdom and the Base de Datos para la Investigación Farmacoepidemiológica en Atención Primaria in Spain. To estimate the baseline risk of upper gastrointestinal complications according to major gastrointestinal risk factors and the excess risk attributable to aspirin within levels of these factors, we used previously published meta-analyses on both absolute and relative risks of upper gastrointestinal complications. Over 60% of aspirin users are above 60 years of age, 4 to 6% have a recent history of peptic ulcers and over 13% use other non-steroidal anti-inflammatory drugs. The estimated average excess risk of upper gastrointestinal complications attributable to aspirin is around 5 extra cases per 1,000 aspirin users per year. However, the excess risk varies in parallel to the underlying gastrointestinal risk and might be above 10 extra cases per 1,000 person-years in over 10% of aspirin users. In addition to the cardiovascular risk, the underlying gastrointestinal risk factors have to be considered when balancing harms and benefits of aspirin use for an individual patient. The gastrointestinal harms may offset the cardiovascular benefits in certain groups of patients where the gastrointestinal risk is high and the cardiovascular risk is low.
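The arithmetic behind an "excess risk attributable to aspirin" is a difference of absolute risks; under a constant relative risk it can be sketched as follows (the baseline and RR values below are assumptions for illustration, not figures from the study):

```python
def excess_cases_per_1000(baseline_per_1000, relative_risk):
    """Excess upper-GI complications attributable to aspirin, per 1,000
    users per year: baseline rate times (RR - 1)."""
    return baseline_per_1000 * (relative_risk - 1.0)

# Illustrative (assumed) numbers: a baseline of 2.5 cases per 1,000
# person-years combined with RR = 3 gives 5 extra cases per 1,000 per
# year, the order of magnitude quoted in the abstract; a high-risk
# subgroup with a baseline above ~5 per 1,000 would exceed 10 extra cases.
average_excess = excess_cases_per_1000(2.5, 3.0)
high_risk_excess = excess_cases_per_1000(5.5, 3.0)
```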

  8. Advances in Derivative-Free State Estimation for Nonlinear Systems

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Poulsen, Niels Kjølstad; Ravn, Ole

    In this paper we show that there are considerable advantages to using polynomial approximations obtained with an interpolation formula for the derivation of state estimators for nonlinear systems. The estimators become more accurate than estimators based on Taylor approximations, and yet...
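The accuracy advantage of interpolation-based (central-difference) approximations over first-order Taylor expansions, which these derivative-free estimators exploit in multivariate form, can be illustrated with a one-dimensional sketch (not the paper's actual filter):

```python
import math

def forward_diff(f, x, h):
    """One-sided, first-order (Taylor-style) approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Interpolation-based central-difference approximation, second-order accurate."""
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-3
err_fwd = abs(forward_diff(math.sin, x, h) - math.cos(x))
err_ctr = abs(central_diff(math.sin, x, h) - math.cos(x))
# the central-difference error is several orders of magnitude smaller
```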

  9. Laser photogrammetry improves size and demographic estimates for whale sharks

    National Research Council Canada - National Science Library

    Rohner, Christoph A; Richardson, Anthony J; Prebble, Clare E M; Marshall, Andrea D; Bennett, Michael B; Weeks, Scarla J; Cliff, Geremy; Wintner, Sabine P; Pierce, Simon J

    2015-01-01

    .... We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters...

  10. Warm Dust around Cool Stars: Field M Dwarfs with WISE 12 or 22 Micron Excess Emission

    CERN Document Server

    Theissen, Christopher A

    2014-01-01

    Using the SDSS DR7 spectroscopic catalog, we searched the WISE AllWISE catalog to investigate the occurrence of warm dust, as inferred from IR excesses, around field M dwarfs (dMs). We developed SDSS/WISE color selection criteria to identify 175 dMs (from 70,841) that show IR flux greater than typical dM photosphere levels at 12 and/or 22 $\\mu$m, including seven new stars within the Orion OB1 footprint. We characterize the dust populations inferred from each IR excess, and investigate the possibility that these excesses could arise from ultracool binary companions by modeling combined SEDs. Our observed IR fluxes are greater than levels expected from ultracool companions ($>3\\sigma$). We also estimate that the probability the observed IR excesses are due to chance alignments with extragalactic sources is $<$ 0.1%. Using SDSS spectra we measure surface gravity dependent features (K, Na, and CaH 3), and find $<$ 15% of our sample indicate low surface gravities. Examining tracers of youth (H$\\alpha$, UV fl...

  11. Accurate measurement of unsteady state fluid temperature

    Science.gov (United States)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning, the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertial thermometer model. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperatures is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
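Treating the thermometer as a first-order inertia device, the fluid temperature can be recovered from the indicated temperature as T_fluid ≈ T_ind + τ·dT_ind/dt. A minimal sketch, assuming the time constant τ is known and the readings are noise-free (real data would need the second-order model or the inverse marching method described above):

```python
import math

def corrected_temperature(times, readings, tau):
    """Recover the fluid temperature from a first-order-inertia thermometer:
    T_fluid(t) ~= T_ind(t) + tau * dT_ind/dt, using a central-difference
    derivative at the interior sample points."""
    out = []
    for i in range(1, len(times) - 1):
        dTdt = (readings[i + 1] - readings[i - 1]) / (times[i + 1] - times[i - 1])
        out.append(readings[i] + tau * dTdt)
    return out

# Synthetic check: a tau = 2 s thermometer plunged from 20 degC into 100 degC water
tau, T0, Tf = 2.0, 20.0, 100.0
times = [0.1 * k for k in range(100)]
readings = [Tf + (T0 - Tf) * math.exp(-t / tau) for t in times]
est = corrected_temperature(times, readings, tau)
# est recovers ~100 degC long before the raw reading gets there
```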

  12. Which estimation is more accurate? A technical comment on the Nature paper by Liu et al. on the overestimation of China's emissions

    Institute of Scientific and Technical Information of China (English)

    滕飞; 朱松丽

    2015-01-01

    From the perspectives of greenhouse gas inventory estimation methods, data, and uncertainties, this comment analyzes the main conclusions and arguments of the paper "Reduced carbon emission estimates from fossil fuel combustion and cement production in China" published in Nature in August 2015 by Liu Zhu et al. It points out errors in that paper's calculations and comparisons, and concludes that the paper's claim that China's national greenhouse gas inventory overestimates China's emissions does not hold.

  13. New law requires 'medically accurate' lesson plans.

    Science.gov (United States)

    1999-09-17

    The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, which believes it discredits abstinence-only material.

  14. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality[1]. In recent years, a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improvements in the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease.

  15. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  16. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm is proposed that records the evolutionary direction dynamically during evolution. After evolution, the precision of the solutions can be greatly improved by local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is worthwhile in cases that demand high solution precision.

  17. 3D maps of the local ISM from inversion of individual color excess measurements

    CERN Document Server

    Lallement, Rosine; Valette, Bernard; Puspitarini, Lucky; Eyer, Laurent; Casagrande, Luca

    2013-01-01

    Three-dimensional (3D) maps of the Galactic interstellar matter (ISM) are a potential tool of wide use; however, accurate and detailed maps are still lacking. One of the ways to construct such maps is to invert individual distance-limited ISM measurements, a method we have applied here to measurements of stellar color excess in the optical. We have assembled color excess data together with the associated parallax or photometric distances to constitute a catalog of ~23,000 sightlines for stars within 2.5 kpc. The photometric data are taken from Stromgren catalogs, the Geneva photometric database, and the Geneva-Copenhagen survey. We also included extinctions derived towards open clusters. We applied to this color excess dataset an inversion method based on a regularized Bayesian approach, previously used for mapping at closer distances. We show the dust spatial distribution resulting from the inversion by means of planar cuts through the differential opacity 3D distribution, and by means of 2D maps of the int...

  18. Diagnostic accuracy of the defining characteristics of the excessive fluid volume diagnosis in hemodialysis patients

    Directory of Open Access Journals (Sweden)

    Maria Isabel da Conceição Dias Fernandes

    2015-12-01

    Full Text Available Objective: to evaluate the accuracy of the defining characteristics of the excess fluid volume nursing diagnosis of NANDA International in patients undergoing hemodialysis. Method: this was a study of diagnostic accuracy, with a cross-sectional design, performed in two stages. The first, involving 100 patients from a dialysis clinic and a university hospital in northeastern Brazil, investigated the presence and absence of the defining characteristics of excess fluid volume. In the second stage, these characteristics were evaluated by diagnostic nurses, who judged the presence or absence of the diagnosis. To analyze accuracy, the sensitivity, specificity, and positive and negative predictive values were calculated. Approval was given by the Research Ethics Committee under authorization No. 148.428. Results: the most sensitive indicator was edema, and the most specific were pulmonary congestion, adventitious breath sounds, and restlessness. Conclusion: the most accurate defining characteristics, considered valid for the diagnostic inference of excess fluid volume in patients undergoing hemodialysis, were edema, pulmonary congestion, adventitious breath sounds, and restlessness. Thus, in their presence, the nurse may safely assume the presence of the diagnosis studied.
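The accuracy measures used in the second stage can all be computed from a standard 2×2 table of (defining characteristic) × (diagnosis). A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values for one defining
    characteristic judged against the diagnosis (2x2 table counts)."""
    return {
        "sensitivity": tp / (tp + fn),  # P(sign present | diagnosis present)
        "specificity": tn / (tn + fp),  # P(sign absent  | diagnosis absent)
        "ppv": tp / (tp + fp),          # P(diagnosis present | sign present)
        "npv": tn / (tn + fn),          # P(diagnosis absent  | sign absent)
    }

# Hypothetical counts for "edema" in 100 patients (invented for illustration)
m = diagnostic_accuracy(tp=45, fp=10, fn=5, tn=40)
```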

  19. The e-index, complementing the h-index for excess citations.

    Directory of Open Access Journals (Sweden)

    Chun-Ting Zhang

    Full Text Available BACKGROUND: The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that, beyond the h² citations for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index. METHODOLOGY/PRINCIPAL FINDINGS: To solve these problems, I here propose the e-index, where e² represents the ignored excess citations, in addition to the h² citations for h-core papers. Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as a and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers. CONCLUSIONS/SIGNIFICANCE: Although simple, the e-index is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index.
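The relationship e² = (citations in the h-core) − h² can be sketched directly (a toy illustration with invented citation lists, not the paper's code):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:
            h = i
    return h

def e_index(citations):
    """e, where e^2 is the excess citations of the h-core beyond the h^2
    accounted for by the h-index."""
    cs = sorted(citations, reverse=True)
    h = h_index(citations)
    return (sum(cs[:h]) - h * h) ** 0.5

# Two authors with identical h = 3 but very different excess citations:
a = [10, 9, 8, 1]   # h-core citations = 27, so e^2 = 27 - 9 = 18
b = [3, 3, 3, 1]    # h-core citations = 9,  so e^2 = 9 - 9 = 0
```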

  20. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    OpenAIRE

    Zhanshan Wang; Longhu Quan; Xiuchong Liu

    2014-01-01

    The control of a high-performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method of accurately estimating the rotor position by using both motor complex-number-model-based position estimation and a position estimation error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rot...

  1. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.

    2001-07-15

    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41±0.17 μm and a measurement of 1.58±0.08 μm from a scanning electron microscope image of a cross section.

  2. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  3. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  4. Accurate taxonomic assignment of short pyrosequencing reads.

    Science.gov (United States)

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy-dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
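A toy version of the node-selection rule: instead of the LCA of all matching sequences, pick the taxonomy node whose leaf set maximizes combined precision and recall (here their harmonic mean, F1). The taxonomy and read below are invented for illustration, and this brute-force scan ignores the paper's suffix-array machinery:

```python
def best_node(taxonomy, matches):
    """Map a read to the node with the best F1 between the node's leaf set
    and the set of sequences matching the read."""
    best, best_f1 = None, -1.0
    for node, leaves in taxonomy.items():
        tp = len(matches & leaves)
        if tp == 0:
            continue
        precision = tp / len(leaves)
        recall = tp / len(matches)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best, best_f1 = node, f1
    return best

# Invented three-node taxonomy: node -> leaf sequences below it
taxonomy = {
    "root":   {f"s{i}" for i in range(1, 11)},  # 10 leaves
    "cladeA": {"s1", "s2", "s3"},
    "genus1": {"s1", "s2"},
}
matches = {"s1", "s2", "s4"}  # sequences matching the read
# The LCA of the matches is "root" (s4 lies outside cladeA), a large clade
# with precision only 3/10; "genus1" attains the best F1 (precision 1,
# recall 2/3) and is chosen instead.
```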

  5. Accurate determination of characteristic relative permeability curves

    Science.gov (United States)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  6. Hydration of proteins: excess partial enthalpies of water and proteins.

    Science.gov (United States)

    Sirotkin, Vladimir A; Khadiullina, Aigul V

    2011-12-22

    Isothermal batch calorimetry was applied to study the hydration of proteins. The hydration process was analyzed by the simultaneous monitoring of the excess partial enthalpies of water and the proteins in the entire range of water content. Four unrelated proteins (lysozyme, chymotrypsinogen A, human serum albumin, and β-lactoglobulin) were used as models. The excess partial quantities are very sensitive to the changes in the state of water and proteins. At the lowest water weight fractions (w_1), the changes of the excess thermochemical functions can mainly be attributed to water addition. A transition from the glassy to the flexible state of the proteins is accompanied by significant changes in the excess partial quantities of water and the proteins. This transition appears at a water weight fraction of 0.06 when charged groups of proteins are covered. Excess partial quantities reach their fully hydrated values at w_1 > 0.5 when coverage of both polar and weakly interacting surface elements is complete. At the highest water contents, water addition has no significant effect on the excess thermochemical quantities. At w_1 > 0.5, changes in the excess functions can solely be attributed to changes in the state of the proteins.

  7. Prevalence of excessive screen time and associated factors in adolescents

    Science.gov (United States)

    de Lucena, Joana Marcela Sales; Cheng, Luanna Alexandra; Cavalcante, Thaísa Leite Mafaldo; da Silva, Vanessa Araújo; de Farias, José Cazuza

    2015-01-01

    Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study with 2874 high school adolescents with age 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyzed were: sociodemographic (gender, age, economic class, and skin color), physical activity and nutritional status of adolescents. Results: The prevalence of excessive screen time was 79.5% (95%CI 78.1-81.1) and it was higher in males (84.3%) compared to females (76.1%; p<0.001). In multivariate analysis, adolescent males, those aged 14-15 year old and the highest economic class had higher chances of exposure to excessive screen time. The level of physical activity and nutritional status of adolescents were not associated with excessive screen time. Conclusions: The prevalence of excessive screen time was high and varied according to sociodemographic characteristics of adolescents. It is necessary to develop interventions to reduce the excessive screen time among adolescents, particularly in subgroups with higher exposure. PMID:26298661

  8. Prevalence of excessive screen time and associated factors in adolescents

    Directory of Open Access Journals (Sweden)

    Joana Marcela Sales de Lucena

    2015-12-01

    Full Text Available Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study with 2874 high school adolescents with age 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyzed were: sociodemographic (gender, age, economic class, and skin color), physical activity and nutritional status of adolescents. Results: The prevalence of excessive screen time was 79.5% (95%CI 78.1-81.1) and it was higher in males (84.3%) compared to females (76.1%; p<0.001). In multivariate analysis, adolescent males, those aged 14-15 year old and the highest economic class had higher chances of exposure to excessive screen time. The level of physical activity and nutritional status of adolescents were not associated with excessive screen time. Conclusions: The prevalence of excessive screen time was high and varied according to sociodemographic characteristics of adolescents. It is necessary to develop interventions to reduce the excessive screen time among adolescents, particularly in subgroups with higher exposure.

  9. Excess of (236)U in the northwest Mediterranean Sea.

    Science.gov (United States)

    Chamizo, E; López-Lora, M; Bressac, M; Levy, I; Pham, M K

    2016-09-15

    In this work, we present the first ²³⁶U results in the northwestern Mediterranean. ²³⁶U is studied in a seawater column sampled at the DYFAMED (Dynamics of Atmospheric Fluxes in the Mediterranean Sea) station (Ligurian Sea, 43°25'N, 07°52'E). The obtained ²³⁶U/²³⁸U atom ratios in the dissolved phase, ranging from about 2×10⁻⁹ at 100 m depth to about 1.5×10⁻⁹ at 2350 m depth, indicate that anthropogenic ²³⁶U dominates the whole seawater column. The corresponding deep-water column inventory (12.6 ng/m² or 32.1×10¹² atoms/m²) exceeds by a factor of 2.5 the one expected from global fallout at similar latitudes (5 ng/m² or 13×10¹² atoms/m²), evidencing the influence of local or regional ²³⁶U sources in the western Mediterranean basin. On the other hand, the input of ²³⁶U associated with Saharan dust outbreaks is evaluated. An additional ²³⁶U annual deposition of about 0.2 pg/m², based on the study of atmospheric particles collected in Monaco during different Saharan dust intrusions, is estimated. The results obtained for the corresponding suspended solids collected at the DYFAMED station indicate that about 64% of that ²³⁶U stays in solution in seawater. Overall, this source accounts for about 0.1% of the ²³⁶U inventory excess observed at the DYFAMED station. The influence of the so-called Chernobyl fallout and the radioactive effluents produced by the different nuclear installations located around the Mediterranean basin might explain the inventory gap; however, further studies are necessary to reach a conclusion about its origin.

  10. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th

  11. A Bayesian Framework for Combining Valuation Estimates

    CERN Document Server

    Yee, Kenton K

    2007-01-01

    Obtaining more accurate equity value estimates is the starting point for stock selection, value-based indexing in a noisy market, and beating benchmark indices through tactical style rotation. Unfortunately, discounted cash flow, method of comparables, and fundamental analysis typically yield discrepant valuation estimates. Moreover, the valuation estimates typically disagree with market price. Can one form a superior valuation estimate by averaging over the individual estimates, including market price? This article suggests a Bayesian framework for combining two or more estimates into a superior valuation estimate. The framework justifies the common practice of averaging over several estimates to arrive at a final point estimate.
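Under independent normal errors, a Bayesian combination of point estimates reduces to a precision-weighted (inverse-variance) average; a minimal sketch with assumed numbers, noting that the article's framework is more general than this special case:

```python
def combine_estimates(estimates, variances):
    """Precision-weighted average of independent value estimates:
    the posterior mean under independent normal likelihoods."""
    weights = [1.0 / v for v in variances]
    posterior_mean = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    posterior_var = 1.0 / sum(weights)  # always below the smallest input variance
    return posterior_mean, posterior_var

# Assumed inputs: DCF, method of comparables, and market price, with
# invented noise levels; market price is treated as the least noisy.
mean, var = combine_estimates([120.0, 100.0, 110.0], [400.0, 100.0, 25.0])
```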

  12. Excess relative risk as an effect measure in case-control studies of rare diseases.

    Science.gov (United States)

    Lee, Wen-Chung

    2014-01-01

    Epidemiologists often use ratio-type indices (rate ratio, risk ratio and odds ratio) to quantify the association between exposure and disease. By comparison, less attention has been paid to effect measures on a difference scale (excess rate or excess risk). The excess relative risk (ERR) used primarily by radiation epidemiologists is of peculiar interest here, in that it involves both difference and ratio operations. The ERR index (but not the difference-type indices) is estimable in case-control studies. Using the theory of sufficient component cause model, the author shows that when there is no mechanistic interaction (no synergism in the sufficient cause sense) between the exposure under study and the stratifying variable, the ERR index (but not the ratio-type indices) in a rare-disease case-control setting should remain constant across strata and can therefore be regarded as a common effect parameter. By exploiting this homogeneity property, the related attributable fraction indices can also be estimated with greater precision. The author demonstrates the methodology (SAS codes provided) using a case-control dataset, and shows that ERR preserves the logical properties of the ratio-type indices. In light of the many desirable properties of the ERR index, the author advocates its use as an effect measure in case-control studies of rare diseases.
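Because the odds ratio approximates the risk ratio for a rare disease, ERR = RR − 1 can be estimated from case-control counts as OR − 1; a sketch for a single stratum, with invented counts:

```python
def odds_ratio(a, b, c, d):
    """a: exposed cases, b: unexposed cases, c: exposed controls, d: unexposed controls."""
    return (a * d) / (b * c)

def excess_relative_risk(a, b, c, d):
    """For a rare disease OR ~= RR, so ERR = RR - 1 ~= OR - 1, which is
    why ERR is estimable from case-control data."""
    return odds_ratio(a, b, c, d) - 1.0

# Hypothetical stratum: 30 exposed / 20 unexposed cases, 40 / 80 controls
err = excess_relative_risk(30, 20, 40, 80)  # OR = 3.0, so ERR = 2.0
```

Under no mechanistic interaction, the abstract's result says this ERR should be roughly constant across strata, so stratum-specific values like the one above can be pooled into a common effect parameter.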

  13. Excess relative risk as an effect measure in case-control studies of rare diseases.

    Directory of Open Access Journals (Sweden)

    Wen-Chung Lee

    Full Text Available Epidemiologists often use ratio-type indices (rate ratio, risk ratio and odds ratio) to quantify the association between exposure and disease. By comparison, less attention has been paid to effect measures on a difference scale (excess rate or excess risk). The excess relative risk (ERR) used primarily by radiation epidemiologists is of particular interest here, in that it involves both difference and ratio operations. The ERR index (but not the difference-type indices) is estimable in case-control studies. Using the theory of the sufficient component cause model, the author shows that when there is no mechanistic interaction (no synergism in the sufficient cause sense) between the exposure under study and the stratifying variable, the ERR index (but not the ratio-type indices) in a rare-disease case-control setting should remain constant across strata and can therefore be regarded as a common effect parameter. By exploiting this homogeneity property, the related attributable fraction indices can also be estimated with greater precision. The author demonstrates the methodology (SAS codes provided) using a case-control dataset, and shows that ERR preserves the logical properties of the ratio-type indices. In light of the many desirable properties of the ERR index, the author advocates its use as an effect measure in case-control studies of rare diseases.

  14. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan

    2009-11-01

    Full Text Available Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI at regional scales accurately. Variations of background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative method (DSD) is proposed in this paper. This method can estimate LAI accurately by analyzing the canopy anisotropy. The effect of the background can also be effectively removed, so the inversion precision and the dynamic range can be improved remarkably, as has been proved by numerical simulations. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach by which the data can be de-noised in the spectral and spatial dimensions simultaneously. It shows that the filtering method can remove the random noise effectively; therefore, the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; the hyperspectral and multi-angular image of the study region was acquired from the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against the ground truth of 11 sites. The results show that, with the innovative filtering method applied, the new LAI inversion method is accurate and effective.
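
    The abstract's point about derivative methods being noise-sensitive can be demonstrated numerically. The sketch below is not the authors' DSD algorithm or their filter; it only shows, on a synthetic spectrum with an assumed band shape and noise level, why smoothing before a second derivative is essential.

```python
# Illustrative sketch (not the authors' DSD method): a spectral second
# derivative strongly amplifies high-frequency noise, so filtering
# before differentiation is essential. Band position, noise level,
# and the simple moving-average filter are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
wavelength = np.linspace(400.0, 1000.0, 601)          # nm, 1 nm step
clean = np.exp(-((wavelength - 700.0) / 50.0) ** 2)   # synthetic band
noisy = clean + rng.normal(0.0, 0.01, wavelength.size)

def smooth(y, width=11):
    """Simple moving-average filter along the spectral dimension."""
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")

def second_derivative(y, dx):
    """Central-difference second derivative."""
    return np.gradient(np.gradient(y, dx), dx)

dx = wavelength[1] - wavelength[0]
d2_raw = second_derivative(noisy, dx)      # dominated by noise
d2_filt = second_derivative(smooth(noisy), dx)
```

    Comparing interior points (to avoid convolution edge effects), the filtered second derivative fluctuates far less than the raw one, which is the motivation for de-noising before derivative-based inversion.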

  15. Analysis of factors associated with excess weight in school children

    Directory of Open Access Journals (Sweden)

    Renata Paulino Pinto

    Full Text Available Abstract Objective: To determine the prevalence of overweight and obesity in schoolchildren aged 10 to 16 years and its association with dietary and behavioral factors. Methods: Cross-sectional study that evaluated 505 adolescents using a structured questionnaire and anthropometric data. The data were analyzed using the t-test for independent samples and the Mann-Whitney test to compare means and medians, respectively, and the Chi-square test for proportions. The prevalence ratio (PR) and the 95% confidence interval were used to estimate the degree of association between variables. Logistic regression was employed to adjust the estimates for confounding factors. A significance level of 5% was considered for all analyses. Results: Excess weight was observed in 30.9% of the schoolchildren: 18.2% overweight and 12.7% obesity. There was no association between weight alterations and dietary/behavioral habits in the bivariate and multivariate analyses. However, associations were observed in relation to gender. Daily consumption of sweets [PR=0.75 (0.64-0.88)] and soft drinks [PR=0.82 (0.70-0.97)] was less frequent among boys; having lunch daily was slightly more often reported by boys [OR=1.11 (1.02-1.22)]. Physical activity practice (≥3 times/week) was more often mentioned by boys, and the association measures disclosed two-fold more physical activity in this group [PR=2.04 (1.56-2.67)] when compared to girls. Approximately 30% of boys and 40% of girls stated they did not perform activities requiring energy expenditure during free periods, with boys being 32% less idle than girls [PR=0.68 (0.60-0.76)]. Conclusions: A high prevalence of both overweight and obesity was observed, as well as unhealthy habits in the study population, regardless of the presence of weight alterations. Health promotion strategies in schools should be encouraged, in order to promote healthy habits and behaviors among all students.

  16. Analysis of factors associated with excess weight in school children.

    Science.gov (United States)

    Pinto, Renata Paulino; Nunes, Altacílio Aparecido; de Mello, Luane Marques

    2016-12-01

    To determine the prevalence of overweight and obesity in schoolchildren aged 10 to 16 years and its association with dietary and behavioral factors. Cross-sectional study that evaluated 505 adolescents using a structured questionnaire and anthropometric data. The data were analyzed using the t-test for independent samples and the Mann-Whitney test to compare means and medians, respectively, and the Chi-square test for proportions. The prevalence ratio (PR) and the 95% confidence interval were used to estimate the degree of association between variables. Logistic regression was employed to adjust the estimates for confounding factors. A significance level of 5% was considered for all analyses. Excess weight was observed in 30.9% of the schoolchildren: 18.2% overweight and 12.7% obesity. There was no association between weight alterations and dietary/behavioral habits in the bivariate and multivariate analyses. However, associations were observed in relation to gender. Daily consumption of sweets [PR=0.75 (0.64-0.88)] and soft drinks [PR=0.82 (0.70-0.97)] was less frequent among boys; having lunch daily was slightly more often reported by boys [OR=1.11 (1.02-1.22)]. Physical activity practice (≥3 times/week) was more often mentioned by boys, and the association measures disclosed two-fold more physical activity in this group [PR=2.04 (1.56-2.67)] when compared to girls. Approximately 30% of boys and 40% of girls stated they did not perform activities requiring energy expenditure during free periods, with boys being 32% less idle than girls [PR=0.68 (0.60-0.76)]. A high prevalence of both overweight and obesity was observed, as well as unhealthy habits in the study population, regardless of the presence of weight alterations. Health promotion strategies in schools should be encouraged, in order to promote healthy habits and behaviors among all students. Copyright © 2016 Sociedade de Pediatria de São Paulo. Published by Elsevier Editora Ltda. All rights reserved.
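
    The prevalence-ratio estimates reported in the abstract can be reproduced in form (not in data) with a standard calculation: PR as a ratio of two proportions with a log-normal Wald confidence interval. The 2×2 counts below are made up for illustration; they are not the study's data.

```python
# Sketch of a prevalence ratio (PR) with a 95% Wald CI on the log
# scale, the kind of estimate quoted in the abstract. Counts are
# hypothetical, not the study's data.
import math

def prevalence_ratio(exposed_cases, exposed_total,
                     unexposed_cases, unexposed_total):
    p1 = exposed_cases / exposed_total
    p0 = unexposed_cases / unexposed_total
    pr = p1 / p0
    # Standard error of ln(PR) for independent binomial proportions
    se = math.sqrt((1 - p1) / exposed_cases + (1 - p0) / unexposed_cases)
    lo = math.exp(math.log(pr) - 1.96 * se)
    hi = math.exp(math.log(pr) + 1.96 * se)
    return pr, (lo, hi)

# e.g. 60/240 boys vs 40/260 girls with the outcome (made-up numbers)
pr, (lo, hi) = prevalence_ratio(60, 240, 40, 260)
```

    A PR above 1 with a confidence interval excluding 1, as here, is the pattern behind statements like "two-fold more physical activity [PR=2.04 (1.56-2.67)]" in the abstract.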

  17. GLIMPSE: Accurate 3D weak lensing reconstructions using sparsity

    CERN Document Server

    Leonard, Adrienne; Starck, Jean-Luc

    2013-01-01

    We present GLIMPSE - Gravitational Lensing Inversion and MaPping with Sparse Estimators - a new algorithm to generate density reconstructions in three dimensions from photometric weak lensing measurements. This is an extension of earlier work in one dimension aimed at applying compressive sensing theory to the inversion of gravitational lensing measurements to recover 3D density maps. Using the assumption that the density can be represented sparsely in our chosen basis - 2D transverse wavelets and 1D line-of-sight Dirac functions - we show that clusters of galaxies can be identified and accurately localised and characterised using this method. Throughout, we use simulated data consistent with the quality currently attainable in large surveys. We present a thorough statistical analysis of the errors and biases in both the redshifts of detected structures and their amplitudes. The GLIMPSE method is able to produce reconstructions at significantly higher resolution than the input data; in this paper we show reco...
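
    The sparsity assumption at the heart of such compressive-sensing reconstructions can be illustrated with its basic building block. The sketch below is not the GLIMPSE algorithm; it only demonstrates soft thresholding, the proximal operator of the l1 penalty used in sparse solvers, on a made-up coefficient vector.

```python
# Illustration of the sparsity prior behind compressive-sensing
# reconstructions (not GLIMPSE itself): soft thresholding suppresses
# small noise-dominated coefficients while keeping the few large ones
# that carry the signal. The coefficient vector is hypothetical.
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(1)
coeffs = np.zeros(200)
coeffs[[10, 50, 120]] = [5.0, -4.0, 3.0]   # three sparse "structures"
noisy = coeffs + rng.normal(0.0, 0.3, coeffs.size)

denoised = soft_threshold(noisy, lam=1.0)
```

    After thresholding, nearly all coefficients return to exactly zero while the three injected spikes survive (slightly shrunk by the threshold), which is why a sparse basis lets structures be localised well above the noise.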

  18. Fast and spectrally accurate summation of 2-periodic Stokes potentials

    CERN Document Server

    Lindbo, Dag

    2011-01-01

    We derive an Ewald decomposition for the Stokeslet in planar periodicity and a novel PME-type O(N log N) method for the fast evaluation of the resulting sums. The decomposition is the natural 2P counterpart to the classical 3P decomposition by Hasimoto, and is given in an explicit form not found in the literature. Truncation error estimates are provided to aid in selecting parameters. The fast, PME-type method appears to be the first fast method for computing Stokeslet Ewald sums in planar periodicity, and has three attractive properties: it is spectrally accurate; it uses the minimal amount of memory that a gridded Ewald method can use; and it provides clarity regarding numerical errors and how to choose parameters. Analytical and numerical results are given to support this. We explore the practicalities of the proposed method, and survey the computational issues involved in applying it to 2-periodic boundary integral Stokes problems.

  19. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E

    2014-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all of these, our algorithm performs as well as or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
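
    The quantity being maximised, Newman's modularity, can be computed directly from its definition. The sketch below is not the authors' spectral algorithm; it evaluates Q = (1/2m) Σᵢⱼ [Aᵢⱼ − kᵢkⱼ/2m] δ(cᵢ, cⱼ) on an assumed toy graph of two triangles joined by a single edge.

```python
# Hedged sketch (not the paper's algorithm): Newman's modularity Q
# computed from its definition. The toy graph is an illustrative
# assumption, not a network from the paper.

def modularity(adj, communities):
    """Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    n = len(adj)
    degrees = [sum(row) for row in adj]
    two_m = float(sum(degrees))   # 2m = total degree
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - degrees[i] * degrees[j] / two_m
    return q / two_m

# Two triangles (0,1,2) and (3,4,5) bridged by the edge 2-3
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = [[0] * 6 for _ in range(6)]
for i, j in edges:
    adj[i][j] = adj[j][i] = 1

q_split = modularity(adj, [0, 0, 0, 1, 1, 1])   # natural partition
q_whole = modularity(adj, [0, 0, 0, 0, 0, 0])   # everything together
```

    Splitting into the two triangles gives Q = 5/14 ≈ 0.357, against Q = 0 for the trivial partition. A z-score for the effect size, in the spirit of the abstract, would then compare a network's maximum Q with the mean and variance of maximum modularity over an Erdős–Rényi ensemble with matched size.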

  20. Streptomycin interference in Jaffe reaction - Possible false positive creatinine estimation in excessive dose exposure

    DEFF Research Database (Denmark)

    Syal, Kirtimaan; Srinivasan, Anand; Banerjee, Dibyajyoti

    2013-01-01

    Objectives: To study the potential of commonly used aminoglycoside antibiotics to form non-creatinine chromogen with alkaline picrate reagent. Design and methods: We studied the non-creatinine chromogen formation of various concentrations of streptomycin, amikacin, kanamycin, netilmicin, gentamicin...