WorldWideScience

Sample records for accurately estimate excess

  1. Rosiglitazone: can meta-analysis accurately estimate excess cardiovascular risk given the available data? Re-analysis of randomized trials using various methodologic approaches

    Directory of Open Access Journals (Sweden)

    Friedrich Jan O

    2009-01-01

    …although far from statistically significant. Conclusion: We have shown that alternative, reasonable methodological approaches to the rosiglitazone meta-analysis can yield increased or decreased risks that are either statistically significant or not significant at the p = 0.05 level for both myocardial infarction and cardiovascular death. Completion of ongoing trials may help to generate more accurate estimates of rosiglitazone's effect on cardiovascular outcomes. However, given that almost all point estimates suggest harm rather than benefit, and given the availability of alternative agents, the use of rosiglitazone may decline greatly before more definitive safety data are generated.

  2. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    …the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes and travel times and their respective likelihoods, both for routes traveled as well as for sub-routes thereof. … are collected within the building complex. Results indicate that InTraTime is superior with respect to metrics such as deployment cost, maintenance cost and estimation accuracy, yielding an average deviation from actual travel times of 11.7%. This accuracy was achieved despite using a minimal-effort setup…

  3. Accurate pose estimation for forensic identification

    Science.gov (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  4. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.

  5. Binomial Distribution Sample Confidence Intervals Estimation 6. Excess Risk

    Directory of Open Access Journals (Sweden)

    Sorana BOLBOACĂ

    2004-02-01

    Full Text Available We present the problem of confidence interval estimation for the excess risk (the Y/n-X/m fraction), a parameter which allows evaluation of the specificity of an association between predisposing or causal factors and disease in medical studies. The parameter is computed from a 2×2 contingency table of qualitative variables. The aim of this paper is to introduce new methods of computing confidence intervals for the excess risk, called DAC, DAs, DAsC, DBinomial, and DBinomialC, and to compare their performance with the asymptotic method called here DWald. In order to assess the methods, the PHP programming language was used and a PHP program was created. The performance of each method for different sample sizes and different values of the binomial variables was assessed using a set of criteria. First, the upper and lower boundaries for a given X, Y and a specified sample size were computed for the chosen methods. Second, the average and standard deviation of the experimental errors, and the deviation relative to the imposed significance level α = 5%, were assessed. The methods were assessed on random values of the binomial variables and on sample sizes from 4 to 1000. The experiments show that the DAC method performs well in confidence interval estimation for the excess risk.
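
    As a rough illustration of the excess risk described above, the sketch below computes the quantity Y/n - X/m from a 2×2 table together with an asymptotic (Wald-type) confidence interval, i.e. roughly what the abstract calls DWald; the DAC/DAs/DAsC/DBinomial variants are not reproduced here, and the counts in the example are invented.

        # Minimal sketch (assumed Wald interval, not the paper's DAC family).
        from math import sqrt

        def excess_risk_wald_ci(x, m, y, n, z=1.96):
            """Excess risk Y/n - X/m with an asymptotic (Wald) 95% CI."""
            p1, p2 = y / n, x / m
            d = p1 - p2
            se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / m)
            return d, d - z * se, d + z * se

        # Hypothetical 2x2 table: 12 cases among 40 exposed, 5 among 60 unexposed.
        print(excess_risk_wald_ci(x=5, m=60, y=12, n=40))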

  6. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  7. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  8. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.

  9. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
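
    For reference, the orthogonal signal pair mentioned above can be obtained with the standard (amplitude-invariant) Clarke transformation; the sketch below only performs that αβ conversion and builds the complex signal used for estimation, not the paper's NLS/Newton-Raphson fit. The 50 Hz example values are invented.

        # Minimal sketch of the alpha-beta (Clarke) transformation.
        import numpy as np

        def alpha_beta(va, vb, vc):
            v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
            v_beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (vb - vc)
            return v_alpha, v_beta

        # Hypothetical balanced 50 Hz three-phase set sampled at 5 kHz.
        t = np.arange(0, 0.04, 2e-4)
        va = np.cos(2 * np.pi * 50 * t)
        vb = np.cos(2 * np.pi * 50 * t - 2 * np.pi / 3)
        vc = np.cos(2 * np.pi * 50 * t + 2 * np.pi / 3)
        v_alpha, v_beta = alpha_beta(va, vb, vc)
        s = v_alpha + 1j * v_beta   # orthogonal pair combined for parameter estimation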

  10. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Full Text Available Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants and often result in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using the Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
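
    The mixture-model idea can be illustrated with a generic EM iteration: given an assumed read-to-genome probability matrix (e.g. derived from mapping scores) and the genome lengths, responsibilities and mixture weights are updated alternately, and the weights are length-normalized into relative abundances. This is only a simplified sketch in the spirit of GRAMMy, not its actual implementation; all numbers below are invented.

        # Simplified EM for genome relative abundance (GRAMMy-style sketch).
        import numpy as np

        def em_relative_abundance(p, length, n_iter=200):
            """p[i, j]: probability that read i comes from genome j (assumed given)."""
            n_reads, n_genomes = p.shape
            pi = np.full(n_genomes, 1.0 / n_genomes)      # mixture weights
            for _ in range(n_iter):
                z = p * pi                                # E-step: responsibilities
                z /= z.sum(axis=1, keepdims=True)
                pi = z.sum(axis=0) / n_reads              # M-step: updated weights
            abundance = pi / length                       # correct for genome size
            return abundance / abundance.sum()

        # Toy example: 3 reads, 2 genomes of different sizes.
        p = np.array([[0.9, 0.1], [0.4, 0.6], [0.5, 0.5]])
        print(em_relative_abundance(p, length=np.array([4.0e6, 2.0e6])))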

  11. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Full Text Available Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  12. Towards SI-traceable radio occultation excess phase processing with integrated uncertainty estimation for climate applications

    Science.gov (United States)

    Innerkofler, Josef; Pock, Christian; Kirchengast, Gottfried; Schwaerz, Marc; Jaeggi, Adrian; Schwarz, Jakob

    2016-04-01

    The GNSS Radio Occultation (RO) measurement technique is highly valuable for climate monitoring of the atmosphere as it provides accurate and precise measurements in the troposphere and stratosphere regions with global coverage, long-term stability, and virtually all-weather capability. The novel Reference Occultation Processing System (rOPS), currently under development at the WEGC at the University of Graz, aims to process raw RO measurements into essential climate variables, such as temperature, pressure, and tropospheric water vapor, in a way which is SI-traceable to the universal time standard and which includes rigorous uncertainty propagation. As part of this rOPS climate-quality processing system, accurate atmospheric excess phase profiles with new approaches integrating uncertainty propagation are derived from the raw occultation tracking data and orbit data. Regarding the latter, highly accurate orbit positions and velocities of the GNSS transmitter satellites and the RO receiver satellites in low Earth orbit (LEO) need to be determined, in order to enable high accuracy of the excess phase profiles. Using several representative test days of GPS orbit data from the CODE and IGS archives, which are available at accuracies of about 3 cm (position) / 0.03 mm/s (velocity), and employing the Bernese 5.2 and Napeos 3.3.1 software packages for the LEO orbit determination of the CHAMP, GRACE, and MetOp RO satellites, we achieved robust SI-traced LEO orbit uncertainty estimates of about 5 cm (position) / 0.05 mm/s (velocity) for the daily orbits, including estimates of systematic uncertainty bounds and of propagated random uncertainties. For COSMIC RO satellites, we found decreased accuracy estimates near 10-15 cm (position) / 0.1-0.15 mm/s (velocity), since the characteristics of the small COSMIC satellite platforms and antennas provide somewhat less favorable orbit determination conditions. We present the setup of how we (I) used the Bernese and Napeos packages in mutual…

  13. Accurate estimation of the boundaries of a structured light pattern.

    Science.gov (United States)

    Lee, Sukhan; Bui, Lam Quang

    2011-06-01

    Depth recovery based on structured light using stripe patterns, especially for a region-based codec, demands accurate estimation of the true boundary of a light pattern captured on a camera image. This is because the accuracy of the estimated boundary has a direct impact on the accuracy of the depth recovery. However, recovering the true boundary of a light pattern is considered difficult due to the deformation incurred primarily by the texture-induced variation of the light reflectance at surface locales. Especially for heavily textured surfaces, the deformation of pattern boundaries becomes rather severe. We present here a novel (to the best of our knowledge) method to estimate the true boundaries of a light pattern that are severely deformed due to the heavy textures involved. First, a general formula that models the deformation of the projected light pattern at the imaging end is presented, taking into account not only the light reflectance variation but also the blurring along the optical passages. The local reflectance indices are then estimated by applying the model to two specially chosen reference projections, all-bright and all-dark. The estimated reflectance indices are then used to transform the edge-deformed, captured pattern signal into the edge-corrected, canonical pattern signal. A canonical pattern implies the virtual pattern that would have resulted if there were neither the reflectance variation nor the blurring in imaging optics. Finally, we estimate the boundaries of a light pattern by intersecting the canonical form of a light pattern with that of its inverse pattern. The experimental results show that the proposed method results in significant improvements in the accuracy of the estimated boundaries under various adverse conditions.
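
    The reference-projection idea can be sketched as follows: with all-bright and all-dark captures, a scanline is normalized to remove the local reflectance variation, and a boundary is then located as the sub-pixel crossing of the canonical pattern and its inverse. This toy sketch ignores the paper's blur model and uses invented scanline values.

        # Sketch: reflectance normalization and pattern/inverse-pattern intersection.
        import numpy as np

        def canonical(signal, bright, dark, eps=1e-6):
            """Remove local reflectance variation using the two reference images."""
            return (signal - dark) / np.maximum(bright - dark, eps)

        def boundary_crossings(pattern, inverse):
            """Sub-pixel positions where the two canonical signals intersect."""
            d = pattern - inverse
            idx = np.where(np.signbit(d[:-1]) != np.signbit(d[1:]))[0]  # sign changes
            return idx + d[idx] / (d[idx] - d[idx + 1])                 # linear interpolation

        # Toy scanline with a reflectance step at pixel 50.
        x = np.arange(100)
        reflectance = np.where(x < 50, 1.0, 0.4)
        ideal = (np.sin(2 * np.pi * x / 20) > 0).astype(float)
        bright, dark = reflectance * 1.0, reflectance * 0.05
        cap_p = dark + (bright - dark) * ideal          # captured pattern
        cap_i = dark + (bright - dark) * (1 - ideal)    # captured inverse pattern
        print(boundary_crossings(canonical(cap_p, bright, dark),
                                 canonical(cap_i, bright, dark)))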

  14. Towards accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2016-12-13

    Reliable estimates of animal density are fundamental to our understanding of ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation biology since wildlife authorities rely on these figures to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging species such as carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores. African lions (Panthera leo) provide an excellent example: although abundance indices have been shown to produce poor inferences, they continue to be used to estimate lion density and inform management and policy. In this study we adapt a Bayesian spatially explicit capture-recapture model to estimate lion density in the Maasai Mara National Reserve (MMNR) and surrounding conservancies in Kenya. We utilize sightings data from a three-month survey period to produce statistically rigorous spatial density estimates. Overall posterior mean lion density was estimated to be 16.85 (posterior standard deviation = 1.30) lions over one year of age per 100 km² with a sex ratio of 2.2♀:1♂. We argue that such methods should be developed, improved and favored over less reliable methods such as track and call-up surveys. We caution against trend analyses based on surveys of differing reliability and call for a unified framework to assess lion numbers across their range in order for better informed management and policy decisions to be made.

  15. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of saving data while still requiring an accurate estimate of the target's location under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that reduces the number of sensor nodes needed through accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  16. Accurate estimators of correlation functions in Fourier space

    Science.gov (United States)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher order interpolation kernels than the standard Cloud-In-Cell algorithm results in significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.

  17. Guidelines for accurate EC50/IC50 estimation.

    Science.gov (United States)

    Sebaugh, J L

    2011-01-01

    This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.
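
    The distinction between the two definitions can be made concrete with a 4-parameter logistic fit: the relative EC50 is the fitted midpoint parameter c, while the absolute EC50 is the concentration at which the fitted curve crosses the 50% control response. The sketch below assumes one common 4PL parameterization and uses invented concentration-response data, not values from the article.

        # Sketch: relative vs. absolute EC50 from a 4-parameter logistic fit.
        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, a, b, c, d):
            """a = upper plateau, d = lower plateau, c = relative EC50, b = slope."""
            return d + (a - d) / (1.0 + (x / c) ** b)

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100.0])
        resp = np.array([98, 96, 90, 75, 52, 30, 14, 6, 3.0])   # response in % of control

        (a, b, c, d), _ = curve_fit(four_pl, conc, resp, p0=[100, 1, 1, 0])
        relative_ec50 = c
        y50 = 50.0                               # midpoint of 0% and 100% controls
        absolute_ec50 = c * ((a - d) / (y50 - d) - 1.0) ** (1.0 / b)
        print(relative_ec50, absolute_ec50)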

  18. Efficient floating diffuse functions for accurate characterization of the surface-bound excess electrons in water cluster anions.

    Science.gov (United States)

    Zhang, Changzhe; Bu, Yuxiang

    2017-01-25

    In this work, the effect of diffuse function types (atom-centered diffuse functions versus floating functions and s-type versus p-type diffuse functions) on the structures and properties of three representative water cluster anions featuring a surface-bound excess electron is studied and we find that an effective combination of these two kinds of diffuse functions can not only reduce the computational cost but also, most importantly, considerably improve the accuracy of results and even avoid incorrect predictions of spectra and the EE shape. Our results indicate that (a) simple augmentation of atom-centered diffuse functions is beneficial for the vertical detachment energy convergence, but it leads to very poor descriptions for the singly occupied molecular orbital (SOMO) and lowest unoccupied molecular orbital (LUMO) distributions of the water cluster anions featuring a surface-bound excess electron and thus a significant ultraviolet spectrum redshift; (b) the ghost-atom-based floating diffuse functions can not only contribute to accurate electronic calculations of the ground state but also avoid poor and even incorrect descriptions of the SOMO and the LUMO induced by excessive augmentation of atom-centered diffuse functions; (c) the floating functions can be realized by ghost atoms and their positions could be determined through an optimization routine along the dipole moment vector direction. In addition, both s- and p-type floating functions need to be included in the basis set; they are responsible for the ground (s-type character) and excited (p-type character) states of the surface-bound excess electron, respectively. The exponents of the diffuse functions should also be determined to make the diffuse functions cover the main region of the excess electron distribution. Note that excessive augmentation of such diffuse functions is redundant and can even lead to unreasonable LUMO characteristics.

  19. Accurate estimation of third-order moments from turbulence measurements

    Directory of Open Access Journals (Sweden)

    J. J. Podesta

    2009-02-01

    Full Text Available Politano and Pouquet's law, a generalization of Kolmogorov's four-fifths law to incompressible MHD, makes it possible to measure the energy cascade rate in incompressible MHD turbulence by means of third-order moments. In hydrodynamics, accurate measurement of third-order moments requires large amounts of data because the probability distributions of velocity-differences are nearly symmetric and the third-order moments are relatively small. Measurements of the energy cascade rate in solar wind turbulence have recently been performed for the first time, but without careful consideration of the accuracy or statistical uncertainty of the required third-order moments. This paper investigates the statistical convergence of third-order moments as a function of the sample size N. It is shown that the accuracy of the third moment ⟨(δv∥)³⟩ depends on the number of correlation lengths spanned by the data set, and a method of estimating the statistical uncertainty of the third moment is developed. The technique is illustrated using both wind tunnel data and solar wind data.
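
    The convergence issue discussed above can be illustrated by estimating the third-order structure function S3(τ) = ⟨(v(t+τ) − v(t))³⟩ from progressively longer records and watching the estimate stabilize. The synthetic signal below is only a stand-in for turbulence data; lags and sample sizes are invented.

        # Sketch: sample third-order moment of increments vs. record length.
        import numpy as np

        def third_moment(v, lag):
            dv = v[lag:] - v[:-lag]
            return np.mean(dv ** 3)

        rng = np.random.default_rng(0)
        v = np.cumsum(rng.standard_normal(200_000)) * 1e-2   # correlated toy series

        lag = 50
        for n in (2_000, 20_000, 200_000):                   # growing sample size N
            print(n, third_moment(v[:n], lag))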

  20. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Directory of Open Access Journals (Sweden)

    Gagandeep Singh

    2013-05-01

    Full Text Available This paper proposes a novel approach for accurate estimation of eye center in face images. A distributed voting-based approach in which every pixel votes is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in the direction opposite to the gradient direction, and the weightage of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763

  1. Using inpainting to construct accurate cut-sky CMB estimators

    CERN Document Server

    Gruetjen, H F; Liguori, M; Shellard, E P S

    2015-01-01

    The direct evaluation of manifestly optimal, cut-sky CMB power spectrum and bispectrum estimators is numerically very costly, due to the presence of inverse-covariance filtering operations. This justifies the investigation of alternative approaches. In this work, we mostly focus on an inpainting algorithm that was introduced in recent CMB analyses to cure cut-sky suboptimalities of bispectrum estimators. First, we show that inpainting can equally be applied to the problem of unbiased estimation of power spectra. We then compare the performance of a novel inpainted CMB temperature power spectrum estimator to the popular apodised pseudo-$C_l$ (PCL) method and demonstrate, both numerically and with analytic arguments, that inpainted power spectrum estimates significantly outperform PCL estimates. Finally, we study the case of cut-sky bispectrum estimators, comparing the performance of three different approaches: inpainting, apodisation and a novel low-l leaning scheme. Providing an analytic argument why the loca...

  2. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  3. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  4. Simulation model accurately estimates total dietary iodine intake

    NARCIS (Netherlands)

    Verkaik-Kloosterman, J.; Veer, van 't P.; Ocke, M.C.

    2009-01-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed.

  5. Accurate estimation of solvation free energy using polynomial fitting techniques.

    Science.gov (United States)

    Shyu, Conrad; Ytreberg, F Marty

    2011-01-15

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem, 2009, 30, 2297). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small-molecule solvation free energy. Our simulations show that, using such polynomial techniques and nonequidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with use of nonequidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software.
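
    In outline, the approach amounts to fitting a smooth curve to the ⟨dU/dλ⟩ samples and integrating that curve over λ ∈ [0, 1] to obtain ΔF. The sketch below fits a simple polynomial with NumPy and integrates it analytically; the λ grid and slope values are invented, and the spline and soft-core details of the paper are not reproduced.

        # Sketch: polynomial fit of thermodynamic integration data, then integrate.
        import numpy as np

        lam = np.array([0.0, 0.05, 0.2, 0.5, 0.8, 0.95, 1.0])        # non-equidistant lambdas
        dU_dlam = np.array([12.1, 9.8, 6.0, 1.5, -2.2, -3.9, -4.4])  # <dU/dlambda> samples

        coeffs = np.polynomial.polynomial.polyfit(lam, dU_dlam, deg=4)
        poly = np.polynomial.Polynomial(coeffs)
        anti = poly.integ()                                          # antiderivative
        delta_F = anti(1.0) - anti(0.0)
        print(f"Estimated free energy difference: {delta_F:.3f}")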

  6. Accurate photometric redshift probability density estimation - method comparison and application

    CERN Document Server

    Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-01-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single-value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular Neural Network code (ANNz). In our use case, this improvement…

  7. Accurate estimates of solutions of second order recursions

    NARCIS (Netherlands)

    Mattheij, R.M.M.

    1975-01-01

    Two important types of two-dimensional matrix-vector and second-order scalar recursions are studied. Both types possess two kinds of solutions (to be called forward and backward dominant solutions). For the directions of these solutions sharp estimates are derived, from which the solutions themselves…

  8. Accurate Estimators of Correlation Functions in Fourier Space

    CERN Document Server

    Sefusatti, Emiliano; Scoccimarro, Roman; Couchman, Hugh

    2015-01-01

    Efficient estimators of Fourier-space statistics for large number of objects rely on Fast Fourier Transforms (FFTs), which are affected by aliasing from unresolved small scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for e.g. the power spectrum when desired systematic biases are well under per-cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud in Cell a...

  9. How accurate are the time delay estimates in gravitational lensing?

    CERN Document Server

    Cuevas-Tello, J C; Tino, P; Cuevas-Tello, Juan C.; Raychaudhury, Somak; Tino, Peter

    2006-01-01

    We present a novel approach to estimate the time delay between light curves of multiple images in a gravitationally lensed system, based on Kernel methods in the context of machine learning. We perform various experiments with artificially generated irregularly-sampled data sets to study the effect of the various levels of noise and the presence of gaps of various size in the monitoring data. We compare the performance of our method with various other popular methods of estimating the time delay and conclude, from experiments with artificial data, that our method is least vulnerable to missing data and irregular sampling, within reasonable bounds of Gaussian noise. Thereafter, we use our method to determine the time delays between the two images of quasar Q0957+561 from radio monitoring data at 4 cm and 6 cm, and conclude that if only the observations at epochs common to both wavelengths are used, the time delay gives consistent estimates, which can be combined to yield 408 ± 12 days. The full 6 cm dataset, ...

  10. Simulation model accurately estimates total dietary iodine intake.

    Science.gov (United States)

    Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C

    2009-07-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, although the intakes of some young children were too low. In the potential future situation with lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods will contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.

  11. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  12. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    CERN Document Server

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

    In research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data" one of the central issues in sensor networks is track the location, of moving object which have overhead of saving data, an accurate estimation of the target location of object with energy constraint .We do not have any mechanism which control and maintain data .The wireless communication bandwidth is also very limited. Some field which is using this technique are flood and typhoon detection, forest fire detection, temperature and humidity and ones we have these information use these information back to a central air conditioning and ventilation system. In this research paper, we propose protocol based on the prediction and adaptive based algorithm which is using less sensor node reduced by an accurate estimation of the target location. we are using minimum three sensor node to get the accurate position .We can extend it upto four or five to find more accurate location ...

  13. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators…

  14. An Accurate Approach to Large-Scale IP Traffic Matrix Estimation

    Science.gov (United States)

    Jiang, Dingde; Hu, Guangmin

    This letter proposes a novel method of large-scale IP traffic matrix (TM) estimation, called algebraic reconstruction technique inference (ARTI), which is based on the partial flow measurement and Fratar model. In contrast to previous methods, ARTI can accurately capture the spatio-temporal correlations of TM. Moreover, ARTI is computationally simple since it uses the algebraic reconstruction technique. We use the real data from the Abilene network to validate ARTI. Simulation results show that ARTI can accurately estimate large-scale IP TM and track its dynamics.
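
    The algebraic reconstruction technique at the core of the method can be sketched as a Kaczmarz-style sweep over the link-load equations y = A x, projecting onto each row in turn and clipping negative traffic. The routing matrix and loads below are toy values, and the Fratar-model prior used by ARTI is not reproduced.

        # Sketch: Kaczmarz / ART sweeps for y = A x with non-negativity.
        import numpy as np

        def art(A, y, n_sweeps=100, relax=1.0):
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):               # one sweep over all rows
                    a = A[i]
                    x += relax * (y[i] - a @ x) / (a @ a) * a
                    x = np.maximum(x, 0.0)                # traffic cannot be negative
            return x

        # Toy network: 3 link loads generated by 4 origin-destination flows.
        A = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [1, 0, 1, 1]], dtype=float)
        x_true = np.array([10.0, 5.0, 8.0, 2.0])
        print(art(A, A @ x_true))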

  15. Further result in the fast and accurate estimation of single frequency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new fast and accurate method for estimating the frequency of a complex sinusoid in complex white Gaussian environments is proposed. The new estimator comprises applications of low-pass filtering, decimation, and frequency estimation by linear prediction. It is computationally efficient yet attains the Cramér-Rao bound at moderate signal-to-noise ratios, and it is well suited for real-time applications requiring precise frequency estimation. Simulation results are included to demonstrate the performance of the proposed method.

  16. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    Science.gov (United States)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that it is more efficient than conventional searching methods at achieving coarse frequency estimation (locating the peak of the FFT amplitude spectrum) by applying a modified zero-crossing technique. Thus, the proposed estimation algorithm requires fewer hardware and software resources and can achieve even higher efficiency when the experimental data increase. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.

  17. On the accurate estimation of gap fraction during daytime with digital cover photography

    Science.gov (United States)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered the computation of accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived with the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method resulted in accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.

  18. Simple, Fast and Accurate Photometric Estimation of Specific Star Formation Rate

    CERN Document Server

    Stensbo-Smidt, Kristoffer; Igel, Christian; Zirm, Andrew; Pedersen, Kim Steenstrup

    2015-01-01

    Large-scale surveys make huge amounts of photometric data available. Because of the sheer number of objects, spectral data cannot be obtained for all of them. Therefore it is important to devise techniques for reliably estimating physical properties of objects from photometric information alone. These estimates are needed to automatically identify interesting objects worth a follow-up investigation as well as to produce the required data for a statistical analysis of the space covered by a survey. We argue that machine learning techniques are suitable to compute these estimates accurately and efficiently. This study considers the task of estimating the specific star formation rate (sSFR) of galaxies. It is shown that a nearest neighbours algorithm can produce better sSFR estimates than traditional SED fitting. We show that we can obtain accurate estimates of the sSFR even at high redshifts using only broad-band photometry based on the u, g, r, i and z filters from the Sloan Digital Sky Survey (SDSS). We additionally…

  19. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Science.gov (United States)

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

    In exoskeletal robots, the quantification of the user’s muscular effort is important to recognize the user’s motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using joint torque sensors, which contain the measurements of dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can be used to estimate the muscular torque accurately in cases of relaxed and activated muscle conditions. PMID:25860074

  20. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Directory of Open Access Journals (Sweden)

    Beomsoo Hwang

    2015-04-01

    Full Text Available In exoskeletal robots, the quantification of the user’s muscular effort is important to recognize the user’s motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using joint torque sensors, which contain the measurements of dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can be used to estimate the muscular torque accurately in cases of relaxed and activated muscle conditions.
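
    The core computation described in both versions of this work is the subtraction of the modelled limb dynamics from the joint torque sensor reading. The sketch below shows that subtraction for a single-joint limb model; the inertia, damping and gravity parameters are placeholder assumptions, whereas the paper identifies user-specific values experimentally.

        # Sketch: muscular torque = sensed torque - modelled limb dynamics (1 DOF).
        import numpy as np

        def muscular_torque(tau_sensor, q, qd, qdd,
                            inertia=0.35, damping=0.05, mgl=14.0):
            """tau_muscle = tau_sensor - (I*qdd + b*qd + m*g*l*sin(q))."""
            tau_dynamics = inertia * qdd + damping * qd + mgl * np.sin(q)
            return tau_sensor - tau_dynamics

        # Hypothetical sample: 20 N*m sensed at 30 deg with small velocity/acceleration.
        print(muscular_torque(tau_sensor=20.0, q=np.deg2rad(30.0), qd=0.2, qdd=0.5))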

  1. Accurately Estimating the State of a Geophysical System with Sparse Observations: Predicting the Weather

    CERN Document Server

    An, Zhe; Abarbanel, Henry D I

    2014-01-01

    Utilizing the information in observations of a complex system to make accurate predictions through a quantitative model when observations are completed at time $T$, requires an accurate estimate of the full state of the model at time $T$. When the number of measurements $L$ at each observation time within the observation window is larger than a sufficient minimum value $L_s$, the impediments in the estimation procedure are removed. As the number of available observations is typically such that $L \\ll L_s$, additional information from the observations must be presented to the model. We show how, using the time delays of the measurements at each observation time, one can augment the information transferred from the data to the model, removing the impediments to accurate estimation and permitting dependable prediction. We do this in a core geophysical fluid dynamics model, the shallow water equations, at the heart of numerical weather prediction. The method is quite general, however, and can be utilized in the a...

  2. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. This method firstly uses three FFT samples to determine the frequency searching scope, then – besides the frequency – the estimated values of amplitude, phase and dc component are obtained by minimizing the least square (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated by different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
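
    The overall structure of the algorithm, a linear three-parameter sine fit nested inside a golden-section search over frequency, can be sketched as follows. The FFT-based bracketing is simplified to the neighbourhood of the spectral peak, and the signal parameters are invented; this is not the published DGSSA code.

        # Sketch: golden-section search over frequency with a 3-parameter sine fit.
        import numpy as np

        def sine_fit_residual(f, t, x):
            """Least-squares residual of x ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C."""
            M = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t),
                                 np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(M, x, rcond=None)
            return np.sum((x - M @ coef) ** 2)

        def golden_section(fun, a, b, tol=1e-8):
            g = (np.sqrt(5.0) - 1.0) / 2.0
            c, d = b - g * (b - a), a + g * (b - a)
            while b - a > tol:
                if fun(c) < fun(d):
                    b, d = d, c
                    c = b - g * (b - a)
                else:
                    a, c = c, d
                    d = a + g * (b - a)
            return 0.5 * (a + b)

        fs, n = 1000.0, 512
        t = np.arange(n) / fs
        x = 1.3 * np.sin(2 * np.pi * 123.4 * t + 0.7) + 0.2 + 0.05 * np.random.randn(n)
        k = np.argmax(np.abs(np.fft.rfft(x)[1:])) + 1      # coarse FFT peak bin
        df = fs / n
        f_hat = golden_section(lambda f: sine_fit_residual(f, t, x),
                               (k - 1) * df, (k + 1) * df)
        print(f_hat)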

  3. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images

    Science.gov (United States)

    Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  4. An Accurate Method for the BDS Receiver DCB Estimation in a Regional Network

    Directory of Open Access Journals (Sweden)

    LI Xin

    2016-08-01

    Full Text Available An accurate approach for receiver differential code bias (DCB) estimation is proposed using BDS data obtained from a regional tracking network. In contrast to conventional methods for BDS receiver DCB estimation, the proposed method does not require a complicated ionosphere model, as long as one reference station receiver DCB is known. The main idea of this method is that the ionospheric delay normally depends strongly on the geometric range between the BDS satellite and the receiver. Therefore, the DCBs of the non-reference station receivers in this regional area can be estimated using single differences (SD) with the reference stations. The numerical results show that the RMS of the errors of these estimated BDS receiver DCBs over 30 days is about 0.3 ns. Additionally, after deducting these estimated receiver DCBs and the known satellite DCBs, the extracted diurnal VTEC shows good agreement with the diurnal VTEC obtained from GIM interpolation, indicating the reliability of the estimated receiver DCBs.

  5. Extended Kalman Filter with a Fuzzy Method for Accurate Battery Pack State of Charge Estimation

    Directory of Open Access Journals (Sweden)

    Saeed Sepasi

    2015-06-01

    Full Text Available As the world moves toward greenhouse gas reduction, there is increasingly active work around Li-ion chemistry-based batteries as an energy source for electric vehicles (EVs), hybrid electric vehicles (HEVs), and smart grids. In these applications, the battery management system (BMS) requires an accurate online estimation of the state of charge (SOC) in a battery pack. This estimation is difficult, especially after substantial battery aging. In order to address this problem, this paper utilizes SOC estimation of Li-ion battery packs using a fuzzy-improved extended Kalman filter (fuzzy-IEKF) for Li-ion cells, regardless of their age. The proposed approach introduces a fuzzy method with a new class and associated membership function that determines an approximate initial value applied to SOC estimation. Subsequently, the EKF method is used by considering the single unit model for the battery pack to estimate the SOC for following periods of battery use. This approach uses an adaptive model algorithm to update the model for each single cell in the battery pack. To verify the accuracy of the estimation method, tests are done on a LiFePO4 aged battery pack consisting of 120 cells connected in series with a nominal voltage of 432 V.
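
    The EKF core of such an estimator can be sketched with a scalar model: coulomb counting as the state equation and an open-circuit-voltage curve as the measurement equation. Everything below (OCV curve, capacity, resistance, noise levels) is an illustrative assumption; the paper's fuzzy initialisation and pack-level adaptation are not reproduced.

        # Sketch: minimal scalar EKF for state of charge (SOC).
        import numpy as np

        Q_CELL = 20.0 * 3600.0        # assumed capacity in coulombs (20 Ah)
        R_INT = 0.01                  # assumed internal resistance (ohm)
        DT = 1.0                      # sample time (s)

        def ocv(soc):                 # toy open-circuit-voltage curve
            return 3.2 + 0.8 * soc

        def ekf_step(soc, P, current, v_meas, q_proc=1e-7, r_meas=1e-3):
            # Predict: coulomb counting (discharge current taken positive).
            soc_pred = soc - current * DT / Q_CELL
            P_pred = P + q_proc
            # Update with the terminal-voltage measurement.
            H = 0.8                                  # d OCV / d SOC for the toy curve
            v_pred = ocv(soc_pred) - R_INT * current
            K = P_pred * H / (H * P_pred * H + r_meas)
            soc_new = soc_pred + K * (v_meas - v_pred)
            return soc_new, (1.0 - K * H) * P_pred

        # Constant 10 A discharge from a true SOC of 0.9, deliberately poor initial guess.
        soc_est, P, true_soc = 0.5, 0.1, 0.9
        for _ in range(600):
            true_soc -= 10.0 * DT / Q_CELL
            v = ocv(true_soc) - R_INT * 10.0 + 1e-3 * np.random.randn()
            soc_est, P = ekf_step(soc_est, P, 10.0, v)
        print(soc_est, true_soc)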

  6. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    Science.gov (United States)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without history of any ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgeries, such as LASIK, or eyes diagnosed as keratoconus, these equations may cause significant postoperative refractive error, which may lead to poor satisfaction after cataract surgery. Although some methods have been carried out to solve this problem, such as the Haigis-L equation[1], or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations were available for these eyes. Here, we introduce a novel intraocular lens power estimation method by accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and for another LASIK postoperative patient matched their visual capacity after cataract surgery very well.

  7. Interpolated-DFT-Based Fast and Accurate Amplitude and Phase Estimation for the Control of Power

    Directory of Open Access Journals (Sweden)

    Borkowski Józef

    2016-03-01

    Full Text Available The quality of energy produced in renewable energy systems has to be at the high level specified by the respective standards and directives. One of the most important factors affecting quality is the estimation accuracy of grid signal parameters. This paper presents a method for very fast and accurate amplitude and phase estimation of the grid signal using the Fast Fourier Transform procedure and maximum sidelobe decay windows. The most important features of the method are the elimination of the impact of the conjugate component on the results and its straightforward implementation. Moreover, the measurement time is very short, even far less than one period of the grid signal. The influence of harmonics on the results is reduced by using a bandpass pre-filter. Even using a 40 dB FIR pre-filter, for a grid signal with THD ≈ 38%, SNR ≈ 53 dB and a 20-30% slowly decaying exponential drift, the maximum estimation errors in a real-time DSP system for 512 samples are approximately 1% for the amplitude and approximately 8.5·10⁻² rad for the phase, respectively. The errors are smaller by several orders of magnitude when using more accurate pre-filters.
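
    A small sketch of the windowed-FFT interpolation idea, assuming a Hann window (the two-term member of the maximum-sidelobe-decay family) and a dominant component away from DC and Nyquist; the pre-filtering and drift handling described in the abstract are omitted. Once the interpolated frequency is known, amplitude and phase follow from an ordinary linear least-squares fit.

```python
import numpy as np

def estimate_amp_phase(x, fs):
    """Estimate amplitude, phase and frequency of the dominant component."""
    n = len(x)
    w = np.hanning(n)
    spec = np.fft.rfft(x * w)
    k = np.argmax(np.abs(spec[1:])) + 1          # skip the DC bin
    k = min(k, len(spec) - 2)                    # keep a right-hand neighbour
    # Interpolation between bins k and k+1 for a Hann window
    alpha = np.abs(spec[k + 1]) / np.abs(spec[k])
    delta = (2.0 * alpha - 1.0) / (alpha + 1.0)
    freq = (k + delta) * fs / n
    # With the frequency fixed, amplitude and phase come from a linear
    # least-squares fit of cosine/sine components to the raw signal.
    t = np.arange(n) / fs
    basis = np.column_stack([np.cos(2 * np.pi * freq * t),
                             np.sin(2 * np.pi * freq * t)])
    c, s = np.linalg.lstsq(basis, x, rcond=None)[0]
    return np.hypot(c, s), np.arctan2(-s, c), freq
```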

  8. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for a Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  9. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  10. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    Directory of Open Access Journals (Sweden)

    Hu Yongxiang

    2016-01-01

    On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL data that are collocated with in-water optical measurements.

  11. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    Energy Technology Data Exchange (ETDEWEB)

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P. [Department of Mechanical Engineering, Imperial College, London, SW7 2AZ (United Kingdom)

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses; their performance is compared using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.

  12. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    Science.gov (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
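
    A minimal sketch of the correction logic, assuming a linear HR-to-VO2 calibration from the step test and a precomputed thermal HR component; variable names are illustrative and this is not the paper's exact implementation of Vogt et al.'s method.

```python
import numpy as np

def calibrate_hr_vo2(hr_step, vo2_step):
    """Least-squares fit of VO2 = a*HR + b from submaximal step-test data."""
    a, b = np.polyfit(hr_step, vo2_step, 1)
    return a, b

def estimate_work_vo2(hr_work, delta_hr_thermal, a, b):
    """Remove the thermal HR component, then apply the calibration line."""
    hr_corrected = np.asarray(hr_work, dtype=float) - delta_hr_thermal
    return a * hr_corrected + b
```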

  13. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    Science.gov (United States)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.

  14. Infiltration-excess overland flow estimated by TOPMODEL for the conterminous United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This 5-kilometer resolution raster (grid) dataset for the conterminous United States represents the average percentage of infiltration-excess overland flow in total...

  15. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera

    Science.gov (United States)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

    In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. On the other hand, a camera cannot obtain the rotation continuously when feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by using a reliability judgment of the camera rotations and devising the state value of the Extended Kalman Filter, the proposed method shows good performance even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.

  16. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
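
    A minimal sketch of the singleton-excluding Watterson-type estimator described above: under the infinite sites model E[S] = θ·a_n and the expected number of singletons is θ, so dropping singletons leaves E[S − singletons] = θ·(a_n − 1). Function and argument names are illustrative.

```python
import numpy as np

def theta_w_no_singletons(n_sequences, n_segregating, n_singletons):
    """Watterson-type estimate of theta that ignores singleton sites, which
    absorb most random sequencing errors under the infinite sites model."""
    a_n = np.sum(1.0 / np.arange(1, n_sequences))   # harmonic sum 1..n-1
    return (n_segregating - n_singletons) / (a_n - 1.0)

# Example: 10 sequences, 25 segregating sites of which 7 are singletons.
print(theta_w_no_singletons(10, 25, 7))
```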

  17. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Directory of Open Access Journals (Sweden)

    Alessandro Saccà

    Full Text Available Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
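
    A minimal sketch of the Archimedes-style relation for a cell that is approximately a solid of revolution about its major axis, where the inscribed-sphere/cylinder argument gives V = (2/3)·A·w for a spheroid (A = projected cross-sectional area, w = minor-axis width); the paper's 'unellipticity' coefficient is represented only as a placeholder correction factor.

```python
import math

def spheroid_biovolume(cross_section_area, width, unellipticity=1.0):
    """Biovolume estimate for an approximately rotationally symmetric cell.
    The 'unellipticity' argument is a placeholder correction (default 1)."""
    return (2.0 / 3.0) * cross_section_area * width * unellipticity

# Example: a prolate cell 10 um long and 4 um wide (A = pi*a*b with a=5, b=2)
area = math.pi * 5.0 * 2.0
print(spheroid_biovolume(area, 4.0))   # (4/3)*pi*5*2**2 ≈ 83.8 um^3
```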

  18. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Science.gov (United States)

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.

  19. The potential of more accurate InSAR covariance matrix estimation for land cover mapping

    Science.gov (United States)

    Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin

    2017-04-01

    Synthetic aperture radar (SAR) and Interferometric SAR (InSAR) provide both structural and electromagnetic information for the ground surface and therefore have been widely used for land cover classification. However, relatively few studies have developed analyses that investigate SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of the structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem and further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using the HH-polarization TerraSAR-X dataset and compared with each other using the random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from the enhancement of the quality of the observations, in addition to the classifier selection and multi-source data integration reported in previous studies.

  20. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    Science.gov (United States)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of diffuse attenuation Kd and beam attenuation C (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  1. mBEEF: An accurate semi-local Bayesian error estimation density functional

    Science.gov (United States)

    Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas

    2014-04-01

    We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

  2. Robust and Accurate Multiple-camera Pose Estimation Toward Robotic Applications

    Directory of Open Access Journals (Sweden)

    Yong Liu

    2014-09-01

    Full Text Available Pose estimation methods in robotics applications frequently suffer from inaccuracy due to a lack of correspondences and real-time constraints, and from instability across a wide range of viewpoints. In this paper, we present a novel approach for simultaneously estimating the poses of all the cameras in a multi-camera system, in which each camera is rigidly mounted, using only a few coplanar points. Instead of solving the orientation and translation for the multi-camera system directly from the overlapping point correspondences among all the cameras, we employ homography, which maps image points to 3D coplanar reference points. In our method, we first establish the corresponding relations between the cameras from their Euclidean geometries and optimize the homographies of the cameras; then, we solve the orientation and translation from the optimal homographies. The results from simulations and real-case experiments show that our approach is accurate and robust for implementation in robotics applications. Finally, a practical implementation in a ping-pong robot is described in order to confirm the validity of our approach.

  3. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Emanuele Viterbo

    2008-06-01

    Full Text Available This paper proposes a curve fitting technique for fast and accurate estimation of the perceived quality of streaming media contents, delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS is based on the well-known VQM objective metric, a powerful technique which is highly correlated to the more expensive and time consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate in real time the video PQoS and we can rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.

  4. Exclusion of measurements with excessive residuals (blunders) in estimating model parameters

    CERN Document Server

    Nikiforov, I I

    2013-01-01

    An adjustable algorithm of exclusion of conditional equations with excessive residuals is proposed. The criteria applied in the algorithm use variable exclusion limits which decrease as the number of equations goes down. The algorithm is easy to use, it possesses rapid convergence, minimal subjectivity, and high degree of generality.

  5. Accurate optical flow field estimation using mechanical properties of soft tissues

    Science.gov (United States)

    Mehrabian, Hatef; Karimi, Hirad; Samani, Abbas

    2009-02-01

    A novel optical flow based technique is presented in this paper to measure the nodal displacements of soft tissue undergoing large deformations. In hyperelasticity imaging, soft tissues may be compressed extensively [1] and the deformation may exceed the number of pixels that ordinary optical flow approaches can detect. Furthermore, in most biomedical applications there is a large amount of image information that represents the geometry of the tissue and the number of tissue types present in the organ of interest. Such information is often ignored in applications such as image registration. In this work we incorporate the information pertaining to soft tissue mechanical behavior (a Neo-Hookean hyperelastic model is used here), in addition to the tissue geometry before compression, into a hierarchical Horn-Schunck optical flow method to overcome this weakness in detecting large deformations. Applying the proposed method to a phantom at several compression levels showed that it yields reasonably accurate displacement fields. Estimated displacement results of this phantom study, obtained for displacement fields of 85 pixels/frame and 127 pixels/frame, are reported and discussed in this paper.

  6. Optimization of Correlation Kernel Size for Accurate Estimation of Myocardial Contraction and Relaxation

    Science.gov (United States)

    Honjo, Yasunori; Hasegawa, Hideyuki; Kanai, Hiroshi

    2012-07-01

    rates estimated using different kernel sizes were examined using the normalized mean-squared error of the estimated strain rate from the actual one obtained by the 1D phase-sensitive method. Compared with conventional kernel sizes, this result shows the possibility of the proposed correlation kernel to enable more accurate measurement of the strain rate. In in vivo measurement, the regional instantaneous velocities and strain rates in the radial direction of the heart wall were analyzed in detail at an extremely high temporal resolution (frame rate of 860 Hz). In this study, transition in contraction and relaxation was able to be detected by 2D tracking. These results indicate the potential of this method in the high-accuracy estimation of the strain rates and detailed analyses of the physiological function of the myocardium.

  7. Estimation of physical activity and prevalence of excessive body mass in rural and urban Polish adolescents.

    Science.gov (United States)

    Hoffmann, Karolina; Bryl, Wiesław; Marcinkowski, Jerzy T; Strażyńska, Agata; Pupek-Musialik, Danuta

    2011-01-01

    Excessive body mass and a sedentary lifestyle are well-known cardiovascular risk factors, which when present in the young population may have significant health consequences, both in the short and long term. The aim of the study was to evaluate the prevalence of overweight, obesity, and sedentary lifestyle in two teenage populations living in an urban or rural area. An additional aim was to compare their physical activity. The study was designed and conducted in 2009. The study population consisted of 116 students aged 15-17 years: 61 males (52.7%) and 55 females (47.3%), randomly selected from public junior grammar schools and secondary schools in the Poznań Region. There were 61 respondents from a rural area, 32 males (52.5%) and 29 females (47.5%), whereas 55 teenagers lived in an urban area, 29 males (52.7%) and 26 females (47.3%). Students were asked to complete a questionnaire, which was especially prepared for the study and contained questions concerning health and lifestyle. A basic physical examination was carried out in all 116 students, including measurements of the anthropometric features. Calculations were performed using the statistical package STATISTICA (data analysis software system), Version 8.0. When comparing these two populations, no statistically significant differences were detected in the weight-growth ratios, with the exception that the urban youths had a larger hip circumference (97.1 vs. 94.3 cm, p < 0.05). The problem of excessive weight affected both sexes in a similar proportion (25% of boys and 24.1% of girls, p > 0.05). This paper shows that there were differences concerning the physical activity of teenagers living in urban and rural areas: urban students much more often declared an active lifestyle (72.7% vs. 42.6%, p < 0.05) and physical activity (not counting compulsory physical education classes).

  8. Effectiveness of prediction equations in estimating energy expenditure sample of Brazilian and Spanish women with excess body weight

    Directory of Open Access Journals (Sweden)

    Eliane Lopes Rosado

    2014-03-01

    Full Text Available Objective: To assess the adequacy of predictive equations for the estimation of energy expenditure (EE), compared with EE measured by indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: It is a cross-sectional study of 92 obese adult women [26 Brazilian (G1) and 66 Spanish (G2), aged 20-50]. Weight and height were evaluated during fasting for the calculation of body mass index and the predictive equations. EE was evaluated using open-circuit indirect calorimetry with a respiratory hood. Results: In G1 and G2, it was found that the estimates obtained by the Harris-Benedict, Schofield, FAO/WHO/UNU and Henry & Rees equations did not differ from the EE measured by indirect calorimetry, which presented higher values than the equations proposed by Owen, Mifflin-St Jeor and Oxford. For G1 and G2 the predictive equation closest to the value obtained by indirect calorimetry was the FAO/WHO/UNU equation (7.9% and 0.46% underestimation, respectively), followed by Harris-Benedict (8.6% and 1.5% underestimation, respectively). Conclusion: The equations proposed by FAO/WHO/UNU, Harris-Benedict, Schofield and Henry & Rees were adequate to estimate the EE in a sample of Brazilian and Spanish women with excess body weight. The other equations underestimated the EE.
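
    For illustration, the classic Harris-Benedict equation for women, one of the predictive equations compared in this study (the sketch below is not the study's own code):

```python
def harris_benedict_women(weight_kg, height_cm, age_years):
    """Original Harris-Benedict resting energy expenditure for women, kcal/day."""
    return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_years

# Example: an 80 kg, 165 cm, 35-year-old woman
print(round(harris_benedict_women(80, 165, 35)))  # ≈ 1562 kcal/day
```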

  9. Analytical estimation of control rod shadowing effect for excess reactivity measurement of High Temperature Engineering Test Reactor (HTTR)

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, Masaaki; Yamashita, Kiyonobu; Fujimoto, Nozomu; Nojiri, Naoki; Takeuchi, Mitsuo; Fujisaki, Shingo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment; Tokuhara, Kazumi; Nakata, Tetsuo

    1998-05-01

    The control rod shadowing effect has been estimated analytically in application of the fuel addition method to excess reactivity measurement of High Temperature Engineering Test Reactor (HTTR). The movements of control rods in the procedure of the fuel addition method have been simulated in the analysis. The calculated excess reactivity obtained by the simulation depends on the combinations of measuring control rods and compensating control rods and varies from -10% to +50% in comparison with the excess reactivity calculated from the effective multiplication factor of the core where all control rods are fully withdrawn. The control rod shadowing effect is reduced by the use of plural number of measuring and compensation control rods because of the reduction in neutron flux deformation in the measuring procedure. As a result, following combinations of control rods are recommended; 1) Thirteen control rods of the center, first, and second rings will be used for the reactivity measurement. The reactivity of each control rod is measured by the use of the other twelve control rods for reactivity compensation. 2) Six control rods of the first ring will be used for the reactivity measurement. The reactivity of each control rod is measured by the use of the other five control rods for reactivity compensation. (author)

  10. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    Science.gov (United States)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.

  11. Accurate single-observer passive coherent location estimation based on TDOA and DOA

    Directory of Open Access Journals (Sweden)

    Li Jing

    2014-08-01

    Full Text Available This paper investigates the problem of target position estimation with a single-observer passive coherent location (PCL) system. An approach that combines angle with time difference of arrival (ATDOA) is used to estimate the location of a target. Compared with the TDOA-only method, which needs two steps, the proposed method estimates the target position more directly. The constrained total least squares (CTLS) technique is applied in this approach. It achieves the Cramer–Rao lower bound (CRLB) when the parameter measurements are subject to small Gaussian-distributed errors. Performance analysis and the CRLB of this approach are also studied. Theory verifies that the ATDOA method achieves a lower CRLB than the TDOA-only method with the same TDOA measuring error. It can also be seen that the position of the target affects the estimation precision. At the same time, the locations of the transmitters affect the precision and its gradient direction. Compared with TDOA, the ATDOA method can obtain a more precise target position estimate. Furthermore, the proposed method accomplishes target position estimation with a single transmitter, while the TDOA-only method needs at least four transmitters to get the target position. Finally, the transmitters' position errors also affect the precision of the estimation in a regular manner.

  12. Accurate single-observer passive coherent location estimation based on TDOA and DOA

    Institute of Scientific and Technical Information of China (English)

    Li Jing; Zhao Yongjun; Li Donghai

    2014-01-01

    This paper investigates the problem of target position estimation with a single-observer passive coherent location (PCL) system. An approach that combines angle with time difference of arrival (ATDOA) is used to estimate the location of a target. Compared with the TDOA-only method, which needs two steps, the proposed method estimates the target position more directly. The constrained total least squares (CTLS) technique is applied in this approach. It achieves the Cramer-Rao lower bound (CRLB) when the parameter measurements are subject to small Gaussian-distributed errors. Performance analysis and the CRLB of this approach are also studied. Theory verifies that the ATDOA method achieves a lower CRLB than the TDOA-only method with the same TDOA measuring error. It can also be seen that the position of the target affects the estimation precision. At the same time, the locations of the transmitters affect the precision and its gradient direction. Compared with TDOA, the ATDOA method can obtain a more precise target position estimate. Furthermore, the proposed method accomplishes target position estimation with a single transmitter, while the TDOA-only method needs at least four transmitters to get the target position. Finally, the transmitters' position errors also affect the precision of the estimation in a regular manner.

  13. Accurate performance estimators for information retrieval based on span bound of support vector machines

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Support vector machines have met with significant success in the information retrieval field, especially in handling text classification tasks. Although various performance estimators for SVMs have been proposed, these only focus on accuracy, which is based on the leave-one-out cross validation procedure. Information-retrieval-related performance measures are always neglected in a kernel learning methodology. In this paper, we have proposed a set of information-retrieval-oriented performance estimators for SVMs, which are based on the span bound of the leave-one-out procedure. Experiments have proven that our proposed estimators are both effective and stable.

  14. Accurate and efficient velocity estimation using Transmission matrix formalism based on the domain decomposition method

    Science.gov (United States)

    Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong

    2017-03-01

    Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for the subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are dependent on the initial model, while this problem can be avoided by introducing the Transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time-consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of the velocity estimation for models with large perturbations, as well as to guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed time all demonstrate the validity of the proposed DDM.

  15. The GFR and GFR decline cannot be accurately estimated in type 2 diabetics.

    Science.gov (United States)

    Gaspari, Flavio; Ruggenenti, Piero; Porrini, Esteban; Motterlini, Nicola; Cannata, Antonio; Carrara, Fabiola; Jiménez Sosa, Alejandro; Cella, Claudia; Ferrari, Silvia; Stucchi, Nadia; Parvanova, Aneliya; Iliev, Ilian; Trevisan, Roberto; Bossi, Antonio; Zaletel, Jelka; Remuzzi, Giuseppe

    2013-07-01

    There are no adequate studies that have formally tested the performance of different estimating formulas in patients with type 2 diabetes both with and without overt nephropathy. Here we evaluated the agreement between baseline GFRs, GFR changes at month 6, and long-term GFR decline measured by iohexol plasma clearance or estimated by 15 creatinine-based formulas in 600 type 2 diabetics followed for a median of 4.0 years. Ninety patients were hyperfiltering. The number of those identified by the estimation formulas ranged from 0 to 24; 58 were not identified by any formula. Baseline GFR was significantly underestimated and a 6-month GFR reduction was missed in hyperfiltering patients. Long-term GFR decline was also underestimated by all formulas in the whole study group and in hyper-, normo-, and hypofiltering patients considered separately. Five formulas generated positive slopes in hyperfiltering patients. Baseline concordance correlation coefficients and total deviation indexes ranged from 32.1% to 92.6% and from 0.21 to 0.53, respectively. Concordance correlation coefficients between estimated and measured long-term GFR decline ranged from -0.21 to 0.35. The agreement between estimated and measured values was also poor within each subgroup considered separately. Thus, our study questions the use of any estimation formula to identify hyperfiltering patients and to monitor renal disease progression and response to treatment in type 2 diabetics without overt nephropathy.

  16. Comparing the standards of one metabolic equivalent of task in accurately estimating physical activity energy expenditure based on acceleration.

    Science.gov (United States)

    Kim, Dohyun; Lee, Jongshill; Park, Hoon Ki; Jang, Dong Pyo; Song, Soohwa; Cho, Baek Hwan; Jung, Yoo-Suk; Park, Rae-Woong; Joo, Nam-Seok; Kim, In Young

    2016-08-24

    The purpose of the study is to analyse how the standard used for the resting metabolic rate (RMR) affects the estimation of the metabolic equivalent of task (MET) using an accelerometer. In order to investigate the effect on estimation according to the intensity of activity, comparisons were conducted between 3.5 ml O2·kg⁻¹·min⁻¹ and the individually measured resting VO2 as the standard for 1 MET. MET was estimated by linear regression equations that were derived through five-fold cross-validation using the two types of MET values and accelerations; the accuracy of estimation was analysed through cross-validation, Bland-Altman plots, and a one-way ANOVA test. There were no significant differences in the RMS error after cross-validation. However, the individual RMR-based estimations showed mean differences of as much as 0.5 METs in modified Bland-Altman plots compared with those based on an RMR of 3.5 ml O2·kg⁻¹·min⁻¹. Finally, the results of the ANOVA test indicated that the individual RMR-based estimations had less significant differences between the reference and estimated values at each intensity of activity. In conclusion, the RMR standard is a factor that affects accurate estimation of METs from acceleration; therefore, the RMR should be individually specified when it is used for estimation of METs using an accelerometer.

  17. Accurate kinetic parameter estimation during progress curve analysis of systems with endogenous substrate production.

    Science.gov (United States)

    Goudar, Chetan T

    2011-10-01

    We have identified an error in the published integral form of the modified Michaelis-Menten equation that accounts for endogenous substrate production. The correct solution is presented and the errors in both the substrate concentration, S, and the kinetic parameters Vm, Km, and R resulting from the incorrect solution were characterized. The incorrect integral form resulted in substrate concentration errors as high as 50%, resulting in 7-50% error in kinetic parameter estimates. To better reflect experimental scenarios, noise-containing substrate depletion data were analyzed with both the incorrect and correct integral equations. While both equations resulted in identical fits to substrate depletion data, the final estimates of Vm, Km, and R were different, and the Km and R estimates from the incorrect integral equation deviated substantially from the actual values. Another observation was that at R = 0, the incorrect integral equation reduced to the correct form of the Michaelis-Menten equation. We believe this combination of excellent fits to experimental data, albeit with incorrect kinetic parameter estimates, and the reduction to the Michaelis-Menten equation at R = 0 is primarily responsible for the error going unnoticed. However, the resulting error in kinetic parameter estimates will lead to incorrect biological interpretation, and we urge the use of the correct integral form presented in this study.
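
    One way to sidestep the analytic integral entirely is to fit the underlying ODE, dS/dt = −Vm·S/(Km + S) + R, numerically; the following generic sketch (not the paper's corrected integral form) does this with SciPy, using illustrative starting guesses.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def simulate_s(t, vm, km, r, s0):
    """Integrate dS/dt = -vm*S/(km + S) + r from the initial concentration s0."""
    sol = solve_ivp(lambda _, s: -vm * s / (km + s) + r,
                    (t[0], t[-1]), [s0], t_eval=t, rtol=1e-8)
    return sol.y[0]

def fit_progress_curve(t, s_obs, s0):
    """Least-squares fit of Vm, Km and R to an observed substrate progress curve."""
    popt, _ = curve_fit(lambda tt, vm, km, r: simulate_s(tt, vm, km, r, s0),
                        t, s_obs, p0=[1.0, 1.0, 0.1], maxfev=5000)
    return dict(zip(("Vm", "Km", "R"), popt))
```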

  18. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta

    Directory of Open Access Journals (Sweden)

    Rahmann Sven

    2004-06-01

    Full Text Available Abstract Background In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) then inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML and MrBayes) reveals seven well supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover, the accuracy is significantly improved, as shown by parametric bootstrap.

  19. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    Science.gov (United States)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
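
    A simplified sketch of the idea behind binning with hash tables: the histogram is keyed by the tuple of bin indices, so memory grows with the number of occupied bins rather than with the number of bins raised to the dimensionality. It is written in Python for brevity, whereas the paper's implementation is in C++.

```python
from collections import defaultdict
import numpy as np

def hashed_density(points, bin_width):
    """Histogram density estimator backed by a hash table of occupied bins."""
    points = np.asarray(points, dtype=float)
    n, d = points.shape
    counts = defaultdict(int)
    for p in points:
        counts[tuple(np.floor(p / bin_width).astype(int))] += 1
    bin_volume = bin_width ** d

    def density(x):
        key = tuple(np.floor(np.asarray(x, dtype=float) / bin_width).astype(int))
        return counts.get(key, 0) / (n * bin_volume)

    return density

# Example in 5 dimensions: only occupied bins consume memory.
rng = np.random.default_rng(0)
pdf = hashed_density(rng.normal(size=(10000, 5)), bin_width=0.5)
print(pdf(np.zeros(5)))
```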

  20. A Void Reference Sensor-Multiple Signal Classification Algorithm for More Accurate Direction of Arrival Estimation of Low Altitude Target

    Institute of Scientific and Technical Information of China (English)

    XIAO Hui; SUN Jin-cai; YUAN Jun; NIU Yi-long

    2007-01-01

    There exists a MUSIC (multiple signal classification) algorithm for direction of arrival (DOA) estimation. This paper presents a different MUSIC algorithm for more accurate estimation of low altitude targets. The possibility of better performance is analyzed using a void reference sensor (VRS) in the MUSIC algorithm. The following two topics are discussed: 1) the time delay formula and the VRS-MUSIC algorithm with the VRS located on the negative z-axis; 2) the DOA estimation results of the VRS-MUSIC and MUSIC algorithms. The simulation results show that the VRS-MUSIC algorithm has three advantages compared with MUSIC: 1) when the signal to noise ratio (SNR) is more than -5 dB, the direction estimation error is half of that obtained by MUSIC; 2) the side lobe is lower and the stability is better; 3) the size of array that the algorithm requires is smaller.

  1. A Generalized Subspace Least Mean Square Method for High-resolution Accurate Estimation of Power System Oscillation Modes

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Peng; Zhou, Ning; Abdollahi, Ali

    2013-09-10

    A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.

  2. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer;

    2016-01-01

    angle SD of 1.8°. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5° with SDs around 1°. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo...

  3. Do wavelet filters provide more accurate estimates of reverberation times at low frequencies

    DEFF Research Database (Denmark)

    Sobreira Seoane, Manuel A.; Pérez Cabo, David; Agerkvist, Finn T.

    2016-01-01

    the continuous wavelet transform (CWT) has been implemented using a Morlet mother function. Although in general the wavelet filter bank performs better than the usual filters, the influence of decaying modes outside the filter bandwidth on the measurements has been detected, leading to a biased estimation...

  4. Fast and accurate haplotype frequency estimation for large haplotype vectors from pooled DNA data

    Directory of Open Access Journals (Sweden)

    Iliadis Alexandros

    2012-10-01

    Full Text Available Abstract Background Typically, the first phase of a genome wide association study (GWAS) includes genotyping across hundreds of individuals and validation of the most significant SNPs. Allelotyping of pooled genomic DNA is a common approach to reduce the overall cost of the study. Knowledge of haplotype structure can provide additional information to single locus analyses. Several methods have been proposed for estimating haplotype frequencies in a population from pooled DNA data. Results We introduce a technique for haplotype frequency estimation in a population from pooled DNA samples, focusing on datasets containing a small number of individuals per pool (2 or 3 individuals) and a large number of markers. We compare our method with the publicly available state-of-the-art algorithms HIPPO and HAPLOPOOL on datasets of varying numbers of pools and marker sizes. We demonstrate that our algorithm provides improvements in terms of accuracy and computational time over competing methods for large numbers of markers, while demonstrating comparable performance for smaller marker sizes. Our method is implemented in the "Tree-Based Deterministic Sampling Pool" (TDSPool) package, which is available for download at http://www.ee.columbia.edu/~anastas/tdspool. Conclusions Using a tree-based deterministic sampling technique we present an algorithm for haplotype frequency estimation from pooled data. Our method demonstrates superior performance in datasets with a large number of markers and could be the method of choice for haplotype frequency estimation in such datasets.

  5. Accurate estimation of influenza epidemics using Google search data via ARGO.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
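
    A rough sketch in the spirit of ARGO: regress current %ILI on its own lags plus contemporaneous search-term frequencies with an L1 penalty. The lag count, penalty value, and the absence of log transforms and of the dynamic re-training window used by the real model are all simplifying assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso

def argo_style_fit(ili, search_terms, ar_lags=52, alpha=0.01):
    """Fit a lagged, L1-penalised regression of %ILI on its own history and
    search-term frequencies (rows of `search_terms` aligned in time with `ili`)."""
    ili = np.asarray(ili, dtype=float)
    x_terms = np.asarray(search_terms, dtype=float)
    rows, targets = [], []
    for t in range(ar_lags, len(ili)):
        rows.append(np.concatenate([ili[t - ar_lags:t], x_terms[t]]))
        targets.append(ili[t])
    model = Lasso(alpha=alpha, max_iter=10000)
    model.fit(np.array(rows), np.array(targets))
    return model
```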

  6. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes...

  7. Multiple candidates and multiple constraints based accurate depth estimation for multi-view stereo

    Science.gov (United States)

    Zhang, Chao; Zhou, Fugen; Xue, Bindang

    2017-02-01

    In this paper, we propose a depth estimation method for multi-view image sequences. To enhance the accuracy of dense matching and reduce the inaccurate matches produced by inaccurate feature description, we select multiple matching points to build candidate matching sets. Then we compute an optimal depth from a candidate matching set which satisfies multiple constraints (epipolar constraint, similarity constraint and depth consistency constraint). To further increase the accuracy of depth estimation, the depth consistency constraint of neighboring pixels is used to filter out inaccurate matches. On this basis, in order to obtain a more complete depth map, depth diffusion is performed using the depth consistency constraint of neighboring pixels. Through experiments on the benchmark datasets for multiple view stereo, we demonstrate the superiority of the proposed method over the state-of-the-art method in terms of accuracy.

  8. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Directory of Open Access Journals (Sweden)

    Craig Costion

    Full Text Available BACKGROUND: Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and the low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. METHODOLOGY/PRINCIPAL FINDINGS: Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation observed in some angiosperm families, which occurs as an inversion that obscures the monophyly of species. CONCLUSIONS/SIGNIFICANCE: We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.

  9. Accurate covariance estimation of galaxy-galaxy weak lensing: limitations of jackknife covariance

    CERN Document Server

    Shirasaki, Masato; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma

    2016-01-01

    We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations. We populate a real catalog of source galaxies into a light-cone simulation realization, simulate the lensing effect on each galaxy, and then identify lensing halos that are considered to host galaxies or clusters of interest. We use the mock catalog to study the error covariance matrix of galaxy-galaxy weak lensing and find that the super-sample covariance (SSC), which arises from density fluctuations with length scales comparable with or greater than a size of survey area, gives a dominant source of the sample variance. We then compare the full covariance with the jackknife (JK) covariance, the method that estimates the covariance from the resamples of the data itself. We show that, although the JK method gives an unbiased estimator of the covariance in the shot noise or Gaussian regime, it always over-estimates the true covariance in the sample variance regime, because the JK covariance turns out to be a...
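
    The delete-one jackknife covariance that the record evaluates is straightforward to compute once the lensing signal has been re-measured with each spatial sub-region omitted in turn. A minimal numpy sketch under assumed array names; it illustrates the estimator itself, not the mock-catalogue machinery of the paper.

        import numpy as np

        def jackknife_covariance(stat_delete_one):
            """Delete-one jackknife covariance matrix.

            stat_delete_one : (N, d) array, the binned signal re-measured N times,
                              each time leaving out one of N survey sub-regions.
            """
            n = stat_delete_one.shape[0]
            diff = stat_delete_one - stat_delete_one.mean(axis=0)
            return (n - 1) / n * diff.T @ diff   # note the (N-1)/N jackknife factor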

  10. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research

    Directory of Open Access Journals (Sweden)

    Miguel Angel Luque-Fernandez

    2016-10-01

    Full Text Available Abstract Background In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion arising from the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. Methods We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. Results All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2 versus 21.3 for non-flexible piecewise exponential models). Conclusion We showed that there were no major differences between methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
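
    As a rough illustration of the workflow described above (not the authors' relative-survival code), the sketch below fits a Poisson rate model with a person-time offset, checks the Pearson dispersion, and then refits with robust (sandwich) standard errors or a negative binomial variance function. The arrays deaths, person_time and X are assumed to be prepared beforehand, and the subtraction of expected background mortality used in excess-mortality models is not shown.

        import numpy as np
        import statsmodels.api as sm

        poisson = sm.GLM(deaths, X, family=sm.families.Poisson(),
                         offset=np.log(person_time)).fit()
        dispersion = poisson.pearson_chi2 / poisson.df_resid
        print("Pearson dispersion:", dispersion)   # values >> 1 suggest overdispersion

        # correction 1: keep the Poisson mean model, use robust standard errors
        robust = sm.GLM(deaths, X, family=sm.families.Poisson(),
                        offset=np.log(person_time)).fit(cov_type="HC0")

        # correction 2: negative binomial mean-variance relationship
        negbin = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(alpha=1.0),
                        offset=np.log(person_time)).fit()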

  11. Rapid and Accurate Estimates of Alloy Phase Diagrams for Design and Assessment

    Science.gov (United States)

    Tan, Teck; Johnson, Duane

    2009-03-01

    Based on first-principles cluster expansion (CE), we obtain rapid but accurate assessments of alloy T vs c phase diagrams from a mean-field theory that conserves sum rules over pair correlations. Such conserving mean-field theories are less complicated than the popular cluster variation method, and better reproduce the Monte Carlo (MC) phase boundaries and Tc for the nearest-neighbor Ising model [1]. The free-energy f(T,c) is a simple analytic expression and its value at fixed T or c is obtained by solving a set of n non-linear coupled equations, where n is determined by the number of sublattices in the groundstate structure and the range of pair correlations included. While MC is ``exact,'' conserving mean-field theories are 10 to 10^3 times faster, allowing for rapid phase diagram construction and dramatically saving computation time. We have generalized the method to account for multibody interactions to enable phase diagram calculations via first-principles CE, and its accuracy is shown vis-à-vis exact MC for several alloy systems. The method is included in our Thermodynamic ToolKit (TTK), available for general use in 2009. [1] V. I. Tokar, Comput. Mater. Sci. 8 (1997), p.8

  12. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    Indian Academy of Sciences (India)

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited due to several error sources. The major error is the ionosphere. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits the positional accuracy is instrumental biases. Calibration of these biases is particularly important in achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th order polynomial. The algorithm uses values corresponding to a single station for a one month period and the results confirm the validity of the algorithm. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias error is of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31 respectively. It is found that the results are consistent over the period.
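
    The estimation step itself amounts to an overdetermined linear system: each slant TEC observation is modelled as an obliquity factor times a low-order polynomial in the pierce-point coordinates, plus one combined satellite-plus-receiver bias term. The sketch below is only an illustrative formulation with assumed array names and a crude mapping function; the GAGAN-specific weighting and one-month batch processing are not reproduced.

        import numpy as np

        def design_row(lat, lon, elev_deg, sat_idx, n_sat, order=4):
            # polynomial terms in pierce-point latitude/longitude up to the given order,
            # scaled by a (crude) obliquity factor converting vertical TEC to slant TEC
            mf = 1.0 / np.sin(np.radians(elev_deg))
            poly = [lat**i * lon**j for i in range(order + 1) for j in range(order + 1 - i)]
            bias = np.zeros(n_sat)
            bias[sat_idx] = 1.0          # combined satellite-plus-receiver bias column
            return np.concatenate([mf * np.array(poly), bias])

        # A = np.vstack([design_row(...) for each observation]); b = measured slant TEC
        # coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        # the trailing n_sat entries of coeffs are the estimated biases (in TEC units)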

  13. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    Science.gov (United States)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle θ. Here we address the depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of the previous two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
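
    The triangulation step at the heart of this approach, recovering a 3D point (here the needle tip) from its pixel coordinates in two calibrated projections, is standard multi-view geometry. A minimal linear (DLT) sketch, assuming known 3x4 projection matrices, rather than the authors' full pipeline:

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one 3D point.

            P1, P2 : 3x4 projection matrices of two C-arm views
            x1, x2 : (u, v) pixel coordinates of the same point in each view
            """
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]       # homogeneous -> Euclidean coordinates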

  14. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Science.gov (United States)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34μm and 108μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  15. [Research on maize multispectral image accurate segmentation and chlorophyll index estimation].

    Science.gov (United States)

    Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e

    2015-01-01

    In order to rapidly acquire maize growing information in the field, a non-destructive method of maize chlorophyll content index measurement was conducted based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China and the crop was Zheng-dan 958 planted in an approximately 1 000 m X 600 m experimental field. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically above the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to represent the chlorophyll content index. Secondly, after image smoothing using an adaptive smoothing filter algorithm, the NIR maize image was selected to segment the maize leaves from the background, because there was a large difference in the gray histogram between plant and soil background. The NIR image segmentation algorithm was conducted following the steps of preliminary and accurate segmentation: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were discussed. It was revealed that the latter was the better one for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation. Dilation and erosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. And then, the multi-spectral image of the maize canopy was accurately segmented in the R, G and B bands separately. Thirdly, the image parameters were abstracted based on the segmented visible and NIR images. The average gray
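
    The two-stage segmentation described above (a global Otsu threshold compared against a locally varying threshold, followed by morphological clean-up and region labelling) corresponds closely to standard OpenCV operations. A sketch with an assumed 8-bit single-band NIR image nir; parameters such as the window size and minimum region area are illustrative only.

        import cv2
        import numpy as np

        # global Otsu threshold
        _, mask_otsu = cv2.threshold(nir, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # variable threshold based on local statistics (local mean over a 51x51 window)
        mask_local = cv2.adaptiveThreshold(nir, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                           cv2.THRESH_BINARY, 51, -5)

        # morphological opening/closing to remove speckle and fill small holes
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask_local, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

        # region labelling: keep sufficiently large connected components as plant regions
        n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        keep = np.where(stats[1:, cv2.CC_STAT_AREA] > 500)[0] + 1
        plant_mask = np.isin(labels, keep)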

  16. Accurate parameter estimation for star formation history in galaxies using SDSS spectra

    CERN Document Server

    Richards, Joseph W; Lee, Ann B; Schafer, Chad M

    2009-01-01

    To further our knowledge of the complex physical process of galaxy formation, it is essential that we characterize the formation and evolution of large databases of galaxies. The spectral synthesis STARLIGHT code of Cid Fernandes et al. (2004) was designed for this purpose. Results of STARLIGHT are highly dependent on the choice of input basis of simple stellar population (SSP) spectra. Speed of the code, which uses random walks through the parameter space, scales as the square of the number of basis spectra, making it computationally necessary to choose a small number of SSPs that are coarsely sampled in age and metallicity. In this paper, we develop methods based on diffusion map (Lafon & Lee, 2006) that, for the first time, choose appropriate bases of prototype SSP spectra from a large set of SSP spectra designed to approximate the continuous grid of age and metallicity of SSPs of which galaxies are truly composed. We show that our techniques achieve better accuracy of physical parameter estimation for...

  17. Error Estimation And Accurate Mapping Based ALE Formulation For 3D Simulation Of Friction Stir Welding

    Science.gov (United States)

    Guerdoux, Simon; Fourment, Lionel

    2007-05-01

    An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.

  18. Accurate and robust estimation of phase error and its uncertainty of 50 GHz bandwidth sampling circuit

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper discusses the dependence of the phase error on the 50 GHz bandwidth oscilloscope's sampling circuitry. We define the phase error as the difference between the impulse response of the NTN (nose-to-nose) estimate and the true response of the sampling circuit. We develop a method to predict the NTN phase response arising from the internal sampling circuitry of the oscilloscope. For the default sampling-circuit configuration that we examine, the phase error is approximately 7.03 at 50 GHz. We study the sensitivity of the oscilloscope's phase response to parametric changes in sampling-circuit component values. We develop procedures to quantify the sensitivity of the phase error to each component and to combinations of components, taking the fractional uncertainty in each of the model parameters to be the same value, 10%. We predict the upper and lower bounds of the phase error, that is, we vary all of the circuit parameters simultaneously in such a way as to increase the phase error, and then vary all of the circuit parameters to decrease the phase error. Based on a Type B evaluation, this method quantifies the influence of all parameters of the sampling circuit and gives a standard uncertainty of 1.34. This result is obtained for the first time and has important practical uses. It can be used for phase calibration in 50 GHz bandwidth large signal network analyzers (LSNAs).

  19. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    CERN Document Server

    Savanevych, V E; Sokovikova, N S; Bezkrovny, M M; Vavilova, I B; Ivashchenko, Yu M; Elenin, L V; Khlamov, S V; Movsesian, Ia S; Dashkova, A M; Pogorelov, A V

    2015-01-01

    We describe a new iteration method to estimate asteroid coordinates, which is based on the subpixel Gaussian model of a discrete object image. The method operates by continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the kind of coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The developed method, being more flexible in adapting to any form of the object image, has a high measurement accuracy along with a low calculating complexity due to a maximum likelihood procedure, which is implemented to obtain the best fit instead of a least-squares method and Levenberg-Marquardt algorithm for the minimisation of the quadratic form. Since 2010, the method was tested as the basis of our CoLiTec (Collection Light Technology) software, which has been installed at several observatories of the world with the ai...

  20. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    Science.gov (United States)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

    We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates by continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the kind of coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method that is developed, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low calculating complexity, due to the maximum-likelihood procedure that is implemented to obtain the best fit instead of a least-squares method and Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1(Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
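
    The heart of the subpixel Gaussian model is fitting a continuous two-dimensional Gaussian to the discrete pixel counts of the object image and reading off the fitted centre as the asteroid's coordinates. The sketch below is a generic least-squares version of that idea with assumed names; the COLITEC implementation itself uses a maximum-likelihood fit over the pixel potentials, as described above.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amp, x0, y0, sigma, background):
            x, y = coords
            return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
                    + background).ravel()

        def subpixel_centre(stamp):
            """Fit a circular 2D Gaussian to a small cut-out around the object."""
            ny, nx = stamp.shape
            y, x = np.mgrid[0:ny, 0:nx]
            p0 = [stamp.max() - np.median(stamp),          # amplitude
                  nx / 2, ny / 2, 1.5, np.median(stamp)]   # x0, y0, sigma, background
            popt, _ = curve_fit(gauss2d, (x, y), stamp.ravel(), p0=p0)
            return popt[1], popt[2]   # subpixel (x, y) coordinates of the object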

  1. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Science.gov (United States)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules

  2. Linear-In-The-Parameters Oblique Least Squares (LOLS) Provides More Accurate Estimates of Density-Dependent Survival

    Science.gov (United States)

    Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure

    2016-01-01

    Survival is a fundamental demographic component and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of the survival function parameters, while remaining robust to the data aspects that usually make regression methods numerically unstable. PMID:27936048
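
    For comparison, the most familiar model II alternative to vertical least squares is orthogonal (total) least squares, which minimises perpendicular rather than vertical distances to the fitted line. The sketch below implements that standard technique via the SVD; it is not the authors' iterative two-step non-linear oblique method, only a simple illustration of why model II fits differ from ordinary regression.

        import numpy as np

        def total_least_squares_line(x, y):
            """Fit y = a + b*x minimising orthogonal (not vertical) residuals."""
            xm, ym = x.mean(), y.mean()
            centred = np.column_stack([x - xm, y - ym])
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            nx_, ny_ = vt[-1]          # normal vector of the best-fit line
            b = -nx_ / ny_             # slope
            a = ym - b * xm            # intercept
            return a, b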

  3. An Accurate Computational Tool for Performance Estimation of FSO Communication Links over Weak to Strong Atmospheric Turbulent Channels

    Directory of Open Access Journals (Sweden)

    Theodore D. Katsilieris

    2017-03-01

    Full Text Available Terrestrial optical wireless communication links have attracted significant research and commercial interest worldwide over the last few years due to the fact that they offer very high and secure data rate transmission with relatively low installation and operational costs, and without need of licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on the rain, the fog, the hail, the atmospheric turbulence, etc. Due to the influence of these effects, it is necessary to study such a communication system very carefully, theoretically and numerically, before its installation. In this work, we present exact and accurate approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link's parameters, the transmitted power, the attenuation due to fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver's end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma–gamma distribution for weak or moderate to strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, implement and present a computational tool for the estimation of these systems' performance, while also taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using
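
    In the weak-turbulence regime mentioned above, the received irradiance is modelled as lognormal, so the outage probability is simply the lognormal CDF evaluated at the threshold irradiance. A small sketch under assumed parameter values; the gamma-gamma case for moderate-to-strong turbulence would use the corresponding CDF instead.

        import numpy as np
        from scipy import stats

        def outage_probability_lognormal(i_threshold, scintillation_index, i_mean=1.0):
            """P(I < i_threshold) for lognormally distributed received irradiance.

            ln(I) ~ Normal(mu, sigma2) with sigma2 = ln(1 + scintillation_index)
            and mu chosen so that the mean irradiance equals i_mean.
            """
            sigma2 = np.log(1.0 + scintillation_index)
            mu = np.log(i_mean) - sigma2 / 2.0
            return stats.norm.cdf((np.log(i_threshold) - mu) / np.sqrt(sigma2))

        # e.g. a 3 dB margin (threshold at half the mean irradiance), sigma_I^2 = 0.2
        print(outage_probability_lognormal(0.5, 0.2))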

  4. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Directory of Open Access Journals (Sweden)

    Abel B Minyoo

    2015-12-01

    Full Text Available In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.

  5. Highly accurate and efficient self-force computations using time-domain methods: Error estimates, validation, and optimization

    CERN Document Server

    Thornburg, Jonathan

    2010-01-01

    If a small "particle" of mass $\mu M$ (with $\mu \ll 1$) orbits a Schwarzschild or Kerr black hole of mass $M$, the particle is subject to an $\mathcal{O}(\mu)$ radiation-reaction "self-force". Here I argue that it's valuable to compute this self-force highly accurately (relative error of $\lesssim 10^{-6}$) and efficiently, and I describe techniques for doing this and for obtaining and validating error estimates for the computation. I use an adaptive-mesh-refinement (AMR) time-domain numerical integration of the perturbation equations in the Barack-Ori mode-sum regularization formalism; this is efficient, yet allows easy generalization to arbitrary particle orbits. I focus on the model problem of a scalar particle in a circular geodesic orbit in Schwarzschild spacetime. The mode-sum formalism gives the self-force as an infinite sum of regularized spherical-harmonic modes $\sum_{\ell=0}^\infty F_{\ell,\mathrm{reg}}$, with $F_{\ell,\mathrm{reg}}$ (and an "internal" error estimate) computed numerically for $\ell \lesssim 30$ and estimated ...

  6. Bisulfite-based epityping on pooled genomic DNA provides an accurate estimate of average group DNA methylation

    Directory of Open Access Journals (Sweden)

    Docherty Sophia J

    2009-03-01

    Full Text Available Abstract Background DNA methylation plays a vital role in normal cellular function, with aberrant methylation signatures being implicated in a growing number of human pathologies and complex human traits. Methods based on the modification of genomic DNA with sodium bisulfite are considered the 'gold-standard' for DNA methylation profiling on genomic DNA; however, they require relatively large amounts of DNA and may be prohibitively expensive when used on the large sample sizes necessary to detect small effects. We propose that a high-throughput DNA pooling approach will facilitate the use of emerging methylomic profiling techniques in large samples. Results Compared with data generated from 89 individual samples, our analysis of 205 CpG sites spanning nine independent regions of the genome demonstrates that DNA pools can be used to provide an accurate and reliable quantitative estimate of average group DNA methylation. Comparison of data generated from the pooled DNA samples with results averaged across the individual samples comprising each pool revealed highly significant correlations for individual CpG sites across all nine regions, with an average overall correlation across all regions and pools of 0.95 (95% bootstrapped confidence intervals: 0.94 to 0.96). Conclusion In this study we demonstrate the validity of using pooled DNA samples to accurately assess group DNA methylation averages. Such an approach can be readily applied to the assessment of disease phenotypes reducing the time, cost and amount of DNA starting material required for large-scale epigenetic analyses.

  7. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    Directory of Open Access Journals (Sweden)

    Kostelich Eric J

    2011-12-01

    Full Text Available Abstract Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck).
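
    The analysis (update) step that such filters perform can be written compactly. The sketch below is a generic stochastic ensemble Kalman filter update with perturbed observations (a simpler relative of the Local Ensemble Transform Kalman Filter used in the article); all array names are assumptions.

        import numpy as np

        def enkf_analysis(X, y, H, R, rng):
            """Stochastic EnKF update.

            X : (n_state, n_ens) forecast ensemble (e.g. tumour cell density on a grid)
            y : (n_obs,) observation vector (e.g. values derived from an MR image)
            H : (n_obs, n_state) observation operator
            R : (n_obs, n_obs) observation error covariance
            """
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
            HA = H @ A
            Pyy = HA @ HA.T / (n_ens - 1) + R
            Pxy = A @ HA.T / (n_ens - 1)
            K = np.linalg.solve(Pyy, Pxy.T).T              # Kalman gain (Pyy symmetric)
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
            return X + K @ (Y - H @ X)                     # analysis ensemble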

  8. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    Directory of Open Access Journals (Sweden)

    M. Montes-Hugo

    2014-06-01

    Full Text Available The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC; Lee's quasi-analytical, QAA; and Garver-Siegel-Maritorena semi-empirical, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations as measured by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., aph*(443) = aph(443)/chl = 0.056 m2 mg−1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals with respect to in situ determinations, increased up to 29%.

  9. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    Science.gov (United States)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites. Where these occur within a thicker-bedded succession, they can go unnoticed by conventional analysis and so negatively impact on reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
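
    Hydrocarbon pore thickness itself is a simple cumulative quantity once interval thickness, porosity and water saturation are known, which is why including or excluding thin beds feeds straight through to the OOIP estimate. A minimal sketch under assumed per-interval arrays:

        import numpy as np

        def hydrocarbon_pore_thickness(thickness_ft, porosity, sw, is_net):
            """HPT = sum over net intervals of h * phi * (1 - Sw), in feet."""
            h = np.asarray(thickness_ft, dtype=float)
            phi = np.asarray(porosity, dtype=float)
            s_hc = 1.0 - np.asarray(sw, dtype=float)         # hydrocarbon saturation
            net = np.asarray(is_net, dtype=float)            # 1 for net pay, 0 otherwise
            return float(np.sum(h * phi * s_hc * net))

        # e.g. a single thin 0.8 ft bed with 18% porosity and Sw = 0.35 adds ~0.09 ft
        print(hydrocarbon_pore_thickness([0.8], [0.18], [0.35], [1]))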

  10. The absolute lymphocyte count accurately estimates CD4 counts in HIV-infected adults with virologic suppression and immune reconstitution

    Directory of Open Access Journals (Sweden)

    Barnaby Young

    2014-11-01

    Full Text Available Introduction: The clinical value of monitoring CD4 counts in immune reconstituted, virologically suppressed HIV-infected patients is limited. We investigated whether absolute lymphocyte counts (ALC) from an automated blood counting machine could accurately estimate CD4 counts. Materials and Methods: CD4 counts, ALC and HIV viral load (VL) were extracted from an electronic laboratory database for all patients in HIV care at the Communicable Diseases Centre, Tan Tock Seng Hospital, Singapore (2008–13). Virologic suppression was defined as consecutive HIV VLs 300 cells/mm3. CD4 counts were estimated using the CD4% from the first value >300 and an ALC 181–540 days later. Results: A total of 1215 periods of virologic suppression were identified from 1183 patients, with 2227 paired CD4-ALCs available for analysis. 98.3% of CD4 estimates were within 50% of the actual value, 83.3% within 25% and 40.5% within 10%. The error pattern was approximately symmetrically distributed around a mean of −6.5%, but significantly peaked and with mild positive skew (kurtosis 4.45, skewness 1.07). Causes for these errors were explored. Variability between lymphocyte counts measured by ALC and flow cytometry did not follow an apparent pattern, and contributed 32% of the total error (median absolute error 5.5%, IQR 2.6–9.3). The CD4% estimate was significantly lower than the actual value (t-test, p<0.0001). The magnitude of this difference was greater for lower values, and above 25%, there was no significant difference. Precision of the CD4 estimate was similar as baseline CD4% increased; however, accuracy improved significantly, from a median 16% underestimation to 0% as baseline CD4% increased from 12 to 30. Above a CD4% baseline of 25, estimates of CD4 were within 25% of the actual value 90.2% of the time, with a median 2% underestimation. A robust (bisquare) linear regression model was developed to correct for the rise in CD4% with time, when baseline was 14–24

  11. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

    Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  12. Accurate recovery of 4D left ventricular deformations using volumetric B-splines incorporating phase based displacement estimates

    Science.gov (United States)

    Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.

    2006-03-01

    In this paper, an improved framework for estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and are used within a 4-D B-Spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with increased knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps which are local measures of non-rigid deformation. Lagrangian strains in simulated data are derived which show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.

  13. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  14. Accurate and rapid error estimation on global gravitational field from current GRACE and future GRACE Follow-On missions

    Institute of Scientific and Technical Information of China (English)

    Zheng Wei; Hsu Hou-Tse; Zhong Min; Yun Mei-Juan

    2009-01-01

    Firstly, the new combined error model of cumulative geoid height influenced by four error sources, including the inter-satellite range-rate of an interferometric laser (K-band) ranging system, the orbital position and velocity of a global positioning system (GPS) receiver and the non-conservative force of an accelerometer, is established from the perspective of the power spectrum principle in physics using the semi-analytical approach. Secondly, the accuracy of the global gravitational field is accurately and rapidly estimated based on the combined error model; the cumulative geoid height error is 1.985×10^-1 m at degree 120 based on GRACE Level 1B measured observation errors of the year 2007 published by the US Jet Propulsion Laboratory (JPL), and the cumulative geoid height error is 5.825×10^-2 m at degree 360 using a GRACE Follow-On orbital altitude of 250 km and an inter-satellite range of 50 km. The matching relationship of accuracy indexes from the GRACE Follow-On key payloads is brought forward, and the dependability of the combined error model is validated. Finally, the feasibility of high-accuracy and high-resolution global gravitational field estimation from GRACE Follow-On is demonstrated based on different satellite orbital altitudes.

  15. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    Directory of Open Access Journals (Sweden)

    Corina J. Logan

    2015-06-01

    Full Text Available There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder, for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.

  16. A multilevel excess hazard model to estimate net survival on hierarchical data allowing for non-linear and non-proportional effects of covariates.

    Science.gov (United States)

    Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien

    2016-08-15

    The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.

  17. A relationship to estimate the excess entropy of mixing: Application in silicate solid solutions and binary alloys.

    Science.gov (United States)

    Benisek, Artur; Dachs, Edgar

    2012-06-25

    The paper presents new calorimetric data on the excess heat capacity and vibrational entropy of mixing of Pt-Rh and Ag-Pd alloys. The results of the latter alloy are compared to those obtained by calculations using the density functional theory. The extent of the excess vibrational entropy of mixing of these binaries and of some already investigated binary mixtures is related to the differences of the end-member volumes and the end-member bulk moduli. These quantities are used to roughly represent the changes of the bond length and stiffness in the substituted and substituent polyhedra due to compositional changes, which are assumed to be the important factors for the non-ideal vibrational behaviour in solid solutions.

  18. Excessive Daytime Sleepiness

    OpenAIRE

    Yavuz Selvi; Ali Kandeger; Ayca Asena Sayin

    2016-01-01

    Excessive daytime sleepiness is one of the most common sleep-related patient symptoms, with prevalence in the community estimated to be as high as 18%. Patients with excessive daytime sleepiness may exhibit life threatening road and work accidents, social maladjustment, decreased academic and occupational performance and have poorer health than comparable adults. Thus, excessive daytime sleepiness is a serious condition that requires investigation, diagnosis and treatment primarily. As with ...

  19. Excessive Daytime Sleepiness

    Directory of Open Access Journals (Sweden)

    Yavuz Selvi

    2016-06-01

    Full Text Available Excessive daytime sleepiness is one of the most common sleep-related patient symptoms, with prevalence in the community estimated to be as high as 18%. Patients with excessive daytime sleepiness may exhibit life threatening road and work accidents, social maladjustment, decreased academic and occupational performance and have poorer health than comparable adults. Thus, excessive daytime sleepiness is a serious condition that requires investigation, diagnosis and treatment primarily. As with most medical conditions, evaluation of excessive daytime sleepiness begins with a precise history, and various objective and subjective tools have also been developed to assess excessive daytime sleepiness. The most common causes of excessive daytime sleepiness are insufficient sleep hygiene, chronic sleep deprivation, medical and psychiatric conditions and sleep disorders, such as obstructive sleep apnea, medications, and narcolepsy. Treatment options should address underlying contributors and promote sleep quantity by ensuring good sleep hygiene. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(2): 114-132]

  20. Excessive somnolence

    Directory of Open Access Journals (Sweden)

    Stella Tavares

    Full Text Available Excessive somnolence can be quite an incapacitating manifestation, and is frequently neglected by physicians and patients. This article reviews the determinant factors, the evaluation and quantification of diurnal somnolence, and the description and treatment of the main causes of excessive somnolence.

  1. On the Specification of the Gravity Model of Trade: Zeros, Excess Zeros and Zero-Inflated Estimation

    NARCIS (Netherlands)

    M.J. Burger (Martijn); F.G. van Oort (Frank); G.J.M. Linders (Gert-Jan)

    2009-01-01

    textabstractConventional studies of bilateral trade patterns specify a log-normal gravity equation for empirical estimation. However, the log-normal gravity equation suffers from three problems: the bias created by the logarithmic transformation, the failure of the homoscedasticity assumption, and t

  2. Estimating the economic value of ice climbing in Hyalite Canyon: An application of travel cost count data models that account for excess zeros.

    Science.gov (United States)

    Anderson, D Mark

    2010-01-01

    Recently, the sport of ice climbing has seen a dramatic increase in popularity. This paper uses the travel cost method to estimate the demand for ice climbing in Hyalite Canyon, Montana, one of the premier ice climbing venues in North America. Access to Hyalite and other ice climbing destinations has been put at risk due to liability issues, public land management agendas, and winter road conditions. To this point, there has been no analysis of the economic benefits of ice climbing. In addition to the novel outdoor recreation application, this study applies econometric methods designed to deal with "excess zeros" in the data. Depending upon model specification, per person per trip values are estimated to be in the range of $76 to $135.
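
    In count-data travel cost models of this kind, the per-trip access value follows directly from the travel-cost coefficient (consumer surplus per trip is approximately -1/beta_cost), and excess zeros are usually handled with zero-inflated or hurdle variants of the count model. A hedged sketch with assumed variable names, using a standard negative binomial specification rather than the exact models estimated in the paper:

        import numpy as np
        import statsmodels.api as sm

        # trips: visits per respondent; regressors include travel cost and demographics
        X = sm.add_constant(np.column_stack([travel_cost, income, experience]))
        nb = sm.NegativeBinomial(trips, X).fit()

        beta_cost = nb.params[1]             # coefficient on travel cost
        cs_per_trip = -1.0 / beta_cost       # consumer surplus per person per trip
        print(f"per-trip value estimate: ${cs_per_trip:,.0f}")

        # a zero-inflated variant (separate process generating non-participants):
        # from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP
        # zinb = ZeroInflatedNegativeBinomialP(trips, X, exog_infl=X).fit()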

  3. Estimation of excess mortality due to long-term exposure to PM2.5 in Japan using a high-resolution model for present and future scenarios

    Science.gov (United States)

    Goto, Daisuke; Ueda, Kayo; Ng, Chris Fook Sheng; Takami, Akinori; Ariga, Toshinori; Matsuhashi, Keisuke; Nakajima, Teruyuki

    2016-09-01

    Particulate matter with a diameter of less than 2.5 μm, known as PM2.5, can affect human health, especially in elderly people. Because of the imminent aging of society in the near future in most developed countries, the human health impacts of PM2.5 must be evaluated. In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a high-resolution stretched grid system (∼10 km for the high-resolution model, HRM) for the present (the year 2000) and the future (the year 2030, as proposed by Representative Concentration Pathway 4.5, RCP4.5). We also used the same model with a low-resolution uniform grid system (∼100 km for the low-resolution model, LRM). These calculations were conducted by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 among the elderly (over 65 years old) based on different minimum PM2.5 concentration (MINPM) levels to account for uncertainty, using the simulated PM2.5 distributions and expressing the health effect as a concentration-response function. As a result, we estimated the excess mortality for all of Japan to be 31,300 (95% confidence intervals: 20,700 to 42,600) people in 2000 and 28,600 (95% confidence intervals: 19,000 to 38,700) people in 2030 using the HRM with a MINPM of 5.8 μg/m3. In contrast, the LRM resulted in underestimates of approximately 30% (for PM2.5 concentrations in 2000 and 2030), approximately 60% (excess mortality in 2000) and approximately 90% (excess mortality in 2030) compared to the HRM results. We also found that the uncertainty in the MINPM value, especially for low PM2.5 concentrations in the future (2030), can cause large variability in the estimates, ranging from 0 (MINPM of 15 μg/m3 in both HRM and LRM) to 95,000 (MINPM of 0 μg/m3 in HRM) people.
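
    The excess-mortality calculation sketched in the abstract reduces to applying a concentration-response function above the chosen minimum concentration (MINPM) to the baseline deaths of the exposed elderly population in each grid cell. A schematic version under assumed inputs and an assumed log-linear concentration-response slope; the study's actual relative-risk function and confidence intervals are not reproduced here.

        import numpy as np

        def excess_mortality(pm25, population, baseline_rate, beta, min_pm=5.8):
            """Excess deaths attributable to long-term PM2.5 exposure.

            pm25          : annual-mean PM2.5 per grid cell (ug/m3)
            population    : exposed (e.g. >= 65 y) population per grid cell
            baseline_rate : baseline mortality rate of that population
            beta          : log-relative-risk per ug/m3 (assumed log-linear CRF)
            min_pm        : counterfactual minimum concentration (MINPM)
            """
            excess_conc = np.maximum(np.asarray(pm25, dtype=float) - min_pm, 0.0)
            attributable_fraction = 1.0 - np.exp(-beta * excess_conc)
            return float(np.sum(population * baseline_rate * attributable_fraction))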

  4. Methodological extensions of meta-analysis with excess relative risk estimates: application to risk of second malignant neoplasms among childhood cancer survivors treated with radiotherapy

    Science.gov (United States)

    Doi, Kazutaka; Mieno, Makiko N.; Shimada, Yoshiya; Yonehara, Hidenori; Yoshinaga, Shinji

    2014-01-01

    Although radiotherapy is recognized as an established risk factor for second malignant neoplasms (SMNs), the dose response of SMNs following radiotherapy has not been well characterized. In our previous meta-analysis of the risks of SMNs occurring among children who have received radiotherapy, the small number of eligible studies precluded a detailed evaluation. Therefore, to increase the number of eligible studies, we developed a method of calculating excess relative risk (ERR) per Gy estimates from studies for which the relative risk estimates for several dose categories were available. Comparing the calculated ERR with that described in several original papers validated the proposed method. This enabled us to increase the number of studies, which we used to conduct a meta-analysis. The overall ERR per Gy estimate of radiotherapy over 26 relevant studies was 0.60 (95%CI: 0.30–1.20), which is smaller than the corresponding estimate for atomic bomb survivors exposed to radiation as young children (1.7; 95% CI: 1.1–2.5). A significant decrease in ERR per Gy with increase in age at exposure (0.85 times per annual increase) was observed in the meta-regression. Heterogeneity was suggested by Cochran's Q statistic (P < 0.001), which may be partly accounted for by age at exposure. PMID:25037101
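
    The pooling step described above is, in outline, an inverse-variance random-effects meta-analysis, and the heterogeneity statistic quoted (Cochran's Q) falls out of the same computation. A generic DerSimonian-Laird sketch over study-level ERR-per-Gy estimates and their standard errors, not the authors' exact model (which also includes meta-regression on age at exposure):

        import numpy as np

        def random_effects_pool(err, se):
            """DerSimonian-Laird random-effects pooling of ERR/Gy estimates."""
            err = np.asarray(err, dtype=float)
            se = np.asarray(se, dtype=float)
            w = 1.0 / se**2                                  # fixed-effect weights
            pooled_fe = np.sum(w * err) / np.sum(w)
            q = np.sum(w * (err - pooled_fe)**2)             # Cochran's Q
            df = len(err) - 1
            tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
            w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
            pooled = np.sum(w_re * err) / np.sum(w_re)
            se_pooled = np.sqrt(1.0 / np.sum(w_re))
            return pooled, se_pooled, q, tau2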

  5. A strategy for an accurate estimation of the basal permittivity in the Martian North Polar Layered Deposits

    CERN Document Server

    Lauro, S E; Pettinelli, E; Soldovieri, F; Cantini, F; Rossi, A P; Orosei, R

    2016-01-01

    The paper deals with the investigation of the Mars subsurface by means of data collected by the Mars Advanced Radar for Subsurface and Ionosphere Sounding, which operates at frequencies of a few MHz. A data processing strategy, which combines a simple inversion model with an accurate procedure for data selection, is presented. This strategy makes it possible to mitigate the theoretical and practical difficulties of the inverse problem arising from the inaccurate knowledge of the parameters regarding both the scenario under investigation and the radiated electromagnetic field impinging on the Mars surface. The results presented in this paper show that it is possible to reliably retrieve the electromagnetic properties of deeper structures if such a strategy is accurately applied. An example is given here, where the analysis of the data collected on Gemina Lingula, a region of the North Polar layered deposits, allowed us to retrieve permittivity values for the basal unit in agreement with those usually associated with the Earth basalt...

  6. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    Directory of Open Access Journals (Sweden)

    Xuebing Yuan

    2015-05-01

    Full Text Available Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy drifts over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, including one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the pedestrian's waist and on the quadrotor UAV, respectively, compared to the reference path.
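    The full quaternion-based UKF is beyond a short sketch, but the two ingredients it fuses, gyroscope yaw-rate integration and a tilt-compensated magnetometer heading, can be illustrated with a simple complementary filter. The formulas below are one common formulation; the sign conventions, sensor frame and the blending factor alpha are assumptions, and this is not the paper's filter.

```python
import numpy as np

def tilt_compensated_heading(acc, mag):
    """Heading (rad) from accelerometer (gravity) and magnetometer readings."""
    ax, ay, az = acc / np.linalg.norm(acc)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, ay * np.sin(roll) + az * np.cos(roll))
    mx, my, mz = mag
    # Rotate the magnetic vector into the horizontal plane
    xh = mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch) + mz * np.cos(roll) * np.sin(pitch)
    yh = my * np.cos(roll) - mz * np.sin(roll)
    return np.arctan2(-yh, xh)

def fuse_heading(prev_heading, gyro_z, dt, mag_heading, alpha=0.98):
    """Complementary filter: integrate gyro yaw rate, correct slowly with magnetometer."""
    predicted = prev_heading + gyro_z * dt
    # Wrap the innovation to (-pi, pi] before blending
    innovation = np.arctan2(np.sin(mag_heading - predicted), np.cos(mag_heading - predicted))
    return predicted + (1 - alpha) * innovation

# Tiny demo with made-up sensor readings
acc = np.array([0.0, 0.0, 9.81])     # flat, level device
mag = np.array([0.2, 0.05, -0.4])    # arbitrary field vector (illustrative)
h = tilt_compensated_heading(acc, mag)
print(np.degrees(fuse_heading(h, gyro_z=0.01, dt=0.02, mag_heading=h)))
```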

  7. An Improved Method Based on the Universal Ridge Estimate for the Excess Shrinkage of the Ridge Estimate

    Institute of Scientific and Technical Information of China (English)

    刘文卿

    2011-01-01

    Ridge estimation is an effective method for handling multicollinearity in multiple linear regression; it is a biased shrinkage estimator. Compared with the ordinary least squares estimate, the ridge estimate decreases the mean squared error of the parameter estimates but increases the residual sum of squares, so the goodness of fit deteriorates. This paper proposes an improved method based on the universal ridge estimate to correct the excess shrinkage of the ridge estimate. The method improves the fit and reduces the increase in the residual sum of squares relative to the ordinary ridge estimate.
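    As a minimal illustration of the trade-off described above, the ordinary ridge estimator (XᵀX + kI)⁻¹Xᵀy can be compared with OLS on nearly collinear data: the ridge fit stabilizes the coefficients but its residual sum of squares is larger. The universal (generalized) ridge correction proposed in the paper is not reproduced here; the data and the value of k are synthetic.

```python
import numpy as np

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)       # nearly collinear regressor
X = np.column_stack([x1, x2])
y = 2 * x1 - 1 * x2 + rng.normal(size=n)

for name, b in [("OLS", ols(X, y)), ("ridge k=0.1", ridge(X, y, 0.1))]:
    rss = np.sum((y - X @ b) ** 2)
    print(name, "coefficients:", b, "RSS:", round(rss, 2))
# The ridge RSS is larger than the OLS RSS: the price paid for shrinking the
# coefficients to stabilize them under multicollinearity.
```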

  8. A reliable and accurate portable device for rapid quantitative estimation of iodine content in different types of edible salt

    Directory of Open Access Journals (Sweden)

    Kapil Yadav

    2015-01-01

    Full Text Available Background: Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of a new portable device, iCheck Iodine developed by the BioAnalyt GmbH to estimate the iodine content in salt. Materials and Methods: Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD. The validity of the device was assessed using device specific indicators, comparison of iCheck Iodine device with the iodometric titration, and comparison between iodine estimation using 1 g and 10 g salt by iCheck Iodine using 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. Results: The intra- and interassay imprecision for 10 parts per million (ppm, 30 ppm, and 50 ppm concentrations of iodized salt were 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Interoperator imprecision was 6.2%, 6.3%, and 4.6% for the salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm respectively. The correlation coefficient between measurements by the two methods was 0.934 and the correlation coefficient between measurements using 1 g of iodized salt and 10 g of iodized salt by the iCheck Iodine device was 0.983. Conclusions: The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate in the field setting and in different types of salt.

  9. HIV Excess Cancers JNCI

    Science.gov (United States)

    In 2010, an estimated 7,760 new cancers were diagnosed among the nearly 900,000 Americans known to be living with HIV infection. According to the first comprehensive study in the United States, approximately half of these cancers were in excess of what wo

  10. How have ART treatment programmes changed the patterns of excess mortality in people living with HIV? Estimates from four countries in East and Southern Africa

    Directory of Open Access Journals (Sweden)

    Emma Slaymaker

    2014-04-01

    Full Text Available Background: Substantial falls in the mortality of people living with HIV (PLWH) have been observed since the introduction of antiretroviral therapy (ART) in sub-Saharan Africa. However, access and uptake of ART have been variable in many countries. We report the excess deaths observed in PLWH before and after the introduction of ART. We use data from five longitudinal studies in Malawi, South Africa, Tanzania, and Uganda, members of the network for Analysing Longitudinal Population-based HIV/AIDS data on Africa (ALPHA). Methods: Individual data from five demographic surveillance sites that conduct HIV testing were used to estimate mortality attributable to HIV, calculated as the difference between the mortality rates in PLWH and HIV-negative people. Excess deaths in PLWH were standardized for age and sex differences and summarized over periods before and after ART became generally available. An exponential regression model was used to explore differences in the impact of ART over the different sites. Results: 127,585 adults across the five sites contributed a total of 487,242 person years. Before the introduction of ART, HIV-attributable mortality ranged from 45 to 88 deaths per 1,000 person years. Following ART availability, this reduced to 14–46 deaths per 1,000 person years. Exponential regression modeling showed a reduction of more than 50% (HR = 0.43, 95% CI: 0.32–0.58) in mortality at ages 15–54 across all five sites, compared to the period before ART was available. Discussion: Excess mortality in adults living with HIV has reduced by over 50% in five communities in sub-Saharan Africa since the advent of ART. However, mortality rates in adults living with HIV are still 10 times higher than in HIV-negative people, indicating that substantial improvements can be made to reduce mortality further. This analysis shows differences in the impact across the sites, and contrasts with developed countries where mortality among PLWH on ART can be
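    The core quantity above, mortality attributable to HIV as the difference between rates in HIV-positive and HIV-negative person-time, standardized over age/sex strata, can be sketched as follows. The strata, counts and standardization weights are placeholders, not ALPHA network data.

```python
# Minimal sketch: standardized excess mortality in PLWH, computed as the
# weighted difference between mortality rates in HIV-positive and HIV-negative
# person-time. All numbers are illustrative.
strata = [
    # (deaths_hiv_pos, person_years_hiv_pos, deaths_hiv_neg, person_years_hiv_neg, std_weight)
    (45, 800.0, 12, 5000.0, 0.40),   # e.g. ages 15-34
    (60, 700.0, 30, 4000.0, 0.35),   # ages 35-54
    (25, 300.0, 40, 2500.0, 0.25),   # ages 55+
]

excess = 0.0
for d_pos, py_pos, d_neg, py_neg, w in strata:
    rate_pos = d_pos / py_pos
    rate_neg = d_neg / py_neg
    excess += w * (rate_pos - rate_neg)   # weight by a standard population

print(f"standardized excess mortality: {1000 * excess:.1f} deaths per 1,000 person-years")
```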

  11. How many measurements are needed to estimate accurate daily and annual soil respiration fluxes? Analysis using data from a temperate rainforest

    Science.gov (United States)

    Perez-Quezada, Jorge F.; Brito, Carla E.; Cabezas, Julián; Galleguillos, Mauricio; Fuentes, Juan P.; Bown, Horacio E.; Franck, Nicolás

    2016-12-01

    Making accurate estimations of daily and annual Rs fluxes is key for understanding the carbon cycle process and projecting effects of climate change. In this study we used high-frequency sampling (24 measurements per day) of Rs in a temperate rainforest during 1 year, with the objective of answering the questions of when and how often measurements should be made to obtain accurate estimations of daily and annual Rs. We randomly selected data to simulate samplings of 1, 2, 4 or 6 measurements per day (distributed either during the whole day or only during daytime), combined with 4, 6, 12, 26 or 52 measurements per year. Based on the comparison of partial-data series with the full-data series, we estimated the performance of different partial sampling strategies based on bias, precision and accuracy. In the case of annual Rs estimation, we compared the performance of interpolation vs. using non-linear modelling based on soil temperature. The results show that, under our study conditions, sampling twice a day was enough to accurately estimate daily Rs (RMSE < 10 % of average daily flux), even if both measurements were done during daytime. The highest reduction in RMSE for the estimation of annual Rs was achieved when increasing from four to six measurements per year, but reductions were still relevant when further increasing the frequency of sampling. We found that increasing the number of field campaigns was more effective than increasing the number of measurements per day, provided a minimum of two measurements per day was used. Including night-time measurements significantly reduced the bias and was relevant in reducing the number of field campaigns when a lower level of acceptable error (RMSE < 5 %) was established. Using non-linear modelling instead of linear interpolation did improve the estimation of annual Rs, but not as expected. In conclusion, given that most of the studies of Rs use manual sampling techniques and apply only one measurement per day, we
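    The resampling experiment described above can be sketched as follows: build a full diel record, draw a limited number of measurements per day, and compare the resulting daily means against the full-data means. The synthetic diel cycle, noise level and sampling windows are illustrative assumptions, not the study's chamber data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "full" record: 24 measurements per day for a year, with a diel
# cycle peaking in the afternoon plus multiplicative noise.
days, per_day = 365, 24
hours = np.arange(per_day)
diel = 1.0 + 0.3 * np.sin(2 * np.pi * (hours - 14) / 24)
rs_full = 2.0 * diel[None, :] * (1 + 0.1 * rng.normal(size=(days, per_day)))

daily_true = rs_full.mean(axis=1)

def daily_estimate(n_per_day, daytime_only=False):
    """Estimate the daily mean Rs from n random measurements per day."""
    pool = np.arange(8, 18) if daytime_only else np.arange(per_day)
    idx = rng.choice(pool, size=(days, n_per_day))
    return rs_full[np.arange(days)[:, None], idx].mean(axis=1)

for n in (1, 2, 4, 6):
    est = daily_estimate(n, daytime_only=True)
    rmse = np.sqrt(np.mean((est - daily_true) ** 2))
    print(f"{n} daytime samples/day: RMSE = {100 * rmse / daily_true.mean():.1f}% of mean daily flux")
```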

  12. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    Science.gov (United States)

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results.
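    A toy version of the inverse-forward ANN idea can be sketched with scikit-learn: a forward network learns a diffusion-coefficient-to-curve mapping, an inverse network learns the reverse mapping from curves perturbed with stochastic noise, and the pair can be used to sanity-check an estimate. The one-zone exponential "forward model", the network sizes and the noise level are stand-ins for the paper's multi-zone biphasic-solute finite-bath model, not a reproduction of it.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 30)

def toy_forward_model(D):
    """Placeholder for the multi-physics model: concentration-time curve for D."""
    return 1.0 - np.exp(-np.outer(D, t) / 5.0)

# Training data generated from the (computational) forward model
D_train = rng.uniform(0.1, 2.0, size=2000)
curves = toy_forward_model(D_train)

# Inverse ANN: concentration-time curve -> diffusion coefficient.
# Stochastic perturbation of the training curves makes the inversion robust.
noisy_curves = curves + 0.01 * rng.normal(size=curves.shape)
inverse_ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
inverse_ann.fit(noisy_curves, D_train)

# Forward ANN: diffusion coefficient -> curve, used to check the estimate
forward_ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
forward_ann.fit(D_train.reshape(-1, 1), curves)

# Use: estimate D from a "measured" curve, then reproduce the curve with the forward ANN
D_true = 0.8
measured = toy_forward_model(np.array([D_true]))[0] + 0.01 * rng.normal(size=t.size)
D_hat = inverse_ann.predict(measured.reshape(1, -1))[0]
recon = forward_ann.predict([[D_hat]])[0]
print("estimated D:", round(D_hat, 3),
      "max reconstruction error:", round(float(np.abs(recon - measured).max()), 3))
```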

  13. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    Science.gov (United States)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function, we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  14. Embedded fiber-optic sensing for accurate internal monitoring of cell state in advanced battery management systems part 2: Internal cell signals and utility for state estimation

    Science.gov (United States)

    Ganguli, Anurag; Saha, Bhaskar; Raghavan, Ajay; Kiesel, Peter; Arakaki, Kyle; Schuh, Andreas; Schwartz, Julian; Hegyi, Alex; Sommer, Lars Wilko; Lochbaum, Alexander; Sahu, Saroj; Alamgir, Mohamed

    2017-02-01

    A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS) reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic (FO) sensors. High-performance large-format pouch cells with embedded FO sensors were fabricated. This second part of the paper focuses on the internal signals obtained from these FO sensors. The details of the method to isolate intercalation strain and temperature signals are discussed. Data collected under various xEV operational conditions are presented. An algorithm employing dynamic time warping and Kalman filtering was used to estimate state-of-charge with high accuracy from these internal FO signals. Their utility for high-accuracy, predictive state-of-health estimation is also explored.

  15. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)

    2007-10-15

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that enables accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from a complex biological medium.

  16. Towards accurate dose accumulation for step-and-shoot IMRT. Impact of weighting schemes and temporal image resolution on the estimation of dosimetric motion effects

    Energy Technology Data Exchange (ETDEWEB)

    Werner, Rene; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz [Luebeck Univ. (Germany). Inst. of Medical Informatics; Albers, Dirk; Petersen, Cordula; Cremers, Florian [University Medical Center Hamburg-Eppendorf, Hamburg (Germany). Dept. of Radiotherapy and Radio-Oncology; Frenzel, Thorsten [University Medical Center Hamburg-Eppendorf, Hamburg (Germany). Health Care Center

    2012-07-01

    Purpose: Breathing-induced motion effects on dose distributions in radiotherapy can be analyzed using 4D CT image sequences and registration-based dose accumulation techniques. Often simplifying assumptions are made during accumulation. In this paper, we study the dosimetric impact of two aspects which may be especially critical for IMRT treatment: the weighting scheme for the dose contributions of IMRT segments at different breathing phases and the temporal resolution of 4D CT images applied for dose accumulation. Methods: Based on a continuous problem formulation, a patient- and plan-specific scheme for weighting segment dose contributions at different breathing phases is derived for use in step-and-shoot IMRT dose accumulation. Using 4D CT data sets and treatment plans for 5 lung tumor patients, dosimetric motion effects as estimated by the derived scheme are compared to effects resulting from a common equal weighting approach. Effects of reducing the temporal image resolution are evaluated for the same patients and both weighting schemes. Results: The equal weighting approach underestimates dosimetric motion effects when considering single treatment fractions. Especially interplay effects (relative misplacement of segments due to respiratory tumor motion) for IMRT segments with only a few monitor units are insufficiently represented (local point differences > 25% of the prescribed dose for larger tumor motion). The effects, however, tend to be averaged out over the entire treatment course. Regarding temporal image resolution, estimated motion effects in terms of measures of the CTV dose coverage are barely affected (in comparison to the full resolution) when using only half of the original resolution and equal weighting. In contrast, occurrence and impact of interplay effects are poorly captured for some cases (large tumor motion, undersized PTV margin) for a resolution of 10/14 phases and the more accurate patient- and plan-specific dose accumulation scheme

  17. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography: comparison with cine magnetic resonance imaging.

    Science.gov (United States)

    Belge, Bénédicte; Coche, Emmanuel; Pasquet, Agnès; Vanoverschelde, Jean-Louis J; Gerber, Bernhard L

    2006-07-01

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134+/-51 and 67+/-56 ml) were similar to those by MR (137+/-57 and 70+/-60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55+/-21 vs. 56+/-21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3+/-1.8 vs. 8.8+/-1.9 mm and 12.7+/-3.4 vs. 13.3+/-3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54+/-30 vs. 51+/-31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR.

  18. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography. Comparison with cine magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Belge, Benedicte; Pasquet, Agnes; Vanoverschelde, Jean-Louis J. [Universite Catholique de Louvain, Division of Cardiology, Brussels (Belgium); Coche, Emmanuel [Universite Catholique de Louvain, Division of Radiology, Brussels (Belgium); Gerber, Bernhard L. [Universite Catholique de Louvain, Division of Cardiology, Brussels (Belgium); Cliniques Universitaires St. Luc UCL, Department of Cardiology, Woluwe St. Lambert (Belgium)

    2006-07-15

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR. (orig.)

  19. Estimating the Economic Value of Ice Climbing in Hyalite Canyon: An Application of Travel Cost Count Data Models that Account for Excess Zeros*

    OpenAIRE

    Anderson, D. Mark

    2009-01-01

    Recently, the sport of ice climbing has seen a drastic increase in popularity. This paper uses the travel cost method to estimate the demand for ice climbing in Hyalite Canyon, Montana, one of the premier ice climbing venues in North America. Access to Hyalite and other ice climbing destinations has been put at risk due to liability issues, public land management agendas, and winter road conditions. To this point, there has been no analysis of the economic benefits of ice climbing. In additi...
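    Travel cost demand with excess zeros is typically handled with zero-inflated count models. A sketch using statsmodels' ZeroInflatedPoisson on synthetic trip data is shown below; the data-generating numbers, the covariate set and the consumer-surplus formula (-1/beta_cost for a semi-log demand curve) are assumptions for illustration, not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Illustrative survey data: travel cost (in $) and annual climbing trips,
# with many structural zeros from people who never climb.
travel_cost = rng.uniform(10, 200, size=n)
never_climbs = rng.random(n) < 0.4
lam = np.exp(2.0 - 0.015 * travel_cost)           # demand falls with travel cost
trips = np.where(never_climbs, 0, rng.poisson(lam))

X = sm.add_constant(travel_cost)
zip_model = sm.ZeroInflatedPoisson(trips, X, exog_infl=np.ones((n, 1)))
res = zip_model.fit(method="bfgs", maxiter=200, disp=False)
print(res.params)

# Consumer surplus per trip for a semi-log demand curve is often taken as -1/beta_cost.
beta_cost = res.params[-1]   # travel-cost coefficient in the count part (inflation params come first)
print("implied consumer surplus per trip: $", round(-1 / beta_cost, 2))
```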

  20. An Accurate FOA and TOA Estimation Algorithm for Galileo Search and Rescue Signal%伽利略搜救信号FOA和TOA精确估计算法

    Institute of Scientific and Technical Information of China (English)

    王堃; 吴嗣亮; 韩月涛

    2011-01-01

    According to the high precision demand of Frequency of Arrival(FOA) and Time of Arrival(TOA) estimation in Galileo search and rescue(SAR) system and considering the fact that the message bit width is unknown in real received beacons,a new FOA and TOA estimation algorithm which combines the multi-dimensional joint maximum likelihood estimation algorithm and barycenter calculation algorithm is proposed.The principle of the algorithm is derived after the signal model is introduced,and the concrete realization of the estimation algorithm is given.Monte Carlo simulation results and measurement results show that when CNR equals the threshold of 34.8 dBHz,FOA and TOA estimation rmse(root-mean-square error) of this algorithm are respectively within 0.03 Hz and 9.5 μs,which are better than the system requirements of 0.05 Hz and 11 μs.This algorithm has been applied to the Galileo Medium-altitude Earth Orbit Local User Terminal(MEOLUT station).%针对伽利略搜救系统中到达频率(FOA)和到达时间(TOA)高精度估计的需求,考虑到实际接收的信标信号中信息位宽未知的情况,提出了多维联合极大似然估计算法和体积重心算法相结合的FOA和TOA估计算法。在介绍信号模型的基础上推导了算法原理,给出了估计算法的具体实现过程。Monte Carlo仿真和实测结果表明,在34.8 dBHz的处理门限下,该算法得到的FOA和TOA估计的均方根误差分别小于0.03 Hz和9.5μs,优于0.05 Hz和11μs的系统指标要求。该算法目前已应用于伽利略中轨卫星地面用户终端(MEOLUT地面站)。

  1. Excessive acquisition in hoarding.

    Science.gov (United States)

    Frost, Randy O; Tolin, David F; Steketee, Gail; Fitch, Kristin E; Selbo-Bruns, Alexandra

    2009-06-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an Internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms.

  2. How accurately are maximal metabolic equivalents estimated based on the treadmill workload in healthy people and asymptomatic subjects with cardiovascular risk factors?

    Science.gov (United States)

    Maeder, M T; Muenzer, T; Rickli, H; Brunner-La Rocca, H P; Myers, J; Ammann, P

    2008-08-01

    Maximal exercise capacity expressed as metabolic equivalents (METs) is rarely directly measured (measured METs; mMETs) but estimated from maximal workload (estimated METs; eMETs). We assessed the accuracy of predicting mMETs by eMETs in asymptomatic subjects. Thirty-four healthy volunteers without cardiovascular risk factors (controls) and 90 patients with at least one risk factor underwent cardiopulmonary exercise testing using individualized treadmill ramp protocols. The equation of the American College of Sports Medicine (ACSM) was employed to calculate eMETs. Despite a close correlation between eMETs and mMETs (patients: r = 0.82, controls: r = 0.88; p < 0.001 for both), eMETs were higher than mMETs in both patients [11.7 (8.9 - 13.4) vs. 8.2 (7.0 - 10.6) METs; p < 0.001] and controls [17.0 (16.2 - 18.2) vs. 15.6 (14.2 - 17.0) METs; p < 0.001]. The absolute [2.5 (1.6 - 3.7) vs. 1.3 (0.9 - 2.1) METs; p < 0.001] and the relative [28 (19 - 47) vs. 9 (6 - 14) %; p < 0.001] difference between eMETs and mMETs was higher in patients. In patients, ratio limits of agreement of 1.33 (×/÷ 1.40) between eMETs and mMETs were obtained, whereas the ratio limits of agreement were 1.09 (×/÷ 1.13) in controls. The ACSM equation is associated with a significant overestimation of mMETs in young and fit subjects, which is markedly more pronounced in older and less fit subjects with cardiovascular risk factors.
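    For reference, eMETs in studies like this are computed from the treadmill speed and grade with the ACSM metabolic equations; a sketch is below. The coefficients are the commonly cited ACSM walking/running values and should be treated as an assumption to be checked against the current ACSM guidelines rather than taken from this abstract.

```python
def acsm_estimated_mets(speed_kmh, grade_fraction, running=True):
    """Estimate METs from treadmill workload with the ACSM metabolic equations.

    speed_kmh: treadmill speed in km/h; grade_fraction: incline as a fraction (5% -> 0.05).
    """
    speed = speed_kmh * 1000.0 / 60.0            # m/min
    if running:
        vo2 = 0.2 * speed + 0.9 * speed * grade_fraction + 3.5   # ml/kg/min
    else:
        vo2 = 0.1 * speed + 1.8 * speed * grade_fraction + 3.5
    return vo2 / 3.5                             # 1 MET = 3.5 ml/kg/min

# Example: final ramp stage at 10 km/h and 4% grade
print(round(acsm_estimated_mets(10.0, 0.04), 1), "estimated METs")
```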

  3. Symptom profiles of subsyndromal depression in disease clusters of diabetes, excess weight, and progressive cerebrovascular conditions: a promising new type of finding from a reliable innovation to estimate exhaustively specified multiple indicators–multiple causes (MIMIC) models

    Science.gov (United States)

    Francoeur, Richard B

    2016-01-01

    Addressing subsyndromal depression in cerebrovascular conditions, diabetes, and obesity reduces morbidity and risk of major depression. However, depression may be masked because self-reported symptoms may not reveal dysphoric (sad) mood. In this study, the first wave (2,812 elders) from the New Haven Epidemiological Study of the Elderly (EPESE) was used. These population-weighted data combined a stratified, systematic, clustered random sample from independent residences and a census of senior housing. Physical conditions included progressive cerebrovascular disease (CVD; hypertension, silent CVD, stroke, and vascular cognitive impairment [VCI]) and co-occurring excess weight and/or diabetes. These conditions and interactions (clusters) simultaneously predicted 20 depression items and a latent trait of depression in participants with subsyndromal (including subthreshold) depression (11≤ Center for Epidemiologic Studies Depression Scale [CES-D] score ≤27). The option for maximum likelihood estimation with standard errors that are robust to non-normality and non-independence in complex random samples (MLR) in Mplus and an innovation created by the author were used for estimating unbiased effects from latent trait models with exhaustive specification. Symptom profiles reveal masked depression in 1) older males, related to the metabolic syndrome (hypertension–overweight–diabetes; silent CVD–overweight; and silent CVD–diabetes) and 2) older females or the full sample, related to several diabetes and/or overweight clusters that involve stroke or VCI. Several other disease clusters are equivocal regarding masked depression; a couple do emphasize dysphoric mood. Replicating findings could identify subgroups for cost-effective screening of subsyndromal depression. PMID:28003768

  4. Symptom profiles of subsyndromal depression in disease clusters of diabetes, excess weight, and progressive cerebrovascular conditions: a promising new type of finding from a reliable innovation to estimate exhaustively specified multiple indicators–multiple causes (MIMIC) models

    Directory of Open Access Journals (Sweden)

    Francoeur RB

    2016-12-01

    Full Text Available Richard B Francoeur School of Social Work, Adelphi University, Garden City, NY, USA Abstract: Addressing subsyndromal depression in cerebrovascular conditions, diabetes, and obesity reduces morbidity and risk of major depression. However, depression may be masked because self-reported symptoms may not reveal dysphoric (sad mood. In this study, the first wave (2,812 elders from the New Haven Epidemiological Study of the Elderly (EPESE was used. These population-weighted data combined a stratified, systematic, clustered random sample from independent residences and a census of senior housing. Physical conditions included progressive cerebrovascular disease (CVD; hypertension, silent CVD, stroke, and vascular cognitive impairment [VCI] and co-occurring excess weight and/or diabetes. These conditions and interactions (clusters simultaneously predicted 20 depression items and a latent trait of depression in participants with subsyndromal (including subthreshold depression (11≤ Center for Epidemiologic Studies Depression Scale [CES-D] score ≤27. The option for maximum likelihood estimation with standard errors that are robust to non-normality and non-independence in complex random samples (MLR in Mplus and an innovation created by the author were used for estimating unbiased effects from latent trait models with exhaustive specification. Symptom profiles reveal masked depression in 1 older males, related to the metabolic syndrome (hypertension–overweight–diabetes; silent CVD–overweight; and silent CVD–diabetes and 2 older females or the full sample, related to several diabetes and/or overweight clusters that involve stroke or VCI. Several other disease clusters are equivocal regarding masked depression; a couple do emphasize dysphoric mood. Replicating findings could identify subgroups for cost-effective screening of subsyndromal depression. Keywords: depression, diabetes, overweight, cerebrovascular disease, hypertension, metabolic

  5. Accurate interval estimation of the geometric distribution parameter based on the survival beta distribution

    Institute of Scientific and Technical Information of China (English)

    徐玉茹; 徐付霞

    2016-01-01

    This paper proves that the sufficient statistic of the geometric distribution parameter follows a negative binomial distribution. The exact confidence interval of the parameter is therefore constructed by converting the negative binomial distribution into the survival beta distribution, and the best split of the confidence level between the two tails is selected to obtain the shortest exact confidence interval. The approximate interval estimate for the geometric distribution under large samples is also discussed. Through numerical simulation, the change in the accuracy of the interval estimation is demonstrated intuitively, illustrating the superiority of the shortest exact confidence interval.
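    The exact-interval construction can be sketched by inverting the negative binomial distribution of the sufficient statistic numerically. The version below gives the standard equal-tailed exact interval; the paper's additional search over tail splits for the shortest interval is not reproduced, and the boundary case in which every observation equals 1 is not handled.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def geometric_exact_ci(trials, alpha=0.05):
    """Equal-tailed exact CI for p from i.i.d. Geometric(p) observations
    (number of trials until the first success). The sufficient statistic
    T = sum(trials) follows a negative binomial distribution, whose CDF is
    inverted numerically to obtain the interval endpoints."""
    n = len(trials)
    t = int(np.sum(trials))                 # observed sufficient statistic
    # scipy's nbinom counts failures, so T = n + K with K ~ nbinom(n, p)
    lower_tail = lambda p: stats.nbinom.cdf(t - n, n, p) - alpha / 2
    upper_tail = lambda p: stats.nbinom.sf(t - n - 1, n, p) - alpha / 2
    p_lo = brentq(lower_tail, 1e-9, 1 - 1e-9)
    p_hi = brentq(upper_tail, 1e-9, 1 - 1e-9)
    return p_lo, p_hi

rng = np.random.default_rng(0)
sample = rng.geometric(p=0.3, size=20)
print(geometric_exact_ci(sample))
```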

  6. Nonaccommodative convergence excess.

    Science.gov (United States)

    von Noorden, G K; Avilla, C W

    1986-01-15

    Nonaccommodative convergence excess is a condition in which a patient has orthotropia or a small-angle esophoria or esotropia at distance and a large-angle esotropia at near, not significantly reduced by the addition of spherical plus lenses. The AC/A ratio, determined with the gradient method, is normal or subnormal. Tonic convergence is suspected of causing the convergence excess in these patients. Nonaccommodative convergence excess must be distinguished from esotropia with a high AC/A ratio and from hypoaccommodative esotropia. In 24 patients treated with recession of both medial recti muscles with and without posterior fixation or by posterior fixation alone, the mean correction of esotropia was 7.4 prism diopters at distance and 17 prism diopters at near.

  7. Excessive crying in infants

    Directory of Open Access Journals (Sweden)

    Ricardo Halpern

    2016-06-01

    Full Text Available ABSTRACT Objective: Review the literature on excessive crying in young infants, also known as infantile colic, and its effects on family dynamics, its pathophysiology, and new treatment interventions. Data source: The literature review was carried out in the Medline, PsycINFO, LILACS, SciELO, and Cochrane Library databases, using the terms “excessive crying,” and “infantile colic,” as well technical books and technical reports on child development, selecting the most relevant articles on the subject, with emphasis on recent literature published in the last five years. Summary of the findings: Excessive crying is a common symptom in the first 3 months of life and leads to approximately 20% of pediatric consultations. Different prevalence rates of excessive crying have been reported, ranging from 14% to approximately 30% in infants up to 3 months of age. There is evidence linking excessive crying early in life with adaptive problems in the preschool period, as well as with early weaning, maternal anxiety and depression, attention deficit hyperactivity disorder, and other behavioral problems. Several pathophysiological mechanisms can explain these symptoms, such as circadian rhythm alterations, central nervous system immaturity, and alterations in the intestinal microbiota. Several treatment alternatives have been described, including behavioral measures, manipulation techniques, use of medication, and acupuncture, with controversial results and effectiveness. Conclusion: Excessive crying in the early months is a prevalent symptom; the pediatrician's attention is necessary to understand and adequately manage the problem and offer support to exhausted parents. The prescription of drugs of questionable action and with potential side effects is not a recommended treatment, except in extreme situations. The effectiveness of dietary treatments and use of probiotics still require confirmation. There is incomplete evidence regarding alternative

  8. Excess wind power

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2005-01-01

    Expansion of wind power is an important element in Danish climate change abatement policy. Starting from a high penetration of approximately 20%, however, momentary excess production will become an important issue in the future. Through energy systems analyses using the EnergyPLAN model and economic analyses, it is analysed how excess production is better utilised: through conversion into hydrogen or through expansion of export connections, thereby enabling sales. The results demonstrate that hydrogen production in particular is unviable under current costs, but transmission expansion could be profitable, particularly if transmission and dispatch companies operate under a feed-in tariff system.

  9. An accurate estimation algorithm for PRI based on the remainder of the cycle

    Institute of Scientific and Technical Information of China (English)

    苏焕程; 张君; 程良平; 程亦涵; 冷魁

    2016-01-01

    To address the deficiencies of traditional pulse repetition interval (PRI) de-interleaving algorithms in estimating the PRI, a new algorithm for precisely estimating the period of PRI-periodic signals is put forward. The algorithm first extracts the pulse sample sequence belonging to one radar from the raw pulse sequence, and then performs a precise period estimation based on the remainder of the cycle. Compared with traditional PRI estimation algorithms, this algorithm accurately estimates the PRI while removing the effect of TOA quantization error, and thus better satisfies the requirements of the de-interleaving process. Both theoretical derivation and simulation results verify the validity of the proposed algorithm.
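    The central point above, that averaging over a whole pulse train suppresses TOA quantization error in the PRI estimate, can be illustrated with a least-squares fit of TOA against pulse index. This shows only the averaging idea, not the remainder-of-the-cycle estimator itself; the PRI value and the clock resolution are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pulse train from one radar: true PRI of 1237.4 us, TOAs quantized to a 1 us clock.
true_pri = 1237.4
k = np.arange(200)
toa = np.round(5000.0 + k * true_pri)          # quantized TOAs in us

# Naive estimate: a single adjacent TOA difference is limited by the quantization step
naive = np.diff(toa)[0]

# Least-squares fit of TOA ~ TOA0 + k * PRI averages the quantization error
# over the whole sequence and is far more precise.
A = np.column_stack([np.ones_like(k, dtype=float), k])
(toa0_hat, pri_hat), *_ = np.linalg.lstsq(A, toa, rcond=None)

print(f"single-difference PRI: {naive:.1f} us, least-squares PRI: {pri_hat:.4f} us (true {true_pri} us)")
```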

  10. The otherness of sexuality: excess.

    Science.gov (United States)

    Stein, Ruth

    2008-03-01

    The present essay, the second of a series of three, aims at developing an experience-near account of sexuality by rehabilitating the idea of excess and its place in sexual experience. It is suggested that various types of excess, such as excess of excitation (Freud), the excess of the other (Laplanche), excess beyond symbolization and the excess of the forbidden object of desire (Leviticus; Lacan) work synergistically to constitute the compelling power of sexuality. In addition to these notions, further notions of excess touch on its transformative potential. Such notions address excess that shatters psychic structures and that is actively sought so as to enable new ones to evolve (Bersani). Work is quoted that regards excess as a way of dealing with our lonely, discontinuous being by using the "excessive" cosmic energy circulating through us to achieve continuity against death (Bataille). Two contemporary analytic thinkers are engaged who deal with the object-relational and intersubjective vicissitudes of excess.

  11. Excess flow shutoff valve

    Energy Technology Data Exchange (ETDEWEB)

    Kiffer, Micah S.; Tentarelli, Stephen Clyde

    2016-02-09

    Excess flow shutoff valve comprising a valve body, a valve plug, a partition, and an activation component where the valve plug, the partition, and activation component are disposed within the valve body. A suitable flow restriction is provided to create a pressure difference between the upstream end of the valve plug and the downstream end of the valve plug when fluid flows through the valve body. The pressure difference exceeds a target pressure difference needed to activate the activation component when fluid flow through the valve body is higher than a desired rate, and thereby closes the valve.

  12. On the excess energy of nonequilibrium plasma

    Energy Technology Data Exchange (ETDEWEB)

    Timofeev, A. V. [National Research Centre Kurchatov Institute, Institute of Hydrogen Power Engineering and Plasma Technologies (Russian Federation)

    2012-01-15

    The energy that can be released in plasma due to the onset of instability (the excess plasma energy) is estimated. Three potentially unstable plasma states are considered, namely, plasma with an anisotropic Maxwellian velocity distribution of plasma particles, plasma with a two-beam velocity distribution, and an inhomogeneous plasma in a magnetic field with a local Maxwellian velocity distribution. The excess energy can serve as a measure of the degree to which plasma is nonequilibrium. In particular, this quantity can be used to compare plasmas in different nonequilibrium states.

  13. The High Price of Excessive Alcohol Consumption

    Centers for Disease Control (CDC) Podcasts

    2011-10-17

    This podcast is based on the October 2011 release of a report estimating the economic cost of excessive drinking. Excessive alcohol consumption cost the U.S. $223.5 billion in 2006, or about $1.90 per drink. Over three-quarters (76%) of these costs were due to binge drinking, defined as consuming 4 or more alcoholic beverages per occasion for women or 5 or more drinks per occasion for men.  Created: 10/17/2011 by National Center for Chronic Disease Prevention and Health Promotion.   Date Released: 10/17/2011.

  14. Accurate estimation of TOA and calibration of synchronization error for multilateration

    Institute of Scientific and Technical Information of China (English)

    王洪; 金尔文; 刘昌忠; 吴宏刚

    2013-01-01

    A mathematical model of Mode S signals is built. The accuracy of time-of-arrival (TOA) measurement based on the pulse rising edge and optimal statistical estimation methods are discussed, and the way to realize the optimal estimation is introduced. A novel method is then proposed to measure the TOA of Mode S signals, in which the measurement is performed after decoding of the Mode S signal; its accuracy, which can be derived from pulse integration, is significantly improved compared with single-pulse measurement. The influence of sampling on TOA measurement is analyzed and a corresponding solution is introduced. Finally, synchronization in a multilateration system is discussed, and the accurate TOA estimates are used to calibrate synchronization errors among the receivers.

  15. Changing guards: time to move beyond body mass index for population monitoring of excess adiposity.

    Science.gov (United States)

    Tanamas, S K; Lean, M E J; Combet, E; Vlassopoulos, A; Zimmet, P Z; Peeters, A

    2016-07-01

    With the obesity epidemic, and the effects of aging populations, human phenotypes have changed over two generations, possibly more dramatically than in other species previously. As obesity is an important and growing hazard for population health, we recommend a systematic evaluation of the optimal measure(s) for population-level excess body fat. Ideal measure(s) for monitoring body composition and obesity should be simple, as accurate and sensitive as possible, and provide good categorization of related health risks. Combinations of anthropometric markers or predictive equations may facilitate better use of anthropometric data than single measures to estimate body composition for populations. Here, we provide new evidence that increasing proportions of aging populations are at high health-risk according to waist circumference, but not body mass index (BMI), so continued use of BMI as the principal population-level measure substantially underestimates the health-burden from excess adiposity.

  16. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
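    The simplest way to see what such a formula must reproduce is direct numerical integration of a sampled power pattern over the sphere, D = 4π U_max / ∫∫ U sinθ dθ dφ. The sketch below checks this against a Hertzian dipole (D = 1.5); it uses plain rectangular-rule integration on a dense grid rather than the spherical-wave-expansion formula derived in the paper.

```python
import numpy as np

def directivity_from_samples(U, theta, phi):
    """Directivity from the radiation intensity U(theta, phi) sampled on a
    regular theta x phi grid (theta in [0, pi], phi in [0, 2*pi))."""
    dtheta = theta[1] - theta[0]
    dphi = phi[1] - phi[0]
    # Total radiated power: integrate U over the sphere with the sin(theta) weight
    p_rad = np.sum(U * np.sin(theta)[:, None]) * dtheta * dphi
    return 4 * np.pi * U.max() / p_rad

# Check with a Hertzian dipole, U ~ sin^2(theta), whose directivity is 1.5 (1.76 dBi)
theta = np.linspace(0, np.pi, 181)
phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)
U = np.sin(theta)[:, None] ** 2 * np.ones(phi.size)[None, :]
print(directivity_from_samples(U, theta, phi))
```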

  17. Excess Molar Volumes and Viscosities of Binary Mixture of Diethyl Carbonate+Ethanol at Different Temperatures

    Institute of Scientific and Technical Information of China (English)

    MA Peisheng; LI Nannan

    2005-01-01

    The purpose of this work was to report excess molar volumes and dynamic viscosities of the binary mixture of diethyl carbonate (DEC) + ethanol. Densities and viscosities of the binary mixture of DEC + ethanol at temperatures of 293.15 K-343.15 K and atmospheric pressure were determined over the entire composition range. Densities of the binary mixture were measured using a vibrating U-shaped sample tube densimeter. Viscosities were determined using an Ubbelohde suspended-level viscometer. Densities are accurate to 1.0×10-5 g·cm-3, and viscosities are reproducible within ±0.003 mPa·s. From these data, excess molar volumes and deviations in viscosity were calculated. The positive excess molar volumes and negative deviations in viscosity for the DEC + ethanol system are due to strong specific interactions. All excess molar volumes and deviations in viscosity were fitted to the Redlich-Kister polynomial equation. The fitting parameters are presented, and the average deviations and standard deviations were also calculated. The correlation errors are very small, which shows that the correlated equations are valuable for estimating densities and viscosities of the binary mixture.
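    Fitting excess properties with the Redlich-Kister polynomial, V^E = x1*x2 * Σ_i A_i (x1 − x2)^i, reduces to a linear least-squares problem. The sketch below fits illustrative excess molar volume data; the numbers are placeholders, not the measured DEC + ethanol values.

```python
import numpy as np

def redlich_kister(x1, coeffs):
    """Excess property at mole fraction x1 for a binary mixture:
    V_E = x1*x2 * sum_i A_i * (x1 - x2)**i, with x2 = 1 - x1."""
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** i for i, a in enumerate(coeffs))

def fit_redlich_kister(x1, v_excess, order=3):
    """Least-squares fit of the Redlich-Kister coefficients A_0..A_order."""
    x2 = 1.0 - x1
    basis = np.column_stack([x1 * x2 * (x1 - x2) ** i for i in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, v_excess, rcond=None)
    return coeffs

# Illustrative positive excess molar volumes (cm3/mol) across the composition range
x1 = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
vE = np.array([0.05, 0.09, 0.12, 0.14, 0.15, 0.14, 0.11, 0.08, 0.04])

A = fit_redlich_kister(x1, vE)
residuals = vE - redlich_kister(x1, A)
print("coefficients:", np.round(A, 4), "std deviation:", round(residuals.std(ddof=A.size), 4))
```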

  18. Excess attenuation of an acoustic beam by turbulence.

    Science.gov (United States)

    Pan, Naixian

    2003-12-01

    A theory based on the concept of a spatial sinusoidal diffraction grating is presented for the estimation of the excess attenuation in an acoustic beam. The equation for the excess attenuation coefficient shows that the excess attenuation of an acoustic beam depends not only on the turbulence but also on application parameters such as the beam width, the beam orientation, and whether propagation is forward or backscatter. Analysis shows that the excess attenuation appears to have a cube-root frequency dependence. The expression for the excess attenuation coefficient has been used in the estimation of the temperature structure coefficient, C(T)2, in sodar sounding. Correcting C(T)2 values for excess attenuation greatly reduces their errors. Published profiles of the temperature structure coefficient and the velocity structure coefficient in convective conditions are used to test our theory, which is compared with the theory by Brown and Clifford. Excess attenuation due to scattering from turbulence and atmospheric absorption are both taken into account in sodar data processing for deducing the contribution of the lower atmosphere to seeing, which is the sharpness of a telescope image determined by the degree of turbulence in the Earth's atmosphere. The comparison between the contribution of the lowest 300-m layer to seeing and that of the whole atmosphere supports the reasonableness of our estimation of excess attenuation.

  19. An accurate and fast pose estimation algorithm for the on-board camera of a mobile robot

    Institute of Scientific and Technical Information of China (English)

    唐庆顺; 吴春富; 李国栋; 王小龙; 周风余

    2015-01-01

    An accurate and fast pose estimation problem for the on-board camera of a mobile robot is investigated. First, the special properties of the pose of the mobile robot's on-board camera are analyzed. Second, an auxiliary rotation matrix is constructed using the on-board camera's equivalent rotation axis; it is used to turn the initial essential matrix and homography matrix into a simplified form that can be decomposed through elementary mathematical operations. Finally, simulation experiments are designed to verify the algorithm's rapidity, accuracy and robustness. The experimental results show that, compared to traditional algorithms, the proposed algorithm achieves higher accuracy and faster calculation, together with robustness to disturbances of the on-board camera's equivalent rotation axis. In addition, the number of possible solutions is reduced by one half, and the unique rotation angle of the mobile robot can be determined except when the 3D planar scene structure is perpendicular to the ground, which greatly simplifies controlling the pose of the mobile robot.

  20. Molar heat capacity and molar excess enthalpy measurements in aqueous amine solutions

    Science.gov (United States)

    Poozesh, Saeed

    Experimental measurements of molar heat capacity and molar excess enthalpy for 1, 4-dimethyl piperazine (1, 4-DMPZ), 1-(2-hydroxyethyl) piperazine (1, 2-HEPZ), 1-methyl piperazine (1-MPZ), 3-morpholinopropyl amine (3-MOPA), and 4-(2-hydroxy ethyl) morpholine (4, 2-HEMO) aqueous solutions were carried out in a C80 heat flow calorimeter over a range of temperatures from (298.15 to 353.15) K and for the entire range of mole fractions. The estimated uncertainty in the measured values of the molar heat capacity and molar excess enthalpy was found to be ±2%. Among the five amines studied, 3-MOPA had the highest values of the molar heat capacity and 1-MPZ the lowest. Values of molar heat capacities of the amines were dominated by the -CH2, -N, -OH, -O, and -NH2 groups and increased with increasing temperature, while the contributions of the -NH and -CH3 groups decreased with increasing temperature for these cyclic amines. Molar excess heat capacities were calculated from the measured molar heat capacities and were correlated as a function of the mole fractions employing the Redlich-Kister equation. The molar excess enthalpy values were also correlated as a function of the mole fractions employing the Redlich-Kister equation. Molar enthalpies at infinite dilution were derived. Molar excess enthalpy values were modeled using the solution theory models NRTL (Non Random Two Liquid) and UNIQUAC (UNIversal QUAsi Chemical) and the modified UNIFAC (UNIversal quasi chemical Functional group Activity Coefficients - Dortmund). The modified UNIFAC was found to be the most accurate and reliable model for the representation and prediction of the molar excess enthalpy values. Among the five amines, the 1-MPZ + water system exhibited the highest values of molar excess enthalpy on the negative side. This study confirmed the conclusion made by Maham et al. (71) that the -CH3 group contributed to higher molar excess enthalpies. The negative excess enthalpies were reduced due to the contribution of

  1. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    Joseph DeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students not only speak more accurately, but also more fluently. That technique is dictations.

  2. Does Excessive Pronation Cause Pain?

    DEFF Research Database (Denmark)

    Mølgaard, Carsten Møller; Olesen Gammelgaard, Christian; Nielsen, R. G.;

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient, it can cause excessive pronation of the foot. The current treatment consists of antipronation shoes or insoles, most recently studied by Kulce DG et al. (2007). So far there have been no randomized controlled studies, and the effect of this treatment has not been documented. Therefore the authors wanted to measure the effect of treatment with insoles. Some of the excessive

  3. Widespread Excess Ice in Arcadia Planitia, Mars

    CERN Document Server

    Bramson, Ali M; Putzig, Nathaniel E; Sutton, Sarah; Plaut, Jeffrey J; Brothers, T Charles; Holt, John W

    2015-01-01

    The distribution of subsurface water ice on Mars is a key constraint on past climate, while the volumetric concentration of buried ice (pore-filling versus excess) provides information about the process that led to its deposition. We investigate the subsurface of Arcadia Planitia by measuring the depth of terraces in simple impact craters and mapping a widespread subsurface reflection in radar sounding data. Assuming that the contrast in material strengths responsible for the terracing is the same dielectric interface that causes the radar reflection, we can combine these data to estimate the dielectric constant of the overlying material. We compare these results to a three-component dielectric mixing model to constrain composition. Our results indicate a widespread, decameters-thick layer that is excess water ice ~10^4 km^3 in volume. The accumulation and long-term preservation of this ice is a challenge for current Martian climate models.
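    The key step described above, converting a two-way radar delay and an independently known interface depth into a real dielectric constant of the overlying layer, is a one-line calculation; a sketch with illustrative numbers (not the Arcadia Planitia measurements) is below.

```python
# If a crater terrace gives the depth d of a subsurface interface and the radargram
# gives the two-way delay dt of the matching reflector, the real dielectric constant
# of the overlying layer follows from the wave speed in the medium.
C = 299_792_458.0          # speed of light in vacuum, m/s

def dielectric_constant(two_way_delay_s, depth_m):
    """Real permittivity of the layer between the surface and the reflector."""
    v = 2.0 * depth_m / two_way_delay_s      # wave speed in the layer
    return (C / v) ** 2

print(round(dielectric_constant(two_way_delay_s=6.0e-7, depth_m=50.0), 2))
# Values near ~3 are consistent with excess water ice; markedly higher values
# would point to ice-poor regolith.
```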

  4. Does Excessive Pronation Cause Pain?

    DEFF Research Database (Denmark)

    Olesen, Christian Gammelgaard; Nielsen, RG; Rathleff, M;

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consists of ...... is in pain but the effect of this treatment has not been documented. Therefore the authors wanted to investigate if it was possible to measure a change in foot posture after a given treatment.......Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consists...

  5. Does excessive pronation cause pain?

    DEFF Research Database (Denmark)

    Olesen, Christian Gammelgaard; Nielsen, R.G.; Rathleff, M.;

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequate supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consist of ...... pronation patients recieve antipronation training often if the patient is in pain but wanted to investigate if it was possible to measure a change in foot posture after af given treatment.......Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequate supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consist...

  6. Excess mortality following hip fracture

    DEFF Research Database (Denmark)

    Abrahamsen, B; van Staa, T; Ariely, R;

    2009-01-01

    Summary This systematic literature review has shown that patients experiencing hip fracture after low-impact trauma are at considerable excess risk for death compared with nonhip fracture/community control populations. The increased mortality risk may persist for several years thereafter, highlig...

  7. Excessive masturbation after epilepsy surgery.

    Science.gov (United States)

    Ozmen, Mine; Erdogan, Ayten; Duvenci, Sirin; Ozyurt, Emin; Ozkara, Cigdem

    2004-02-01

    Sexual behavior changes as well as depression, anxiety, and organic mood/personality disorders have been reported in temporal lobe epilepsy (TLE) patients before and after epilepsy surgery. The authors describe a 14-year-old girl with symptoms of excessive masturbation in inappropriate places, social withdrawal, irritability, aggressive behavior, and crying spells after selective amygdalohippocampectomy for medically intractable TLE with hippocampal sclerosis. Since the family members felt extremely embarrassed, they were upset and angry with the patient which, in turn, increased her depressive symptoms. Both her excessive masturbation behavior and depressive symptoms remitted within 2 months of psychoeducative intervention and treatment with citalopram 20mg/day. Excessive masturbation is proposed to be related to the psychosocial changes due to seizure-free status after surgery as well as other possible mechanisms such as Kluver-Bucy syndrome features and neurophysiologic changes associated with the cessation of epileptic discharges. This case demonstrates that psychiatric problems and sexual changes encountered after epilepsy surgery are possibly multifactorial and in adolescence hypersexuality may be manifested as excessive masturbation behavior.

  8. Excess Early Mortality in Schizophrenia

    DEFF Research Database (Denmark)

    Laursen, Thomas Munk; Nordentoft, Merete; Mortensen, Preben Bo

    2014-01-01

    Schizophrenia is often referred to as one of the most severe mental disorders, primarily because of the very high mortality rates of those with the disorder. This article reviews the literature on excess early mortality in persons with schizophrenia and suggests reasons for the high mortality as ...

  9. Determination of Enantiomeric Excess of Glutamic Acids by Lab-made Capillary Array Electrophoresis

    Institute of Scientific and Technical Information of China (English)

    Jun WANG; Kai Ying LIU; Li WANG; Ji Ling BAI

    2006-01-01

    Simulated enantiomeric excess of glutamic acid was determined by a lab-made sixteen-channel capillary array electrophoresis with confocal fluorescent rotary scanner. The experimental results indicated that the capillary array electrophoresis method can accurately determine the enantiomeric excess of glutamic acid and can be used for high-throughput screening system for combinatorial asymmetric catalysis.
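
    Enantiomeric excess itself is simple arithmetic once the two enantiomer signals are resolved: ee = (major - minor) / (major + minor) x 100%, usually computed from the two peak areas in the electropherogram. A minimal sketch with hypothetical peak areas (not values from the paper):

        # Hedged sketch: enantiomeric excess from two resolved peak areas.
        # The areas below are invented for illustration.
        def enantiomeric_excess(area_major, area_minor):
            """ee (%) = (major - minor) / (major + minor) * 100."""
            return (area_major - area_minor) / (area_major + area_minor) * 100.0

        print(enantiomeric_excess(area_major=7.5, area_minor=2.5))  # 50.0 (% ee)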

  10. Severe rhabdomyolysis after excessive bodybuilding.

    Science.gov (United States)

    Finsterer, J; Zuntner, G; Fuchs, M; Weinberger, A

    2007-12-01

    A 46-year-old man performed excessive physical exertion, 4-6 h daily over 5 days, in a bodybuilding studio. He had not practiced any sport prior to this training and denied the use of any aiding substances. Despite muscle aching already after 1 day, he continued the exercises. After the last day, he noticed tiredness and cessation of urine production. Two days after discontinuation of the training, a Herpes simplex infection occurred. Because of acute renal failure, he required hemodialysis. Tendon reflexes were absent, and creatine kinase (CK) values reached 208 274 U/L (normal: <170 U/L). After 2 weeks, CK had almost normalized and, after 4 weeks, hemodialysis was discontinued. Excessive muscle training may result in severe, hemodialysis-dependent rhabdomyolysis. Triggering factors may be a prior low fitness level, viral infection, or subclinical metabolic myopathy.

  11. Diphoton Excess through Dark Mediators

    CERN Document Server

    Chen, Chien-Yi; Pospelov, Maxim; Zhong, Yi-Ming

    2016-01-01

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around the invariant mass of 750 GeV. We investigate a possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated $e^+e^-$ pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: $gg \to S \to A'A' \to (e^+e^-)(e^+e^-)$ and $q\bar q \to Z' \to s a \to (e^+e^-)(e^+e^-)$, where at the first step a heavy scalar, $S$, or vector, $Z'$, resonance is produced that decays to light metastable vectors, $A'$, or (pseudo-)scalars, $s$ and $a$. Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heav...

  12. The Cosmic Ray Electron Excess

    Science.gov (United States)

    Chang, J.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Christl, M.; Ganel, O.; Guzik, T. G.; Isbert, J.; Kim, K. C.; Kuznetsov, E. N.; Panasyuk, M. I.; Panov, A. D.; Schmidt, W. K. H.; Seo, E. S.; Sokolskaya, N. V.; Watts, J. W.; Wefel, J. P.; Wu, J.; Zatsepin, V. I.

    2008-01-01

    This slide presentation reviews the possible sources for the apparent excess of cosmic ray electrons. The presentation reviews the Advanced Thin Ionization Calorimeter (ATIC) instrument, its various parts, how cosmic ray electrons are measured, and shows graphs that review the results of the ATIC measurements. A review of cosmic ray electron models is presented, along with the source candidates. Scenarios for the excess are reviewed: supernova remnants (SNRs), pulsar wind nebulae, or microquasars. Each of these has some problem that mitigates the argument. The last possibility discussed is dark matter. The Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) mission is to search for evidence of annihilations of dark matter particles, to search for anti-nuclei, to test cosmic-ray propagation models, and to measure electron and positron spectra. There are slides explaining the results of PAMELA and how to compare these with those of the ATIC experiment. Dark matter annihilation is then reviewed, considering two candidate types of dark matter particles, neutralinos and Kaluza-Klein (KK) particles, which are next explained. The future astrophysical measurements, those from the GLAST LAT, the Alpha Magnetic Spectrometer (AMS), and HEPCAT, are reviewed in light of assisting in finding an explanation for the observed excess. Also the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) could help by revealing if there are extra dimensions.

  13. 10 CFR 904.9 - Excess capacity.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess capacity. 904.9 Section 904.9 Energy DEPARTMENT OF... Marketing § 904.9 Excess capacity. (a) If the Uprating Program results in Excess Capacity, Western shall be entitled to such Excess Capacity to integrate the operation of the Boulder City Area Projects and...

  14. 12 CFR 925.23 - Excess stock.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Excess stock. 925.23 Section 925.23 Banks and... BANKS Stock Requirements § 925.23 Excess stock. (a) Sale of excess stock. Subject to the restriction in paragraph (b) of this section, a member may purchase excess stock as long as the purchase is approved by...

  15. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, in which the air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many uncertain errors, such as the precision of the pressure transducer, fluctuation of back-pressure, generation of bubbles at the front of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures at the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system has been developed with a quartz oscillation type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
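
    The underlying principle is standard bubbler hydrostatics: the differential pressure between two submerged dip-tubes of known vertical separation gives the solution density, and the pressure between the deepest tube and the tank headspace gives the liquid height above that tube; the volume then follows from a tank calibration curve. A minimal sketch under those assumptions (the excess-pressure corrections discussed in the report are not modeled, and all numbers are placeholders):

        # Hedged sketch of dip-tube (bubbler) density and level measurement.
        # dp_density: differential pressure between two submerged tubes (Pa)
        # dp_level:   differential pressure between the deepest tube and headspace (Pa)
        # tube_separation: vertical distance between the submerged tube tips (m)
        G = 9.80665  # standard gravity, m/s^2

        def density_from_dp(dp_density, tube_separation):
            return dp_density / (G * tube_separation)

        def level_from_dp(dp_level, density):
            return dp_level / (G * density)

        rho = density_from_dp(dp_density=4_903.0, tube_separation=0.40)  # ~1250 kg/m^3
        h = level_from_dp(dp_level=18_388.0, density=rho)                # ~1.50 m of solution
        print(f"density ~ {rho:.0f} kg/m^3, level ~ {h:.2f} m")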

  16. Interpretando correctamente en salud pública estimaciones puntuales, intervalos de confianza y contrastes de hipótesis Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

    Directory of Open Access Journals (Sweden)

    Manuel G Scotto

    2003-12-01

    Full Text Available This essay seeks to clarify some concepts commonly used in public health research that are frequently interpreted incorrectly, among them point estimation, confidence intervals, and hypothesis tests. By drawing a parallel between these three concepts, we can see their most important differences in interpretation, from both the classical and the Bayesian point of view. This essay reviews some statistical concepts frequently used in public health research that are commonly misinterpreted. These include point estimates, confidence intervals, and hypothesis tests. By comparing them using the classical and the Bayesian perspectives, their interpretation becomes clearer.

  17. Excessive infant crying: The impact of varying definitions

    NARCIS (Netherlands)

    Reijneveld, S.A.; Brugman, E.; Hirasing, R.A.

    2001-01-01

    Objective. To assess the impact of varying definitions of excessive crying and infantile colic on prevalence estimates and to assess to what extent these definitions comprise the same children. Methods. Parents of 3345 infants aged 1, 3, and 6 months (response: 96.5%) were interviewed on the crying

  18. Excessive infant crying : The impact of varying definitions

    NARCIS (Netherlands)

    Reijneveld, SA; Brugman, E; Hirasing, RA

    2001-01-01

    Objective. To assess the impact of varying definitions of excessive crying and infantile colic on prevalence estimates and to assess to what extent these definitions comprise the same children. Methods. Parents of 3345 infants aged 1, 3, and 6 months (response: 96.5%) were interviewed on the crying

  19. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near β_c for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
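
    The generic version of the linear-fitting step mentioned here is a log-log fit of the susceptibility against the distance from the critical point, chi ~ A*(beta_c - beta)^(-gamma). A minimal sketch on synthetic data (beta_c and the prefactor are arbitrary; the talk's high-accuracy procedure is far more involved):

        # Hedged sketch: leading critical exponent from a log-log linear fit.
        # Synthetic, noise-free data with an arbitrary beta_c; not the talk's calculation.
        import numpy as np

        beta_c, gamma_true = 1.0, 1.2991
        beta = beta_c - np.logspace(-6, -2, 40)
        chi = 2.3 * (beta_c - beta) ** (-gamma_true)

        slope, _ = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
        print(f"fitted gamma ~ {-slope:.4f}")  # recovers ~1.2991 on clean data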

  20. Excess electron transport in cryoobjects

    CERN Document Server

    Eshchenko, D G; Brewer, J H; Cottrell, S P; Cox, S F J

    2003-01-01

    Experimental results on excess electron transport in solid and liquid phases of Ne, Ar, and solid N_2-Ar mixture are presented and compared with those for He. The muon spin relaxation technique in frequently switching electric fields was used to study the phenomenon of delayed muonium formation: excess electrons liberated in the mu^+ ionization track converge upon the positive muons and form Mu (mu^+ e^-) atoms. This process is shown to be crucially dependent upon the electron's interaction with its environment (i.e., whether it occupies the conduction band or becomes localized in a bubble of tens of angstroms in radius) and upon its mobility in these states. The characteristic lengths involved are 10^-6-10^-4 cm, and the characteristic times range from nanoseconds to tens of microseconds. Such a microscopic length scale sometimes enables the electron to spend its entire free lifetime in a state which may not be detected by conventional macroscopic techniques. The electron transport proc...

  1. Armodafinil for excessive daytime sleepiness.

    Science.gov (United States)

    Nishino, Seiji; Okuro, Masashi

    2008-06-01

    Armodafinil is the (R)-enantiomer of the wake-promoting compound modafinil (racemic), with a considerably longer half-life of 10-15 hours. Armodafinil (developed by Cephalon, Frazer, PA, USA) was approved in June 2007 for the treatment of excessive sleepiness associated with narcolepsy, obstructive sleep apnea syndrome and shift work disorder, and the indications are the same as those for modafinil. Like modafinil, the mechanisms of action of armodafinil are not fully characterized and are under debate. Clinical trials in these sleep disorders demonstrated an enhanced efficacy for wake promotion (wake sustained for a longer time period using doses lower than those of modafinil). The safety profile is consistent with that of modafinil, and armodafinil is well tolerated by the patients. Like modafinil, armodafinil is classified as a non-narcotic Schedule IV compound. Many patients with excessive sleepiness may prefer the longer duration of effect and may have better compliance (with low doses) with armodafinil. The commercial challenge to armodafinil may come from generic modafinil, which may become available in 2012, as well as from classical amphetamine and amphetamine-like compounds (for the treatment of narcolepsy).

  2. Improved manometric setup for the accurate determination of supercritical carbon dioxide sorption

    NARCIS (Netherlands)

    Van Hemert, P.; Bruining, H.; Rudolph, E.S.J.; Wolf, K.H.A.A.; Maas, J.G.

    2009-01-01

    An improved version of the manometric apparatus and its procedures for measuring excess sorption of supercritical carbon dioxide are presented in detail with a comprehensive error analysis. An improved manometric apparatus is necessary for accurate excess sorption measurements with supercritical car

  3. Excess water dynamics in hydrotalcite: QENS study

    Indian Academy of Sciences (India)

    S Mitra; A Pramanik; D Chakrabarty; R Mukhopadhyay

    2004-08-01

    Results of the quasi-elastic neutron scattering (QENS) measurements on the dynamics of excess water in hydrotalcite sample with varied content of excess water are reported. Translational motion of excess water can be best described by random translational jump diffusion model. The observed increase in translational diffusivity with increase in the amount of excess water is attributed to the change in binding of the water molecules to the host layer.

  4. 7 CFR 985.56 - Excess oil.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified...

  5. 10 CFR 904.10 - Excess energy.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  6. A Discussion on Mean Excess Plots

    CERN Document Server

    Ghosh, Souvik

    2009-01-01

    A widely used tool in the study of risk, insurance and extreme values is the mean excess plot. One use is for validating a Generalized Pareto model for the excess distribution. This paper investigates some theoretical and practical aspects of the use of the mean excess plot.
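
    The quantity behind the plot is the mean excess function e(u) = E[X - u | X > u]; the empirical plot evaluates the average exceedance over a grid of thresholds, and approximate linearity in u is what suggests a Generalized Pareto model for the excesses. A minimal sketch on a simulated heavy-tailed sample:

        # Hedged sketch: empirical mean excess plot values on toy data.
        import numpy as np

        def mean_excess(data, thresholds):
            """Return e(u) = mean(x - u | x > u) for each threshold u."""
            data = np.asarray(data, dtype=float)
            return np.array([
                (data[data > u] - u).mean() if (data > u).any() else np.nan
                for u in thresholds
            ])

        rng = np.random.default_rng(0)
        x = rng.pareto(2.5, size=5_000)                 # heavy-tailed toy sample
        u = np.quantile(x, np.linspace(0.5, 0.98, 25))  # threshold grid
        print(np.round(mean_excess(x, u), 3))           # roughly linear in u for GPD-like tails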

  7. Phytoextraction of excess soil phosphorus

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Nilesh C. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Starnes, Daniel L. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Sahi, Shivendra V. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States)]. E-mail: shiv.sahi@wku.edu

    2007-03-15

    In the search for a suitable plant to be used in P phytoremediation, several species belonging to legume, vegetable and herb crops were grown in P-enriched soils, and screened for P accumulation potentials. A large variation in P concentrations of different plant species was observed. Some vegetable species such as cucumber (Cucumis sativus) and yellow squash (Cucurbita pepo var. melopepo) were identified as potential P accumulators with >1% (dry weight) P in their shoots. These plants also displayed a satisfactory biomass accumulation while growing on a high concentration of soil P. The elevated activities of phosphomonoesterase and phytase were observed when plants were grown in P-enriched soils, this possibly contributing to high P acquisition in these species. Sunflower plants also demonstrated an increased shoot P accumulation. This study shows that the phytoextraction of phosphorus can be effective using appropriate plant species. - Crop plants such as cucumber, squash and sunflower accumulate phosphorus and thus can be used in the phytoextraction of excess phosphorus from soils.

  8. Diphoton Excess and Running Couplings

    CERN Document Server

    Bae, Kyu Jung; Hamaguchi, Koichi; Moroi, Takeo

    2016-01-01

    The recently observed diphoton excess at the LHC may suggest the existence of a singlet (pseudo-) scalar particle with a mass of 750 GeV which couples to gluons and photons. Assuming that the couplings to gluons and photons originate from loops of fermions and/or scalars charged under the Standard Model gauge groups, we show that there is a model-independent upper bound on the cross section $\sigma(pp\to S\to \gamma\gamma)$ as a function of the cutoff scale $\Lambda$ and the masses of the fermions and scalars in the loop. Such a bound comes from the fact that the contribution of each particle to the diphoton event amplitude is proportional to its contribution to the one-loop $\beta$ functions of the gauge couplings. We also investigate the perturbativity of running Yukawa couplings in models with fermion loops, and show the upper bounds on $\sigma(pp\to S\to \gamma\gamma)$ for explicit models.

  9. Sparse component separation for accurate CMB map estimation

    CERN Document Server

    Bobin, J; Sureau, F; Basak, S

    2012-01-01

    The Cosmological Microwave Background (CMB) is of premier importance for cosmologists to study the birth of our universe. Unfortunately, most CMB experiments such as COBE, WMAP or Planck do not provide a direct measure of the cosmological signal; the CMB is mixed up with galactic foregrounds and point sources. For the sake of scientific exploitation, measuring the CMB requires extracting several different astrophysical components (CMB, Sunyaev-Zel'dovich clusters, galactic dust) from multi-wavelength observations. Mathematically speaking, the problem of disentangling the CMB map from the galactic foregrounds amounts to a component or source separation problem. In the field of CMB studies, a very large range of source separation methods have been applied, which all differ from each other in the way they model the data and the criteria they rely on to separate components. Two main difficulties are i) the instrument's beam varies across frequencies and ii) the emission laws of most astrophysical components vary a...

  10. 31 CFR 205.24 - How are accurate estimates maintained?

    Science.gov (United States)

    2010-07-01

    ...) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE RULES AND PROCEDURES FOR EFFICIENT FEDERAL-STATE FUNDS TRANSFERS Rules Applicable to Federal Assistance Programs Included in a... justify in writing that it is not feasible to use a more efficient basis for determining the amount...

  11. Accurate estimation of the elastic properties of porous fibers

    Energy Technology Data Exchange (ETDEWEB)

    Thissell, W.R.; Zurek, A.K.; Addessio, F.

    1997-05-01

    A procedure is described to calculate polycrystalline anisotropic fiber elastic properties with cylindrical symmetry and porosity. It uses a preferred orientation model (the Tome ellipsoidal self-consistent model) for the determination of anisotropic elastic properties for the case of highly oriented carbon fibers. The model predictions, corrected for porosity, are compared to back-calculated fiber elastic properties of an IM6/3501-6 unidirectional composite whose elastic properties have been determined via resonant ultrasound spectroscopy. The Halpin-Tsai equations used to back-calculate fiber elastic properties are found to be inappropriate for anisotropic composite constituents. Modifications are proposed to the Halpin-Tsai equations to expand their applicability to anisotropic reinforcement materials.

  12. Androgen excess: Investigations and management.

    Science.gov (United States)

    Lizneva, Daria; Gavrilova-Jordan, Larisa; Walker, Walidah; Azziz, Ricardo

    2016-11-01

    Androgen excess (AE) is a key feature of polycystic ovary syndrome (PCOS) and results in, or contributes to, the clinical phenotype of these patients. Although AE will contribute to the ovulatory and menstrual dysfunction of these patients, the most recognizable sign of AE includes hirsutism, acne, and androgenic alopecia or female pattern hair loss (FPHL). Evaluation includes not only scoring facial and body terminal hair growth using the modified Ferriman-Gallwey method but also recording and possibly scoring acne and alopecia. Moreover, assessment of biochemical hyperandrogenism is necessary, particularly in patients with unclear or absent hirsutism, and will include assessing total and free testosterone (T), and possibly dehydroepiandrosterone sulfate (DHEAS) and androstenedione, although these latter contribute limitedly to the diagnosis. Assessment of T requires use of the highest quality assays available, generally radioimmunoassays with extraction and chromatography or mass spectrometry preceded by liquid or gas chromatography. Management of clinical hyperandrogenism involves primarily either androgen suppression, with a hormonal combination contraceptive, or androgen blockade, as with an androgen receptor blocker or a 5α-reductase inhibitor, or a combination of the two. Medical treatment should be combined with cosmetic treatment including topical eflornithine hydrochloride and short-term (shaving, chemical depilation, plucking, threading, waxing, and bleaching) and long-term (electrolysis, laser therapy, and intense pulse light therapy) cosmetic treatments. Generally, acne responds to therapy relatively rapidly, whereas hirsutism is slower to respond, with improvements observed as early as 3 months, but routinely only after 6 or 8 months of therapy. Finally, FPHL is the slowest to respond to therapy, if it will at all, and it may take 12 to 18 months of therapy for an observable response.

  13. High Foreign Exchange Reserves Fuel Excess Liquidity

    Institute of Scientific and Technical Information of China (English)

    唐双宁

    2008-01-01

    This article views China’s excess liquidity problem in the global context. It suggests that market mechanisms, cooperation between all parties involved, and liquidity diversion, be resorted to in order to tackle the problem of excessive liquidity. This article also points out that the top priority is to solve the major problems, such as the current account surplus, the sources of excessive liquidity, the shortage of capital in rural areas, and the cause of capital distribution imbalance.

  14. Factors associated with excessive polypharmacy in older people

    OpenAIRE

    Walckiers, Denise; Van Der Heyden, Johan; Tafforeau, Jean

    2015-01-01

    Background Older people are a growing population. They live longer, but often have multiple chronic diseases. As a consequence, they are taking many different kinds of medicines, while their vulnerability to pharmaceutical products is increased. The objective of this study is to describe the medicine utilization pattern in people aged 65 years and older in Belgium, and to estimate the prevalence and the determinants of excessive polypharmacy. Methods Data were used from the Belgian Health Inte...

  15. Initial report on characterization of excess highly enriched uranium

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    DOE's Office of Fissile Materials Disposition assigned to this Y-12 division the task of preparing a report on the 174.4 metric tons of excess highly enriched U. Characterization included identification by category, gathering existing data (assay), defining the likely needed processing steps for prepping for transfer to a blending site, and developing a range of preliminary cost estimates for those steps. Focus is on making commercial reactor fuel as a final disposition path.

  16. Damages and Expected Deaths Due to Excess NOx Emissions from 2009 to 2015 Volkswagen Diesel Vehicles.

    Science.gov (United States)

    Holland, Stephen P; Mansur, Erin T; Muller, Nicholas Z; Yates, Andrew J

    2016-02-01

    We estimate the damages and expected deaths in the United States due to excess emissions of NOx from 2009 to 2015 Volkswagen diesel vehicles. Using data on vehicle registrations and a model of pollution transport and valuation, we estimate excess damages of $430 million and 46 excess expected deaths. Accounting for uncertainty about emissions gives a range for damages from $350 million to $500 million, and a range for excess expected deaths from 40 to 52. Our estimates incorporate significant local heterogeneity: for example, Minneapolis has the highest damages despite having fewer noncompliant vehicles than 13 other cities. Our estimated damages greatly exceed possible benefits from reduced CO2 emissions due to increased fuel economy.
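
    The bookkeeping behind such estimates is an aggregation of location-specific excess emissions against location-specific marginal damages and mortality risks per ton; the heterogeneity noted above arises because the per-ton values differ strongly by county. A minimal sketch of that aggregation step only, with invented placeholder numbers (the paper's transport and valuation model is far more detailed):

        # Hedged sketch: aggregate damages and expected deaths from county-level
        # excess NOx tonnage and per-ton damage/mortality factors (all invented).
        counties = {
            # name: (excess NOx tons, $ damage per ton, expected deaths per ton)
            "A": (120.0, 9_000.0, 0.0010),
            "B": (340.0, 15_000.0, 0.0016),
        }

        damages = sum(tons * dmg for tons, dmg, _ in counties.values())
        deaths = sum(tons * mort for tons, _, mort in counties.values())
        print(f"damages ~ ${damages:,.0f}, expected deaths ~ {deaths:.2f}")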

  17. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
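
    Once a correlation matrix has been estimated, the numerical core of the analysis described here is an ordinary principal components analysis: eigendecompose the matrix and read the dominant correlation modes off the leading eigenvectors. A minimal sketch on a small made-up matrix (not the paper's maximum-likelihood estimator):

        # Hedged sketch: PCA of a symmetric correlation matrix via eigendecomposition.
        import numpy as np

        corr = np.array([
            [1.0, 0.8, 0.1],
            [0.8, 1.0, 0.2],
            [0.1, 0.2, 1.0],
        ])

        eigvals, eigvecs = np.linalg.eigh(corr)    # ascending eigenvalues
        order = np.argsort(eigvals)[::-1]          # sort modes by variance explained
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        share = 100.0 * eigvals[0] / eigvals.sum()
        print(f"leading mode explains {share:.1f}% of the variance")
        print("leading eigenvector:", np.round(eigvecs[:, 0], 3))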

  18. Excessive libido in a woman with rabies.

    OpenAIRE

    Dutta, J. K.

    1996-01-01

    Rabies is endemic in India in both wildlife and humans. Human rabies kills 25,000 to 30,000 persons every year. Several types of sexual manifestations including excessive libido may develop in cases of human rabies. A laboratory proven case of rabies in an Indian woman who manifested excessive libido is presented below. She later developed hydrophobia and died.

  19. Bladder calculus presenting as excessive masturbation.

    Science.gov (United States)

    De Alwis, A C D; Senaratne, A M R D; De Silva, S M P D; Rodrigo, V S D

    2006-09-01

    Masturbation in childhood is a normal behaviour which most commonly begins at 2 months of age, and peaks at 4 years and in adolescence. However excessive masturbation causes anxiety in parents. We describe a boy with a bladder calculus presenting as excessive masturbation.

  20. The excessively crying infant : etiology and treatment

    NARCIS (Netherlands)

    Akhnikh, S.; Engelberts, A.C.; Sleuwen, B.E. van; Hoir, M.P. L’; Benninga, M.A.

    2014-01-01

    Excessive crying, often described as infantile colic, is the cause of 10% to 20% of all early pediatrician visits of infants aged 2 weeks to 3 months. Although usually benign and self-limiting, excessive crying is associated with parental exhaustion and stress. However, an underlying organic cause i

  1. 11 CFR 9012.1 - Excessive expenses.

    Science.gov (United States)

    2010-01-01

    ... FINANCING UNAUTHORIZED EXPENDITURES AND CONTRIBUTIONS § 9012.1 Excessive expenses. (a) It shall be unlawful... expenses in excess of the aggregate payments to which the eligible candidates of a major party are entitled under 11 CFR part 9004 with respect to such election. (b) It shall be unlawful for the...

  2. Triboson interpretations of the ATLAS diboson excess

    CERN Document Server

    Aguilar-Saavedra, J A

    2015-01-01

    The ATLAS excess in fat jet pair production is kinematically compatible with the decay of a heavy resonance into two gauge bosons plus an extra particle. This possibility would explain the absence of such a localised excess in the analogous CMS analysis of fat dijet final states, as well as the negative results of diboson resonance searches in the semi-leptonic decay modes.

  3. Efficient and accurate fragmentation methods.

    Science.gov (United States)

    Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S

    2014-09-16

    Conspectus Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum

  4. Excessive crying in infants with regulatory disorders.

    Science.gov (United States)

    Maldonado-Duran, M; Sauceda-Garcia, J M

    1996-01-01

    The authors point out a correlation between regulatory disorders in infants and the problem of excessive crying. The literature describes other behavioral problems involving excessive crying in very young children, but with little emphasis on this association. The recognition and diagnosis of regulatory disorders in infants who cry excessively can help practitioners design appropriate treatment interventions. Understanding these conditions can also help parents tailor their caretaking style, so that they provide appropriate soothing and stimulation to their child. In so doing, they will be better able to develop and preserve a satisfactory parent-child relationship, as well as to maintain their own sense of competence and self-esteem as parents.

  5. Same-Sign Dilepton Excesses and Light Top Squarks

    CERN Document Server

    Huang, Peisi; Low, Ian; Wagner, Carlos E M

    2015-01-01

    Run 1 data of the Large Hadron Collider (LHC) contain an excess of events in the same-sign dilepton channel with b-jets and missing transverse energy (MET), observed by five separate analyses from the ATLAS and CMS collaborations. We show that these events could be explained by direct production of top squarks (stops) in supersymmetry. In particular, a right-handed stop with a mass of 550 GeV decaying into 2 t quarks, 2 W bosons, and MET could fit the observed excess without being constrained by other direct search limits from Run 1. We propose kinematic cuts at 13 TeV to enhance the stop signal, and estimate that stops could be discovered with 40 inverse fb of integrated luminosity at Run 2 of the LHC, when considering only the statistical uncertainty.

  6. Genetics Home Reference: aromatase excess syndrome

    Science.gov (United States)

  7. Controlling police (excessive) force: The American case

    Directory of Open Access Journals (Sweden)

    Zakir Gül

    2013-09-01

    Full Text Available This article addresses the issue of police abuse of power, particularly police use of excessive force. Since the misuse of force by police is considered a problem, some entity must discover a way to control and prevent the illegal use of coercive power. Unlike most of the previous studies on the use of excessive force, this study uses a path analysis. However, not all the findings are consistent with the prior studies and hypotheses. In general, findings indicate that training may be a useful tool in terms of decreasing the use of excessive force, thereby reducing civilians’ injuries and citizens’ complaints. The results show that ethics training in the academy is significantly related to the use of excessive force. Further, it was found that community-oriented policing training in the academy was associated with citizens’ complaints. A national (secondary) data set, collected from law enforcement agencies in the United States, is used to explore the research questions.

  8. Romanian welfare state between excess and failure

    Directory of Open Access Journals (Sweden)

    Cristina Ciuraru-Andrica

    2012-12-01

    Full Text Available Timely or not, our topic can revive some prolific and sometimes diametrically opposed discussions. We focus on social assistance, where it is still uncertain whether, once excess has been unleashed, failure will inevitably follow or whether there is a “Salvation Ark”. However, the difference between the excess and the failure of the welfare state is almost intangible, the reason for its potential failure being in fact the abuses committed up to the start of the depression.

  9. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  10. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...... to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  11. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  12. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  13. Accurate thickness measurement of graphene.

    Science.gov (United States)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  14. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  15. Towards an accurate bioimpedance identification

    Science.gov (United States)

    Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.

    2013-04-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification framework, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis technique are evaluated through the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σ_nZ, and the stochastic nonlinear distortions, σ_ZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least square (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal the importance of which system identification frameworks should be used.
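
    The final fitting step mentioned, adjusting a Cole impedance model Z(w) = R_inf + (R_0 - R_inf) / (1 + (j*w*tau)^alpha) to a measured spectrum by complex nonlinear least squares, can be sketched as follows on synthetic data (unweighted fit only; the parameter values and noise level are invented):

        # Hedged sketch: unweighted complex nonlinear least-squares fit of a
        # Cole impedance model to a synthetic spectrum.
        import numpy as np
        from scipy.optimize import least_squares

        def cole(params, omega):
            r_inf, r0, tau, alpha = params
            return r_inf + (r0 - r_inf) / (1.0 + (1j * omega * tau) ** alpha)

        omega = 2 * np.pi * np.logspace(2, 6, 60)            # 100 Hz to 1 MHz
        z_true = cole([40.0, 120.0, 1e-5, 0.85], omega)
        rng = np.random.default_rng(1)
        z_meas = z_true + rng.normal(0, 0.2, omega.size) + 1j * rng.normal(0, 0.2, omega.size)

        def residuals(p):
            r = cole(p, omega) - z_meas
            return np.concatenate([r.real, r.imag])          # stack real and imaginary parts

        fit = least_squares(residuals, x0=[30.0, 100.0, 5e-6, 0.9],
                            x_scale=[10.0, 10.0, 1e-5, 0.1])
        print(fit.x)                                         # close to [40, 120, 1e-5, 0.85]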

  16. Linear ion trap imperfection and the compensation of excess micromotion

    Institute of Scientific and Technical Information of China (English)

    Xie Yi; Wan Wei; Zhou Fei; Chen Liang; Li Chao-Hong; Feng Mang

    2012-01-01

    Quantum computing requires ultracold ions in a ground vibrational state,which is achieved by sideband cooling.We report our recent efforts towards the Lamb-Dicke regime which is a prerequisite of sideband cooling.We first analyse the possible imperfection in our linear ion trap setup and then demonstrate how to suppress the imperfection by compensating the excess micromotion of the ions.The ions,after the micromotion compensation,are estimated to be very close to the Doppler-cooling limit.

  17. Phenomenology and psychopathology of excessive indoor tanning.

    Science.gov (United States)

    Petit, Aymeric; Karila, Laurent; Chalmin, Florence; Lejoyeux, Michel

    2014-06-01

    Excessive indoor tanning, defined by the presence of an impulse towards and repetition of tanning that leads to personal distress, has only recently been recognized as a psychiatric disorder. This finding is based on the observations of many dermatologists who report the presence of addictive relationships with tanning salons among their patients despite being given diagnoses of malignant melanoma. This article synthesizes the existing literature on excessive indoor tanning and addiction to investigate possible associations. This review focuses on the prevalence, clinical features, etiology, and treatment of this disorder. A literature review was conducted, using PubMed, Google Scholar, EMBASE and PsycINFO, to identify articles published in English from 1974 to 2013. Excessive indoor tanning may be related to addiction, obsessive-compulsive disorder, impulse control disorder, seasonal affective disorder, anorexia, body dysmorphic disorder, or depression. Excessive indoor tanning can be included in the spectrum of addictive behavior because it has clinical characteristics in common with those of classic addictive disorders. It is frequently associated with anxiety, eating disorders, and tobacco dependence. Further controlled studies are required, especially in clinical psychopathology and neurobiology, to improve our understanding of excessive indoor tanning.

  18. Antidepressant induced excessive yawning and indifference

    Directory of Open Access Journals (Sweden)

    Bruno Palazzo Nazar

    2015-03-01

    Full Text Available Introduction Antidepressant-induced excessive yawning has been described as a possible side effect of pharmacotherapy. A syndrome of indifference has also been described as another possible side effect. The frequency of these phenomena and their physiopathology are unknown. They are both considered benign and reversible after antidepressant discontinuation, but severe cases with complications such as temporomandibular lesions have been described. Methods We report two unprecedented cases in which excessive yawning and indifference occurred simultaneously as side effects of antidepressant therapy, discussing possible physiopathological mechanisms for this co-occurrence. Case 1: A male patient presented excessive yawning (approximately 80/day) and apathy after venlafaxine XR treatment. Symptoms reduced after a switch to escitalopram, with a reduction to 50 yawns/day. Case 2: A female patient presented excessive yawning (approximately 25/day) and inability to react to environmental stressors with desvenlafaxine. Conclusion Induction of indifference and excessive yawning may be modulated by serotonergic and noradrenergic mechanisms. One proposal to unify these side effects would be enhancement of serotonin in the midbrain, especially the paraventricular and raphe nuclei.

  19. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
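
    The explicit-integral approach contrasted with the FFT evaluates the Fourier integral directly at any trial frequency with the rectangle rule, which is what allows peak frequency and amplitude to be refined off the fixed FFT grid. A minimal sketch of that comparison (not the authors' code):

        # Hedged sketch: rectangle-rule Fourier amplitude at an arbitrary frequency,
        # compared with the amplitude of the nearest FFT bin.
        import numpy as np

        dt, n = 0.001, 4_000
        t = np.arange(n) * dt
        f_true = 12.34
        x = np.cos(2 * np.pi * f_true * t)

        def explicit_amplitude(signal, times, freq):
            """(2/T) * |sum of x(t) * exp(-2*pi*i*f*t) * dt|, rectangle rule."""
            integrand = signal * np.exp(-2j * np.pi * freq * times)
            return 2.0 * np.abs(integrand.sum() * dt) / (times[-1] + dt)

        freqs = np.fft.rfftfreq(n, dt)
        fft_amp = np.abs(np.fft.rfft(x)) * 2.0 / n
        k = np.argmax(fft_amp)
        print(f"FFT peak bin: {freqs[k]:.2f} Hz, amplitude {fft_amp[k]:.3f}")
        print(f"EI at f_true: amplitude {explicit_amplitude(x, t, f_true):.3f}")  # closer to 1.0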

  20. Same-Sign Dilepton Excesses and Vector-like Quarks

    CERN Document Server

    Chen, Chuan-Ren; Low, Ian

    2015-01-01

    Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b-prime quark is proposed. We also comment on the possibility to fit excesses in different analyses in a common framework.

  1. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  2. Singlet Scalar Resonances and the Diphoton Excess

    CERN Document Server

    McDermott, Samuel D; Ramani, Harikrishnan

    2015-01-01

    ATLAS and CMS recently released the first results of searches for diphoton resonances in 13 TeV data, revealing a modest excess at an invariant mass of approximately 750 GeV. We find that it is generically possible that a singlet scalar resonance is the origin of the excess while avoiding all other constraints. We highlight some of the implications of this model and how compatible it is with certain features of the experimental results. In particular, we find that the very large total width of the excess is difficult to explain with loop-level decays alone, pointing to other interesting bounds and signals if this feature of the data persists. Finally we comment on the robust Z-gamma signature that will always accompany the model we investigate.

  3. Minimal Dilaton Model and the Diphoton Excess

    CERN Document Server

    Agarwal, Bakul; Mohan, Kirtimaan A

    2016-01-01

    In light of the recent 750 GeV diphoton excesses reported by the ATLAS and CMS collaborations, we investigate the possibility of explaining this excess using the Minimal Dilaton Model. We find that this model is able to explain the observed excess with the presence of additional top partner(s), with the same charge as the top quark, but with mass in the TeV region. First, we constrain model parameters using, in addition to the 750 GeV diphoton signal strength, precision electroweak tests, single top production measurements, and Higgs signal strength data collected in the earlier runs of the LHC. In addition, we discuss interesting phenomenology that could arise in this model, relevant for future runs of the LHC.

  4. Photoreceptor damage following exposure to excess riboflavin.

    Science.gov (United States)

    Eckhert, C D; Hsu, M H; Pang, N

    1993-12-15

    Flavins generate oxidants during metabolism and when exposed to light. Here we report that the photoreceptor layer of retinas from black-eyed rats is reduced in size by a dietary regime containing excess riboflavin. The effect of excess riboflavin was dose-dependent and was manifested by a decrease in photoreceptor length. This decrease was due in part to a reduction in the thickness of the outer nuclear layer, a structure formed from stacked photoreceptor nuclei. These changes were accompanied by an increase in photoreceptor outer segment autofluorescence following illumination at 328 nm, a wavelength that corresponds to the excitation maxima of oxidized lipopigments of the retinal pigment epithelium.

  5. New Galaxies with UV Excess. VI

    Science.gov (United States)

    Kazarian, M. A.; Petrosian, G. V.

    2005-07-01

    A list is presented of 122 new galaxies with UV excess observed on plates obtained using the 40″ Schmidt telescope at the Byurakan Observatory with a 1°.5 objective prism. It is shown that the relative number of galaxies with a strong UV excess (classes 1 and 2) listed in Table 1 is roughly 55.7%. This is 6.7% higher than for the previously observed galaxies. These samples also differ in terms of the morphology of the spectra. The largest deviation, approximately 9.9%, occurs for type “sd.”

  6. ORIGIN OF EXCESS (176)Hf IN METEORITES

    DEFF Research Database (Denmark)

    Thrane, Kristine; Connelly, James Norman; Bizzarro, Martin;

    2010-01-01

    After considerable controversy regarding the (176)Lu decay constant (lambda(176)Lu), there is now widespread agreement that (1.867 +/- 0.008) x 10^-11 yr^-1, as confirmed by various terrestrial objects and a 4557 Myr meteorite, is correct. This leaves the (176)Hf excesses that are correlated...... with Lu/Hf elemental ratios in meteorites older than ~4.56 Ga unresolved. We attribute the (176)Hf excess in older meteorites to an accelerated decay of (176)Lu caused by excitation of the long-lived (176)Lu ground state to a short-lived (176m)Lu isomer. The energy needed to cause...

  7. The NANOGrav Nine-Year Data Set: Excess Noise in Millisecond Pulsar Arrival Times

    CERN Document Server

    Lam, M T; Chatterjee, S; Arzoumanian, Z; Crowter, K; Demorest, P B; Dolch, T; Ellis, J A; Ferdman, R D; Fonseca, E; Gonzalez, M E; Jones, G; Jones, M L; Levin, L; Madison, D R; McLaughlin, M A; Nice, D J; Pennucci, T T; Ransom, S M; Shannon, R M; Siemens, X; Stairs, I H; Stovall, K; Swiggum, J K; Zhu, W W

    2016-01-01

    Gravitational wave astronomy using a pulsar timing array requires high-quality millisecond pulsars, correctable interstellar propagation delays, and high-precision measurements of pulse times of arrival. Here we identify noise in timing residuals that exceeds that predicted for arrival time estimation for millisecond pulsars observed by the North American Nanohertz Observatory for Gravitational Waves. We characterize the excess noise using variance and structure function analyses. We find that 26 out of 37 pulsars show inconsistencies with a white-noise-only model based on the short timescale analysis of each pulsar and we demonstrate that the excess noise has a red power spectrum for 15 pulsars. We also decompose the excess noise into chromatic (radio-frequency-dependent) and achromatic components. Associating the achromatic red-noise component with spin noise and including additional power-spectrum-based estimates from the literature, we estimate a scaling law in terms of spin parameters (frequency and freq...

  8. [Conservative and surgical treatment of convergence excess].

    Science.gov (United States)

    Ehrt, O

    2016-07-01

    Convergence excess is a common finding, especially in pediatric strabismus. A detailed diagnostic approach has to start after full correction of any hyperopia measured in cycloplegia. It includes measurements of manifest and latent deviation at near and distance fixation, near deviation after relaxation of accommodation with addition of +3 dpt, assessment of binocular function with and without +3 dpt, as well as the accommodation range. This diagnostic approach is important for the classification into three types of convergence excess, which require different therapeutic approaches: 1) hypo-accommodative convergence excess is treated with permanent bifocal glasses, 2) norm-accommodative patients should be treated with bifocals, which can be weaned over years, especially in patients with good stereopsis, and 3) non-accommodative convergence excess and patients with large distance deviations need a surgical approach. The most effective operations include those which reduce the muscle torque, e.g., bimedial Faden operations or Y-splitting of the medial rectus muscles.

  9. Can Excess Bilirubin Levels Cause Learning Difficulties?

    Science.gov (United States)

    Pretorius, E.; Naude, H.; Becker, P. J.

    2002-01-01

    Examined learning problems in South African sample of 7- to 14-year-olds whose mothers reported excessively high infant bilirubin shortly after the child's birth. Found that this sample had lowered verbal ability with the majority also showing impaired short-term and long-term memory. Findings suggested that impaired formation of astrocytes…

  10. Surface temperature excess in heterogeneous catalysis

    NARCIS (Netherlands)

    Zhu, L.

    2005-01-01

    In this dissertation we study the surface temperature excess in heterogeneous catalysis. For heterogeneous reactions, such as gas-solid catalytic reactions, the reactions take place at the interfaces between the two phases: the gas and the solid catalyst. Large amounts of reaction heat are released

  11. Excessive Positivism in Person-Centered Planning

    Science.gov (United States)

    Holburn, Steve; Cea, Christine D.

    2007-01-01

    This paper illustrates the positivistic nature of person-centered planning (PCP) that is evident in the planning methods employed, the way that individuals with disabilities are described, and in the portrayal of the outcomes of PCP. However, a confluence of factors can lead to a manifestation of excessive positivism that does not serve PCP…

  12. 30 CFR 57.6902 - Excessive temperatures.

    Science.gov (United States)

    2010-07-01

    ... Requirements-Surface and Underground § 57.6902 Excessive temperatures. (a) Where heat could cause premature detonation, explosive material shall not be loaded into hot areas, such as kilns or sprung holes. (b) When... to address the specific conditions at the mine to prevent premature detonation....

  13. Excessive infant crying : definitions determine risk groups

    NARCIS (Netherlands)

    Reijneveld, SA; Brugman, E; Hirasing, RA

    2002-01-01

    We assessed risk groups for excessive infant crying using 10 published definitions, in 3179 children aged 1-6 months (response: 96.5%). Risk groups regarding parental employment, living area, lifestyle, and obstetric history varied by definition. This may explain the existence of conflicting evidence

  14. Low excess air operations of oil boilers

    Energy Technology Data Exchange (ETDEWEB)

    Butcher, T.A.; Celebi, Y.; Litzke, Wai Lin [Brookhaven National Labs., Upton, NY (United States)

    1997-09-01

    To quantify the benefits that operation at very low excess air may have on heat exchanger fouling, BNL has recently started a test project. The test allows simultaneous measurement of fouling rate, flue gas filterable soot, flue gas sulfuric acid content, and flue gas sulfur dioxide.

  15. Excessive nitrogen and phosphorus in European rivers

    NARCIS (Netherlands)

    Blaas, Harry; Kroeze, Carolien

    2016-01-01

    Rivers export nutrients to coastal waters. Excess nutrient export may result in harmful algal blooms and hypoxia, affecting biodiversity, fisheries, and recreation. The purpose of this study is to quantify for European rivers (1) the extent to which N and P loads exceed levels that minimize the r

  16. 38 CFR 4.46 - Accurate measurement.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  17. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Mean Infiltration-Excess Overland Flow, 2002

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean value for infiltration-excess overland flow as estimated by the watershed model TOPMODEL, compiled for every catchment of...

  18. Erasing errors due to alignment ambiguity when estimating positive selection.

    Science.gov (United States)

    Redelings, Benjamin

    2014-08-01

    Current estimates of diversifying positive selection rely on first having an accurate multiple sequence alignment. Simulation studies have shown that under biologically plausible conditions, relying on a single estimate of the alignment from commonly used alignment software can lead to unacceptably high false-positive rates in detecting diversifying positive selection. We present a novel statistical method that eliminates excess false positives resulting from alignment error by jointly estimating the degree of positive selection and the alignment under an evolutionary model. Our model treats both substitutions and insertions/deletions as sequence changes on a tree and allows site heterogeneity in the substitution process. We conduct inference starting from unaligned sequence data by integrating over all alignments. This approach naturally accounts for ambiguous alignments without requiring ambiguously aligned sites to be identified and removed prior to analysis. We take a Bayesian approach and conduct inference using Markov chain Monte Carlo to integrate over all alignments on a fixed evolutionary tree topology. We introduce a Bayesian version of the branch-site test and assess the evidence for positive selection using Bayes factors. We compare two models of differing dimensionality using a simple alternative to reversible-jump methods. We also describe a more accurate method of estimating the Bayes factor using Rao-Blackwellization. We then show using simulated data that jointly estimating the alignment and the presence of positive selection solves the problem with excessive false positives from erroneous alignments and has nearly the same power to detect positive selection as when the true alignment is known. We also show that samples taken from the posterior alignment distribution using the software BAli-Phy have substantially lower alignment error compared with MUSCLE, MAFFT, PRANK, and FSA alignments.
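
    Since the abstract assesses the evidence for positive selection with Bayes factors, a brief reminder of that quantity may help (generic definition, not this paper's specific construction): the Bayes factor compares how well the positive-selection model $M_1$ and the null model $M_0$ predict the unaligned sequence data $D$, with alignments and other nuisance parameters integrated out,

    $$
    BF_{10} \;=\; \frac{P(D \mid M_1)}{P(D \mid M_0)}
            \;=\; \frac{\int P(D \mid \theta_1, M_1)\,\pi(\theta_1)\,d\theta_1}
                       {\int P(D \mid \theta_0, M_0)\,\pi(\theta_0)\,d\theta_0},
    $$

    so values well above 1 favour the model that allows positive selection.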

  19. Does a pneumotach accurately characterize voice function?

    Science.gov (United States)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. Acknowledge support of NIH Grant 2R01DC005642-10A1.

  20. Fast and Provably Accurate Bilateral Filtering.

    Science.gov (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
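
    As context for the speed-up claimed above, the following is a minimal sketch (in Python/NumPy, not the authors' code) of the direct bilateral filter with Gaussian spatial and range kernels, i.e. the O(S)-per-pixel computation that the paper's O(1) algorithm approximates; parameter names and defaults are illustrative assumptions.

```python
# Hedged sketch: the O(S)-per-pixel *direct* bilateral filter that fast
# approximations aim to replace. Parameter defaults are illustrative.
import numpy as np

def bilateral_filter_direct(img, sigma_s=3.0, sigma_r=0.1, radius=None):
    """Direct bilateral filter with Gaussian spatial and range kernels.

    img     : 2D float array (e.g., intensities in [0, 1])
    sigma_s : spatial (domain) standard deviation, in pixels
    sigma_r : range standard deviation, in intensity units
    radius  : half-width of the spatial support S (default ~3*sigma_s)
    """
    if radius is None:
        radius = int(3 * sigma_s)
    padded = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian weights over the (2*radius+1)^2 support.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial_w = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Gaussian range kernel on intensity differences w.r.t. the center pixel.
            range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            w = spatial_w * range_w
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

if __name__ == "__main__":
    # Example: edge-preserving smoothing of a small noisy step edge.
    rng = np.random.default_rng(0)
    step = np.tile(np.concatenate([np.zeros(16), np.ones(16)]), (32, 1))
    noisy = step + 0.05 * rng.standard_normal(step.shape)
    smoothed = bilateral_filter_direct(noisy, sigma_s=2.0, sigma_r=0.2)
    print(smoothed.shape)
```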

  1. Quark Seesaw Vectorlike Fermions and Diphoton Excess

    CERN Document Server

    Dev, P S Bhupal; Zhang, Yongchao

    2015-01-01

    We present a possible interpretation of the recent diphoton excess reported by the $\sqrt{s}=13$ TeV LHC data in quark seesaw left-right models with vectorlike fermions proposed to solve the strong $CP$ problem without the axion. The gauge singlet real scalar field responsible for the mass of the vectorlike fermions has the right production cross section and diphoton branching ratio to be identifiable with the reported excess at around 750 GeV diphoton invariant mass. Various ways to test this hypothesis as more data accumulates at the LHC are proposed. In particular, we find that for our interpretation to work, there is an upper limit on the right-handed scale $v_R$, which depends on the Yukawa coupling of the singlet Higgs field to the vectorlike fermions.

  2. Relationship Between Thermal Tides and Radius Excess

    CERN Document Server

    Socrates, Aristotle

    2013-01-01

    Close-in extrasolar gas giants -- the hot Jupiters -- display departures in radius above the zero-temperature solution, the radius excess, that are anomalously high. The radius excess of hot Jupiters follows a relatively close relation with thermal tidal torques that holds over ~ 4-5 orders of magnitude in a characteristic thermal tidal power, in a way that is consistent with basic theoretical expectations. The relation suggests that thermal tidal torques determine the global thermodynamic and spin state of the hot Jupiters. On empirical grounds, it is shown that theories of hot Jupiter inflation that invoke a constant fraction of the stellar flux to be deposited at great depth are, essentially, falsified.

  3. Excess mortality in giant cell arteritis

    DEFF Research Database (Denmark)

    Bisgård, C; Sloth, H; Keiding, Niels

    1991-01-01

    A 13-year departmental sample of 34 patients with definite (biopsy-verified) giant cell arteritis (GCA) was reviewed. The mortality of this material was compared to sex-, age- and time-specific death rates in the Danish population. The standardized mortality ratio (SMR) was 1.8 (95% confidence...... with respect to SMR, sex distribution or age. In the group of patients with department-diagnosed GCA (definite + probable = 180 patients), the 95% confidence interval for the SMR of the women included 1.0. In all other subgroups there was a significant excess mortality. Excess mortality has been found in two...... of seven previous studies on survival in GCA. The prevailing opinion that steroid-treated GCA does not affect the life expectancy of patients is probably not correct....

  4. ILLUSION OF EXCESSIVE CONSUMPTION AND ITS EFFECTS

    Directory of Open Access Journals (Sweden)

    MUNGIU-PUPĂZAN MARIANA CLAUDIA

    2015-12-01

    Full Text Available The aim is to explore, explain and describe this phenomenon for a better understanding of it, as well as the relationship between advertising and the members of the consumer society. This paper presents an analysis of excessive and unsustainable consumption, the evolution of the phenomenon, and the search for a way to combat it. Unfortunately, studies show that this tendency to accumulate more than we need and to consume to excess is one that almost all civilizations have refined and placed dogmatically among the values that children learn early in life. This has been perpetuated since the time when goods or products were not obtained as easily as today. Anti-consumerism has emerged in response to this economic system, though not over the long term. Over the last two decades we have been witnessing the establishment of a new phase of consumer capitalism: the hyperconsumption society.

  5. Armodafinil in the treatment of excessive sleepiness

    OpenAIRE

    Rosenberg, Russell

    2010-01-01

    Russell Rosenberg (1), Richard Bogan (2); (1) NeuroTrials Research, Atlanta, Georgia, USA; (2) SleepMed of South Carolina, Columbia, South Carolina, USA. Abstract: Excessive sleepiness (ES) is a widespread condition, commonly the result of a sleep/wake disorder such as obstructive sleep apnea (OSA), shift-work disorder (SWD), or narcolepsy. ES poses significant health and safety concerns in patients. Numerous interventions are available to treat the underlying causes of ES and ES itself, including behavioral...

  6. Armodafinil in the treatment of excessive sleepiness

    OpenAIRE

    Rosenberg, Russell; Bogan, Richard

    2010-01-01

    Excessive sleepiness (ES) is a widespread condition, commonly the result of a sleep/ wake disorder such as obstructive sleep apnea (OSA), shift-work disorder (SWD), or narcolepsy. ES poses significant health and safety concerns in patients. Numerous interventions are available to treat the underlying causes of ES and ES itself, including behavioral measures, mechanical devices, and pharmacologic agents. This review explores the evidence supporting the use of armodafinil to treat ES associated...

  7. Contrast induced hyperthyroidism due to iodine excess

    OpenAIRE

    Mushtaq, Usman; Price, Timothy; Laddipeerla, Narsing; Townsend, Amanda; Broadbridge, Vy

    2009-01-01

    Iodine induced hyperthyroidism is a thyrotoxic condition caused by exposure to excessive iodine. Historically this type of hyperthyroidism has been described in areas of iodine deficiency. With advances in medicine, iodine induced hyperthyroidism has been observed following the use of drugs containing iodine—for example, amiodarone, and contrast agents used in radiological imaging. In elderly patients it is frequently difficult to diagnose and control contrast related hyperthyroidism, as most...

  8. Identifying excessive credit growth and leverage

    OpenAIRE

    Alessi, Lucia; Detken, Carsten

    2014-01-01

    This paper aims at providing policymakers with a set of early warning indicators helpful in guiding decisions on when to activate macroprudential tools targeting excessive credit growth and leverage. To robustly select the key indicators we apply the “Random Forest” method, which bootstraps and aggregates a multitude of decision trees. On these identified key indicators we grow a binary classification tree which derives the associated optimal early warning thresholds. By using credit to GDP g...

  9. Excessive Profits of German Defense Contractors

    Science.gov (United States)

    2014-09-01

    revealed similar patterns in both countries. The statistical evidence for excessive profitability is stronger for the measurements return on assets ...types of tangible and intangible things that have an economic value, and there are current assets that can be transformed into cash within a short period...The Rate of Return on Assets (ROA) measures a firm’s performance in using assets to generate net income” (Stickney et al., 2010, p. 245). In

  10. Earnings Quality Measures and Excess Returns.

    Science.gov (United States)

    Perotti, Pietro; Wagenhofer, Alfred

    2014-06-01

    This paper examines how commonly used earnings quality measures fulfill a key objective of financial reporting, i.e., improving decision usefulness for investors. We propose a stock-price-based measure for assessing the quality of earnings quality measures. We predict that firms with higher earnings quality will be less mispriced than other firms. Mispricing is measured by the difference of the mean absolute excess returns of portfolios formed on high and low values of a measure. We examine persistence, predictability, two measures of smoothness, abnormal accruals, accruals quality, earnings response coefficient and value relevance. For a large sample of US non-financial firms over the period 1988-2007, we show that all measures except for smoothness are negatively associated with absolute excess returns, suggesting that smoothness is generally a favorable attribute of earnings. Accruals measures generate the largest spread in absolute excess returns, followed by smoothness and market-based measures. These results lend support to the widespread use of accruals measures as overall measures of earnings quality in the literature.
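
    A hedged sketch of the mispricing measure described above: the spread in mean absolute excess returns between portfolios formed on low and high values of an earnings-quality measure. Column names and the quantile cut-off are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: mispricing spread = mean |excess return| of the low-quality
# portfolio minus that of the high-quality portfolio. Column names assumed.
import pandas as pd

def mispricing_spread(df, quality_col, excess_return_col, quantile=0.2):
    """df: one row per firm-year with an earnings-quality measure and the
    subsequent excess (abnormal) return."""
    lo_cut = df[quality_col].quantile(quantile)
    hi_cut = df[quality_col].quantile(1 - quantile)
    low_q = df[df[quality_col] <= lo_cut]
    high_q = df[df[quality_col] >= hi_cut]
    # Higher earnings quality is predicted to mean less mispricing,
    # i.e. smaller mean absolute excess returns, so the spread should be positive.
    return (low_q[excess_return_col].abs().mean()
            - high_q[excess_return_col].abs().mean())
```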

  11. The entropy excess and moment of inertia excess ratio with inclusion of statistical pairing fluctuations

    Science.gov (United States)

    Razavi, R.; Dehghani, V.

    2014-03-01

    The entropy excess of 163Dy compared to 162Dy as a function of nuclear temperature has been investigated using the mean value Bardeen-Cooper-Schrieffer (BCS) method, based on application of the isothermal probability distribution function to take into account the statistical fluctuations. Then, the spin cut-off excess ratio (moment of inertia excess ratio) introduced by Razavi [Phys. Rev. C88 (2013) 014316] for the proton and neutron systems has been obtained and is compared with the corresponding data from the BCS model. The results show that the overall agreement between the BCS model and the mean value BCS method is satisfactory, and that the mean value BCS model reduces fluctuations and washes out singularities. However, the expected constant value in the entropy excess is not reproduced by the mean value BCS method.

  12. Convergence Insufficiency/Divergence Insufficiency Convergence Excess/Divergence Excess: Some Facts and Fictions

    OpenAIRE

    Edward Khawam; Bachir Abiad; Alaa Boughannam; Joanna Saade; Ramzi Alameddine

    2015-01-01

    Great discrepancies are often encountered between the distance fixation and the near-fixation esodeviations and exodeviations. They are all attributed to either anomalies of the AC/A ratio or anomalies of the fusional convergence or divergence amplitudes. We report a case with pseudoconvergence insufficiency and another one with pseudoaccommodative convergence excess. In both cases, conv./div. excess and insufficiency were erroneously attributed to anomalies of the AC/A ratio or to anomalies ...

  13. Partitioning of excess mortality in population-based cancer patient survival studies using flexible parametric survival models

    Directory of Open Access Journals (Sweden)

    Eloranta Sandra

    2012-06-01

    Full Text Available Abstract Background Relative survival is commonly used for studying survival of cancer patients as it captures both the direct and indirect contribution of a cancer diagnosis on mortality by comparing the observed survival of the patients to the expected survival in a comparable cancer-free population. However, existing methods do not allow estimation of the impact of isolated conditions (e.g., excess cardiovascular mortality) on the total excess mortality. For this purpose we extend flexible parametric survival models for relative survival, which use restricted cubic splines for the baseline cumulative excess hazard and for any time-dependent effects. Methods In the extended model we partition the excess mortality associated with a diagnosis of cancer by estimating a separate baseline excess hazard function for the outcomes under investigation. This is done by incorporating mutually exclusive background mortality rates, stratified by the underlying causes of death reported in the Swedish population, and by introducing cause of death as a time-dependent effect in the extended model. This approach thereby enables modeling of temporal trends in, e.g., excess cardiovascular mortality and remaining cancer excess mortality simultaneously. Furthermore, we illustrate how the results from the proposed model can be used to derive crude probabilities of death due to the component parts, i.e., probabilities estimated in the presence of competing causes of death. Results The method is illustrated with examples where the total excess mortality experienced by patients diagnosed with breast cancer is partitioned into excess cardiovascular mortality and remaining cancer excess mortality. Conclusions The proposed method can be used to simultaneously study disease patterns and temporal trends for various causes of cancer-consequent deaths. Such information should be of interest for patients and clinicians as one way of improving prognosis after cancer is
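
    As a sketch of the partitioning idea (notation assumed here, not taken from the paper), the relative-survival framework writes the observed hazard as the expected population hazard plus an excess hazard, and the extension splits the excess hazard by cause:

    $$
    h(t \mid \mathbf{x}) \;=\; h^{*}(t \mid \mathbf{x}) + \lambda(t \mid \mathbf{x}),
    \qquad
    \lambda(t \mid \mathbf{x}) \;=\; \lambda_{\mathrm{CVD}}(t \mid \mathbf{x}) + \lambda_{\mathrm{remaining\;cancer}}(t \mid \mathbf{x}),
    $$

    where $h^{*}$ is the expected (background) mortality rate and each component excess hazard is given its own restricted-cubic-spline baseline.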

  14. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Recently, strain and strain rate imaging have proved their superiority with respect to classical motion estimation methods in myocardial evaluation as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique which is more rapid and accurate than the previous correlation-based methods. The new method presumes a spatiotemporal constancy of intensity and magnitude of the image. Moreover, the method makes use of the spline moment in a multiresolution approach. The cardiac central point is obtained using a combination of center of mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences and shows that this technique is more accurate and faster than the previous methods.

  15. Excess mid-IR emission in Cataclysmic Variables

    CERN Document Server

    Dubus, G; Kern, B; Taam, R E; Spruit, H C

    2004-01-01

    We present a search for excess mid-IR emission due to circumbinary material in the orbital plane of cataclysmic variables (CVs). Our motivation stems from the fact that the strong braking exerted by a circumbinary (CB) disc on the binary system could explain several puzzles in our current understanding of CV evolution. Since theoretical estimates predict that the emission from a CB disc can dominate the spectral energy distribution (SED) of the system at wavelengths > 5 microns, we obtained simultaneous visible to mid-IR SEDs for eight systems. We report detections of SS Cyg at 11.7 microns and AE Aqr at 17.6 microns, both in excess of the contribution from the secondary star. In AE Aqr, the IR likely originates from synchrotron-emitting clouds propelled by the white dwarf. In SS Cyg, we argue that the observed mid-IR variability is difficult to reconcile with simple models of CB discs and we consider free-free emission from a wind. In the other systems, our mid-IR upper limits place strong constraints on the...

  16. Excess mortality among the unmarried: a case study of Japan.

    Science.gov (United States)

    Goldman, N; Hu, Y

    1993-02-01

    Recent research has demonstrated that mortality patterns by marital status in Japan are different from corresponding patterns in other industrialized countries. Most notably, the magnitude of the excess mortality experienced by single Japanese has been staggering. For example, estimates of life expectancy for the mid-1900s indicate that single Japanese men and women had life expectancies between 15 and 20 years lower than their married counterparts. In addition, gender differences among single Japanese have been smaller than elsewhere, while those among divorced persons have been unanticipatedly large; and, the excess mortality of the Japanese single population has been decreasing over the past few decades in contrast to generally increasing differentials elsewhere. In this paper, we use a variety of data sources to explore several explanations for these unique mortality patterns in Japan. Undeniably, the traditional Japanese system of arranged marriages makes the process of selecting a spouse a significant factor. Evidence from anthropological studies and attitudinal surveys indicates that marriage is likely to have been and probably continues to be more selective with regard to underlying health characteristics in Japan than in other industrialized countries. However, causal explanations related to the importance of marriage and the family in Japanese society may also be responsible for the relatively high mortality experienced by singles and by divorced men.

  17. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    Science.gov (United States)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  18. CAIXA: a catalogue of AGN in the XMM-Newton archive III. Excess Variance Analysis

    CERN Document Server

    Ponti, Gabriele; Bianchi, Stefano; Guainazzi, Matteo; Matt, Giorgio; Uttley, Phil; Fonseca Bonilla, Nuria

    2011-01-01

    We report on the results of the first XMM systematic "excess variance" study of all the radio quiet, X-ray un-obscured AGN. The entire sample consists of 161 sources observed by XMM for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray variability on time scales of less than a day. We compute the excess variance for all AGN, on different time-scales (10, 20, 40 and 80 ks) and in different energy bands (0.3-0.7, 0.7-2 and 2-10 keV). We observe a highly significant and tight (~0.7 dex) correlation between excess variance and MBH. The subsample of reverberation mapped AGN shows an even smaller scatter (~0.45 dex), comparable to the one induced by the MBH uncertainties. This implies that X-ray variability can be used as an accurate tool to measure MBH, and this method is more accurate than the ones based on single epoch optical spectra. The excess variance vs. accretion rate dependence is weaker than expected based on the PSD break frequency scaling, suggesting that both...
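
    For readers unfamiliar with the statistic, below is a minimal sketch of the normalized excess variance commonly used in such AGN variability studies (standard definition; the exact estimator and error treatment in the paper may differ).

```python
# Hedged sketch: normalized excess variance of a light curve, i.e. the variance
# in excess of the measurement-error contribution, normalized by the squared mean.
import numpy as np

def normalized_excess_variance(rates, errors):
    """rates, errors: 1D arrays of count rates and their 1-sigma errors."""
    rates = np.asarray(rates, dtype=float)
    errors = np.asarray(errors, dtype=float)
    mean = rates.mean()
    n = rates.size
    s2 = np.sum((rates - mean) ** 2) / (n - 1)   # sample variance of the light curve
    mse = np.mean(errors ** 2)                   # mean square measurement error
    return (s2 - mse) / mean**2                  # can be negative if noise dominates
```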

  19. Changing guards: time to move beyond Body Mass Index for population monitoring of excess adiposity

    OpenAIRE

    Tanamas, Stephanie K.; Lean, Michael E. J.; Combet, Emilie; Vlassopoulos, Antonios; Zimmet, Paul Z.; Peeters, Anna

    2016-01-01

    With the obesity epidemic, and the effects of aging populations, human phenotypes have changed over two generations, possibly more dramatically than in other species previously. As obesity is an important and growing hazard for population health, we recommend a systematic evaluation of the optimal measure(s) for population-level excess body fat. Ideal measure(s) for monitoring body composition and obesity should be simple, as accurate and sensitive as possible, and provide good categorisation...

  20. Optimizing cell arrays for accurate functional genomics

    Directory of Open Access Journals (Sweden)

    Fengler Sven

    2012-07-01

    Full Text Available Abstract Background Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them, it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding rather than migration among neighboring spots was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination is dependent on specific cell properties. Conclusions Previously published methodological works have focused on achieving a high transfection rate in densely packed CA. Here, we focused on an equally important parameter: the interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.

  1. A fast and accurate FPGA based QRS detection system.

    Science.gov (United States)

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has an accuracy in excess of 96% in detecting the beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work and uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy as compared to our previous design, and takes almost half the analysis time in comparison to the software-based approach.
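
    A minimal sketch of the thresholding rule described above, i.e. deriving the next detection threshold from the median of the eight most recently detected peak amplitudes; the scaling factor and class interface are assumptions for illustration, not the paper's hardware design.

```python
# Hedged sketch: adaptive QRS threshold derived from the median of the last
# eight detected peak amplitudes. The scale factor is an illustrative assumption.
from collections import deque
from statistics import median

class AdaptiveQrsThreshold:
    def __init__(self, initial_threshold, scale=0.7, history=8):
        self.threshold = initial_threshold
        self.scale = scale                      # assumed fraction of the median
        self.peaks = deque(maxlen=history)      # last 8 detected peak amplitudes

    def register_peak(self, amplitude):
        # Record a detected peak and update the threshold for the next cycle.
        self.peaks.append(amplitude)
        self.threshold = self.scale * median(self.peaks)

    def is_peak(self, amplitude):
        return amplitude > self.threshold
```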

  2. The Excess Radio Background and Fast Radio Transients

    CERN Document Server

    Kehayias, John; Weiler, Thomas J

    2015-01-01

    In the last few years ARCADE 2, combined with older experiments, has detected an additional radio background, measured as a temperature and ranging in frequency from 22 MHz to 10 GHz, not accounted for by known radio sources and the cosmic microwave background. One type of source which has not been considered in the radio background is that of fast transients (those with event times much less than the observing time). We present a simple estimate, and a more detailed calculation, for the contribution of radio transients to the diffuse background. As a timely example, we estimate the contribution from the recently-discovered fast radio bursts (FRBs). Although their contribution is likely 6 or 7 orders of magnitude too small (though there are large uncertainties in FRB parameters) to account for the ARCADE 2 excess, our development is general and so can be applied to any fast transient sources, discovered or yet to be discovered. We estimate parameter values necessary for transient sources to noticeably contrib...
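
    A back-of-the-envelope sketch of this kind of estimate (not the paper's calculation): the time-averaged sky brightness contributed by fast transients with an all-sky event rate R and per-event fluence F is roughly I_nu ≈ R F / 4π, which can then be converted to a Rayleigh-Jeans brightness temperature. All numerical values below are rough, illustrative assumptions.

```python
# Hedged order-of-magnitude sketch: diffuse brightness temperature contributed
# by a population of fast transients. All parameter values are illustrative.
import math

JY = 1e-26            # W m^-2 Hz^-1 (one jansky)
C = 2.998e8           # m s^-1
K_B = 1.381e-23       # J K^-1

def transient_background_temperature(rate_per_day, fluence_jy_s, freq_hz):
    """Rayleigh-Jeans temperature of the time-averaged intensity R*F/(4*pi)."""
    rate_per_s = rate_per_day / 86400.0
    intensity = rate_per_s * fluence_jy_s * JY / (4.0 * math.pi)   # W m^-2 Hz^-1 sr^-1
    return intensity * C**2 / (2.0 * K_B * freq_hz**2)             # kelvin

if __name__ == "__main__":
    # Assumed FRB-like numbers: a few thousand events per sky per day,
    # each with a fluence of a few Jy ms, observed near 1.4 GHz.
    t_contrib = transient_background_temperature(3000.0, 5e-3, 1.4e9)
    print(f"~{t_contrib:.1e} K contribution at 1.4 GHz")
```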

  3. Excessive Internet use: implications for sexual behavior

    OpenAIRE

    Griffiths, M

    2000-01-01

    The Internet appears to have become an ever-increasing part in many areas of people’s day-to- day lives. One area that deserves further examination surrounds sexual behavior and excessive Internet usage. It has been alleged by some academics that social pathologies are beginning to surface in cyberspace and have been referred to as “technological addictions.” Such research may have implications and insights into sexuality and sexual behavior. Therefore, this article examines the concept of “I...

  4. Excessive prices as abuse of dominance?

    DEFF Research Database (Denmark)

    la Cour, Lisbeth; Møllgaard, Peter

    2007-01-01

    In previous research, we found that the sole Danish producer of cement holds a dominant position in the Danish market for (grey) cement. We are able to identify an inelastic long-run demand relation that would seem to permit the exercise of market power. We aim to establish whether the dominant...... firm abused its position by charging excessive prices. We also test whether tightening of the Danish competition act has altered the pricing behaviour on the market. We discuss our results in the light of a Danish competition case against the dominant cement producer that was abandoned by the authority...

  5. Simple laboratory determination of excess oligosacchariduria.

    Science.gov (United States)

    Sewell, A C

    1981-02-01

    I describe a simple set of procedures for the screening of patients' urine to detect oligosaccharide-storage diseases. Urines from patients with mucolipidosis I, mannosidosis, fucosidosis, aspartylglycosaminuria, and type VI glycogen-storage disease can be distinguished by thin-layer chromatography. Patients with beta-galactosidase deficiency can be detected by use of a combination of ion-exchange and thin-layer chromatography. Excess sialyloligosaccharide excretion is detected by using gel filtration and a quantitative assay for neuraminic acid. The advantages of the system are detection of virtually all known disorders in which oligosaccharides are over-excreted, production of characteristic patterns, and small sample requirement.

  6. Propylene Glycol Poisoning From Excess Whiskey Ingestion

    Science.gov (United States)

    Ku, Kevin; Sue, Gloria R.

    2015-01-01

    In this report, we describe a case of high anion gap metabolic acidosis with a significant osmolal gap attributed to the ingestion of liquor containing propylene glycol. Recently, several reports have characterized severe lactic acidosis occurring in the setting of iatrogenic unintentional overdosing of medications that use propylene glycol as a diluent, including lorazepam and diazepam. To date, no studies have explored potential effects of excess propylene glycol in the setting of alcohol intoxication. Our patient endorsed drinking large volumes of cinnamon flavored whiskey, which was likely Fireball Cinnamon Whisky. To our knowledge, this is the first case of propylene glycol toxicity from an intentional ingestion of liquor containing propylene glycol. PMID:26904700

  7. [Excessive spending by misuse of clinical laboratory].

    Science.gov (United States)

    Benítez-Arvizu, Gamaliel; Novelo-Garza, Bárbara; Mendoza-Valdez, Antonia Lorena; Galván-Cervantes, Jorge; Morales-Rojas, Alejandro

    2016-01-01

    Seventy-five percent or more of a diagnosis comes from a proper medical history along with an excellent physical examination. This leaves the clinical laboratory with the function of supporting the findings, determining prognosis, classifying diseases, monitoring diseases and, in a minority of cases, establishing the diagnosis. In recent years there has been a global phenomenon in which the allocation of resources to health care has grown excessively; the Instituto Mexicano del Seguro Social is not an exception, with an increase of 29 % from 2009 to 2011; therefore, it is necessary to pursue containment and reduction without compromising the quality of patient care.

  8. Subcorneal hematomas in excessive video game play.

    Science.gov (United States)

    Lennox, Maria; Rizzo, Jason; Lennox, Luke; Rothman, Ilene

    2016-01-01

    We report a case of subcorneal hematomas caused by excessive video game play in a 19-year-old man. The hematomas occurred in a setting of thrombocytopenia secondary to induction chemotherapy for acute myeloid leukemia. It was concluded that thrombocytopenia subsequent to prior friction from heavy use of a video game controller allowed for traumatic subcorneal hemorrhage of the hands. Using our case as a springboard, we summarize other reports with video game associated pathologies in the medical literature. Overall, cognizance of the popularity of video games and related pathologies can be an asset for dermatologists who evaluate pediatric patients.

  9. Real-time total system error estimation:Modeling and application in required navigation performance

    Institute of Scientific and Technical Information of China (English)

    Fu Li; Zhang Jun; Li Rui

    2014-01-01

    In required navigation performance (RNP), total system error (TSE) is estimated to provide a timely warning in the presence of an excessive error. In this paper, by analyzing the underlying formation mechanism, the TSE estimation is modeled as the estimation fusion of a fixed bias and a Gaussian random variable. To address the challenge of high computational load induced by the accurate numerical method, two efficient methods are proposed for real-time application, which are called the circle tangent ellipse method (CTEM) and the line tangent ellipse method (LTEM), respectively. Compared with the accurate numerical method and the traditional scalar quantity summation method (SQSM), the computational load and accuracy of these four methods are extensively analyzed. The theoretical and experimental results both show that the computing time of the LTEM is approximately equal to that of the SQSM, while it is only about 1/30 and 1/6 of that of the numerical method and the CTEM. Moreover, the estimation result of the LTEM is parallel with that of the numerical method, but is more accurate than those of the SQSM and the CTEM. It is illustrated that the LTEM is quite appropriate for real-time TSE estimation in RNP application.
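
    As a rough illustration of the fusion idea described above (not the CTEM/LTEM construction from the paper), the sketch below compares a Monte Carlo estimate of the 95th-percentile total error, for a fixed bias plus a zero-mean Gaussian component, with a simple scalar-summation style bound; the bound form |bias| + 2σ and all numbers are assumptions for illustration.

```python
# Hedged sketch: fixed bias + Gaussian error, Monte Carlo 95th percentile vs.
# a simple scalar-summation style bound. Illustrative only.
import numpy as np

def tse_percentile(bias, sigma, q=95.0, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    total = bias + sigma * rng.standard_normal(n)   # fixed bias + Gaussian part
    return np.percentile(np.abs(total), q)

if __name__ == "__main__":
    bias, sigma = 0.3, 1.0                          # illustrative units (e.g., NM)
    mc_bound = tse_percentile(bias, sigma)
    scalar_bound = abs(bias) + 2.0 * sigma          # assumed scalar-summation style bound
    print(f"Monte Carlo 95th percentile: {mc_bound:.3f}")
    print(f"Scalar summation bound:      {scalar_bound:.3f}")
```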

  10. Connecting the LHC diphoton excess to the Galactic center gamma-ray excess

    CERN Document Server

    Huang, Xian-Jun; Zhou, Yu-Feng

    2016-01-01

    The recent LHC Run-2 data have shown a possible excess in diphoton events, suggesting the existence of a new resonance $\phi$ with mass $M \sim 750$ GeV. If $\phi$ plays the role of a portal particle connecting the Standard Model and the invisible dark sector, the diphoton excess should be correlated with another photon excess, namely, the excess in the diffuse gamma rays towards the Galactic center, which can be interpreted by the annihilation of dark matter (DM). We investigate the necessary conditions for a consistent explanation of the two photon excesses, especially the requirement on the width-to-mass ratio $\Gamma/M$ and the $\phi$ decay channels, in a collection of DM models where the DM particle can be scalar, fermionic or vector, and $\phi$ can be generated through $s$-channel $gg$ fusion or $q\bar q$ annihilation. We show that the minimally required $\Gamma/M$ is determined by a single parameter proportional to $(m_{\chi}/M)^{n}$, where the integer $n$ depends on the nature of the DM particle. We fi...

  11. Z-peaked excess in goldstini scenarios

    CERN Document Server

    Liew, Seng Pei; Mawatari, Kentarou; Sakurai, Kazuki; Vereecken, Matthias

    2015-01-01

    We study a possible explanation of a 3.0 $\sigma$ excess recently reported by the ATLAS Collaboration in events with a Z-peaked same-flavour opposite-sign lepton pair, jets and large missing transverse momentum in the context of gauge-mediated SUSY breaking with more than one hidden sector, the so-called goldstini scenario. In a certain parameter space, the gluino two-body decay chain $\tilde g \to g \tilde\chi^0_{1,2} \to g Z \tilde G'$ becomes dominant, where $\tilde\chi^0_{1,2}$ and $\tilde G'$ are the Higgsino-like neutralino and the massive pseudo-goldstino, respectively, and gluino pair production can contribute to the signal. We find that a mass spectrum such as $m_{\tilde g} \sim 900$ GeV, $m_{\tilde\chi^0_{1,2}} \sim 700$ GeV and $m_{\tilde G'} \sim 600$ GeV reproduces the rate and the distributions of the excess, without conflicting with the stringent constraints from jets plus missing energy analyses and with the CMS constraint on the identical final state.

  12. Moderate excess of pyruvate augments osteoclastogenesis

    Directory of Open Access Journals (Sweden)

    Jenna E. Fong

    2013-03-01

    Cell differentiation leads to adaptive changes in energy metabolism. Conversely, hyperglycemia induces malfunction of many body systems, including bone, suggesting that energy metabolism reciprocally affects cell differentiation. We investigated how the differentiation of bone-resorbing osteoclasts, large polykaryons formed through fusion and growth of cells of monocytic origin, is affected by an excess of the energy substrate pyruvate, and how energy metabolism changes during osteoclast differentiation. Surprisingly, small increases in pyruvate (1–2 mM above basal levels) augmented osteoclastogenesis in vitro and in vivo, while larger increases were not effective in vitro. Osteoclast differentiation increased cell mitochondrial activity and ATP levels, which were further augmented in energy-rich conditions. Conversely, the inhibition of respiration significantly reduced osteoclast number and size. AMP-activated protein kinase (AMPK) acts as a metabolic sensor, which is inhibited in energy-rich conditions. We found that osteoclast differentiation was associated with an increase in AMPK levels and a change in AMPK isoform composition. The increase in osteoclast size induced by pyruvate (1 mM above basal levels) was prevented in the presence of the AMPK activator 5-amino-4-imidazole carboxamide ribonucleotide (AICAR). In keeping with this, inhibition of AMPK using dorsomorphin or siRNA to AMPKγ increased osteoclast size in control cultures to the level observed in the presence of pyruvate. Thus, we have found that a moderate excess of pyruvate enhances osteoclastogenesis, and that AMPK acts to tailor osteoclastogenesis to a cell's bioenergetic capacity.

  13. Surgery for residual convergence excess esotropia.

    Science.gov (United States)

    Patel, Himanshu I; Dawson, Emma; Lee, John

    2011-12-01

    The outcome of bilateral medial rectus posterior fixation sutures +/- central tenotomy was assessed as a secondary procedure for residual convergence excess esotropia in 11 patients. Ten had previously undergone bilateral medial rectus recessions. One had recess/resect surgery on the deviating eye. The average preoperative near angle was 30 prism diopters with a range of 16 to 45 prism diopters. Eight patients underwent bilateral medial rectus posterior fixation sutures with central tenotomy. Two had bilateral medial rectus posterior fixation sutures only, and one had bilateral medial rectus posterior fixation suture, a lateral rectus resection, and an inferior oblique disinsertion. The postoperative near angle ranged from 4-30 prism diopters, with mean of 12 prism diopters. Five patients demonstrated some stereopsis preoperatively, all needing bifocals. Postoperatively, nine patients demonstrated an improvement in stereopsis, none needing bifocals. Two showed smaller near angles and better control without bifocals. Final stereopsis ranged from 30 seconds of arc to 800 seconds of arc. We feel that bilateral medial rectus posterior fixation sutures with or without central tenotomy is a viable secondary procedure for residual convergence excess esotropia.

  14. Mapping interfacial excess in atom probe data

    Energy Technology Data Exchange (ETDEWEB)

    Felfer, Peter, E-mail: peter.felfer@sydney.edu.au [School of Aerospace Mechanical and Mechatronic Engineering, The University of Sydney (Australia); Australian Centre for Microscopy and Microanalysis, The University of Sydney (Australia); Scherrer, Barbara [Australian Centre for Microscopy and Microanalysis, The University of Sydney (Australia); Eidgenossische Technische Hochschule Zürich (Switzerland); Demeulemeester, Jelle [Imec vzw, Kapeldreef 75, Heverlee 3001 (Belgium); Vandervorst, Wilfried [Imec vzw, Kapeldreef 75, Heverlee 3001 (Belgium); Instituut voor Kern- en Stralingsfysica, KU Leuven, Celestijnenlaan 200D, B-3001 Leuven (Belgium); Cairney, Julie M. [School of Aerospace Mechanical and Mechatronic Engineering, The University of Sydney (Australia); Australian Centre for Microscopy and Microanalysis, The University of Sydney (Australia)

    2015-12-15

    Using modern wide-angle atom probes, it is possible to acquire atomic scale 3D data containing 1000s of nm^2 of interfaces. It is therefore possible to probe the distribution of segregated species across these interfaces. Here, we present techniques that allow the production of models for interfacial excess (IE) mapping and discuss the underlying considerations and sampling statistics. We also show how the same principles can be used to achieve thickness mapping of thin films. We demonstrate the effectiveness on example applications, including the analysis of segregation to a phase boundary in stainless steel, segregation to a metal–ceramic interface and the assessment of thickness variations of the gate oxide in a fin-FET. - Highlights: • Using computational geometry, interfacial excess can be mapped for various features in APT. • Suitable analysis models can be created by combining manual modelling and mesh generation algorithms. • Thin film thickness can be mapped with high accuracy using this technique.
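
    As background (a standard definition, not specific to this paper), the interfacial excess being mapped is the Gibbsian excess of a solute $i$ per unit interfacial area,

    $$
    \Gamma_i = \frac{N_i^{\mathrm{excess}}}{A},
    $$

    where $N_i^{\mathrm{excess}}$ is the number of solute atoms above the matrix level attributed to the interface and $A$ is the interfacial area sampled; in atom probe practice the atom count is usually corrected for the detector efficiency.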

  15. Extragalactic Gamma Ray Excess from Coma Supercluster Direction

    Indian Academy of Sciences (India)

    Pantea Davoudifar; S. Jalil Fatemi

    2011-09-01

    The origin of the extragalactic diffuse gamma-ray background is not accurately known, especially because many models need to be considered either to compute the galactic diffuse gamma-ray intensity or to account for the contribution of other extragalactic structures while surveying a specific portion of the sky. A more precise analysis of EGRET data, however, makes it possible to estimate the diffuse gamma-ray flux in the Coma supercluster (i.e., Coma/A1367 supercluster) direction, with a value above 30 MeV of ≃ 1.9 × 10^-6 cm^-2 s^-1, which is considered to be an upper limit for the diffuse gamma rays due to the Coma supercluster. The related total intensity (on average) is calculated to be ∼ 5% of the actual diffuse extragalactic background. The calculated intensity makes it possible to estimate the origin of the extragalactic diffuse gamma ray.

  16. Short inter-pregnancy intervals, parity, excessive pregnancy weight gain and risk of maternal obesity.

    Science.gov (United States)

    Davis, Esa M; Babineau, Denise C; Wang, Xuelei; Zyzanski, Stephen; Abrams, Barbara; Bodnar, Lisa M; Horwitz, Ralph I

    2014-04-01

    To investigate the relationship among parity, length of the inter-pregnancy intervals, excessive pregnancy weight gain in the first pregnancy, and the risk of obesity. Using a prospective cohort study of 3,422 non-obese, non-pregnant US women aged 14-22 years at baseline, adjusted Cox models were used to estimate the association among parity, inter-pregnancy intervals, and excessive pregnancy weight gain in the first pregnancy and the relative hazard rate (HR) of obesity. Compared to nulliparous women, primiparous women with excessive pregnancy weight gain in the first pregnancy had an HR of obesity of 1.79 (95% CI 1.40, 2.29); no significant difference was seen between primiparous women without excessive pregnancy weight gain in the first pregnancy and nulliparous women. Among women with the same pregnancy weight gain in the first pregnancy and the same number of inter-pregnancy intervals (12 and 18 months or ≥18 months), the HR of obesity increased 2.43-fold (95% CI 1.21, 4.89; p = 0.01) for every additional short inter-pregnancy interval. Among women with the same parity and inter-pregnancy interval pattern, women with excessive pregnancy weight gain in the first pregnancy had an HR of obesity 2.41 times higher (95% CI 1.81, 3.21). Primiparity did not increase obesity risk unless the primiparous women had excessive pregnancy weight gain in the first pregnancy, in which case their risk of obesity was greater. Multiparous women with the same excessive pregnancy weight gain in the first pregnancy and at least one additional short inter-pregnancy interval had a significant risk of obesity after childbirth. Perinatal interventions that prevent excessive pregnancy weight gain in the first pregnancy or lengthen the inter-pregnancy interval are necessary for reducing maternal obesity.
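
    A hedged sketch of fitting an adjusted Cox proportional-hazards model of the kind described above, using the lifelines library; the column names are illustrative assumptions and this is not the authors' analysis code.

```python
# Hedged sketch: Cox proportional-hazards model for time to obesity with
# covariates like those described above. Column names are assumed.
import pandas as pd
from lifelines import CoxPHFitter

# df is assumed to have one row per woman with columns such as:
#   time_to_obesity_years, became_obese (0/1), parity,
#   short_interpregnancy_intervals, excess_gain_first_pregnancy (0/1)
def fit_obesity_cox(df: pd.DataFrame) -> CoxPHFitter:
    cph = CoxPHFitter()
    cph.fit(
        df,
        duration_col="time_to_obesity_years",
        event_col="became_obese",
    )
    return cph  # cph.print_summary() reports hazard ratios (exp(coef)) and CIs
```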

  17. Excessive Alcohol Use and Risks to Women's Health

    Science.gov (United States)

    Excessive Alcohol Use and Risks to Women's Health. Although men ...

  18. Excessive Alcohol Use and Risks to Men's Health

    Science.gov (United States)

    Excessive Alcohol Use and Risks to Men's Health. Men are ...

  19. What controls deuterium excess in global precipitation?

    Directory of Open Access Journals (Sweden)

    S. Pfahl

    2013-08-01

    Full Text Available The deuterium excess (d) of precipitation is widely used in the reconstruction of past climatic changes from ice cores. However, its most common interpretation as moisture source temperature cannot directly be inferred from present-day water isotope observations. Here, we use a new empirical relation between d and near-surface relative humidity (RH) together with reanalysis data to globally predict d of surface evaporation from the ocean. The very good quantitative agreement of the predicted hemispherically averaged seasonal cycle with observed d in precipitation indicates that moisture source relative humidity, and not sea surface temperature, is the main driver of d variability on seasonal time scales. There is no convincing evidence that RH might be less important for long-term palaeoclimatic d changes compared to moisture source temperature variations. Ice core d data may thus have to be reinterpreted, focusing on climatic influences on relative humidity during evaporation, in particular related to atmospheric circulation changes.

  20. Spectrophotometric Study of Galaxies with UV Excess

    Science.gov (United States)

    Kazarian, M. A.; Karapetian, E. L.

    2004-01-01

    Results from a spectrophotometric study of 21 galaxies with UV excess are presented. The half widths (FWHM) and equivalent widths of observed spectrum lines of these galaxies, as well as the relative intensities of the emission lines observed in the spectrum of the galaxy Kaz243, are determined. It is conjectured that the latter galaxy has the properties of an Sy2 type galaxy. The electron densities and masses of the gaseous components are found for 15 galaxies, along with the masses of 8 galaxies for which the ratio M/L has been calculated. It is shown that the spectral structures of these galaxies do not depend on whether they are members of physical systems or are isolated.

  1. New galaxies with ultraviolet excess. I

    Energy Technology Data Exchange (ETDEWEB)

    Kazarian, M.A.

    1979-07-01

    A list is given of 136 galaxies with ultraviolet excess found with the 40-in. Schmidt telescope of the Byurakan Observatory with a 1.5-deg objective prism. Of these, 58 were observed at the primary focus of the 2.6-m telescope of the Byurakan Observatory, and 12 at the primary focus of the 6-m telescope of the Special Astronomical Observatory of the USSR Academy of Sciences. These observations and Palomar Sky Survey prints were used for a morphological description of the galaxies. Descriptions are given of the spectra of 17 galaxies obtained with the 6-m telescope of the Special Astronomical Observatory, the 2.6-m telescope of the Byurakan Observatory, and 90-, 107-, and 200-in. telescopes in the United States.

  2. Propylene Glycol Poisoning From Excess Whiskey Ingestion

    Directory of Open Access Journals (Sweden)

    Courtney A. Cunningham MD

    2015-09-01

    Full Text Available In this report, we describe a case of high anion gap metabolic acidosis with a significant osmolal gap attributed to the ingestion of liquor containing propylene glycol. Recently, several reports have characterized severe lactic acidosis occurring in the setting of iatrogenic unintentional overdosing of medications that use propylene glycol as a diluent, including lorazepam and diazepam. To date, no studies have explored potential effects of excess propylene glycol in the setting of alcohol intoxication. Our patient endorsed drinking large volumes of cinnamon flavored whiskey, which was likely Fireball Cinnamon Whisky. To our knowledge, this is the first case of propylene glycol toxicity from an intentional ingestion of liquor containing propylene glycol.
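
    The diagnosis described above hinges on two routine bedside calculations, the anion gap and the osmolal gap. The snippet below shows the standard textbook formulas (US conventional units); the laboratory values are hypothetical and serve only to illustrate the arithmetic, not to reproduce this patient's data:

```python
# Standard anion gap and osmolal gap calculations (textbook formulas, mg/dL and
# mEq/L units). All numeric values below are hypothetical illustrations.
def anion_gap(na_meq_l, cl_meq_l, hco3_meq_l):
    return na_meq_l - (cl_meq_l + hco3_meq_l)

def calculated_osmolality(na_meq_l, glucose_mg_dl, bun_mg_dl, ethanol_mg_dl=0.0):
    # 2*Na + glucose/18 + BUN/2.8, optionally + ethanol/3.7
    return 2 * na_meq_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8 + ethanol_mg_dl / 3.7

def osmolal_gap(measured_osm, calc_osm):
    return measured_osm - calc_osm

na, cl, hco3 = 140, 100, 12                                  # hypothetical labs
print("Anion gap:", anion_gap(na, cl, hco3))                 # ~28 (elevated)
calc = calculated_osmolality(na, glucose_mg_dl=90, bun_mg_dl=14, ethanol_mg_dl=120)
print("Osmolal gap:", osmolal_gap(measured_osm=345, calc_osm=calc))   # ~23 (elevated)
# An unexplained osmolal gap suggests an unmeasured osmole such as propylene glycol.
```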

  3. Faint Infrared-Excess Field Galaxies FROGs

    CERN Document Server

    Moustakas, L A; Zepf, S E; Bunker, A J

    1997-01-01

    Deep near-infrared and optical imaging surveys in the field reveal a curious population of galaxies that are infrared-bright (I-K>4), yet with relatively blue optical colors (V-I<2). Their surface density at K~20 is high enough that, if placed at z>1 as our models suggest, their space densities are about one-tenth of phi*. The colors of these "faint red outlier galaxies" (fROGs) may derive from exceedingly old underlying stellar populations, a dust-embedded starburst or AGN, or a combination thereof. Determining the nature of these fROGs, and their relation with the I-K>6 "extremely red objects," has implications for our understanding of the processes that give rise to infrared-excess galaxies in general. We report on an ongoing study of several targets with HST & Keck imaging and Keck/LRIS multislit spectroscopy.

  4. Cool WISPs for stellar cooling excesses

    CERN Document Server

    Giannotti, Maurizio; Redondo, Javier; Ringwald, Andreas

    2015-01-01

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a preference for a mild non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP represents the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO.

  5. Cool WISPs for stellar cooling excesses

    Energy Technology Data Exchange (ETDEWEB)

    Giannotti, Maurizio [Barry Univ., Miami Shores, FL (United States). Physical Sciences; Irastorza, Igor [Zaragoza Univ. (Spain). Dept. de Fisica Teorica; Redondo, Javier [Zaragoza Univ. (Spain). Dept. de Fisica Teorica; Max-Planck-Institut fuer Physik, Muenchen (Germany); Ringwald, Andreas [DESY Hamburg (Germany). Theory Group

    2015-12-15

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a preference for a mild non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP represents the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO.

  6. On dilatons and the LHC diphoton excess

    Science.gov (United States)

    Megías, Eugenio; Pujolàs, Oriol; Quirós, Mariano

    2016-05-01

    We study soft wall models that can embed the Standard Model and a naturally light dilaton. Exploiting the full capabilities of these models, we identify the parameter space that allows passing Electroweak Precision Tests with a moderate Kaluza-Klein scale, around 2 TeV. We analyze the coupling of the dilaton with Standard Model (SM) fields in the bulk, and discuss two applications: i) Models with a light dilaton as the first particle beyond the SM pass all observational tests quite easily, even with a dilaton lighter than the Higgs. However, the possibility of a 125 GeV dilaton as a Higgs impostor is essentially disfavored; ii) We show how to extend the soft wall models to realize a 750 GeV dilaton that could explain the recently reported diphoton excess at the LHC.

  7. Desaturation of excess intramyocellular triacylglycerol in obesity

    DEFF Research Database (Denmark)

    Haugaard, S B; Madsbad, S; Mu, Huiling;

    2010-01-01

    OBJECTIVE: Excess intramyocellular triacylglycerol (IMTG), found especially in obese women, is slowly metabolized and, therefore, prone to longer exposure to intracellular desaturases. Accordingly, it was hypothesized that IMTG content correlates inversely with IMTG fatty acid (FA) saturation in sedentary subjects. In addition, it was validated if IMTG palmitic acid is associated with insulin resistance as suggested earlier. DESIGN: Cross-sectional human study. SUBJECTS: In skeletal muscle biopsies, which were obtained from sedentary subjects (34 women, age 48+/-2 years (27 obese including 7 type 2 diabetes (T2DM), body mass index (BMI)=35.5+/-0.8 kg m(-2)) and 25 men, age 49+/-2 years (20 obese including 6 T2DM, BMI=35.8+/-0.8 kg m(-2))), IMTG FA composition was determined by gas-liquid chromatography after separation from phospholipids by thin-layer chromatography. RESULTS: Independently of gender...

  8. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    Science.gov (United States)

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
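
    The core of the algorithm sketched above is that, once the centripetal term is approximated with the angular velocity carried over from the previous time step, each sample reduces to a linear solve for the six unknown acceleration components. The following sketch illustrates that idea; it is a simplified reading of the description above, not the authors' implementation, and the sensor layout, integration scheme and function names are assumptions:

```python
# Sketch: six single-axis accelerometers at known positions r_i with sensing
# directions n_i on a rigid body measure n_i . [a_c + alpha x r_i + omega x (omega x r_i)].
# Carrying omega forward from the previous step (a finite-difference treatment of
# the nonlinear centripetal term) makes each time step a linear least-squares solve
# for the linear acceleration a_c and angular acceleration alpha.
import numpy as np

def solve_step(positions, directions, measurements, omega_prev):
    """positions, directions: (6,3); measurements: (6,); omega_prev: (3,)."""
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, (r, n, m) in enumerate(zip(positions, directions, measurements)):
        A[i, :3] = n                               # coefficient of a_c
        A[i, 3:] = np.cross(r, n)                  # n . (alpha x r) = (r x n) . alpha
        centripetal = np.cross(omega_prev, np.cross(omega_prev, r))
        b[i] = m - n @ centripetal                 # move the known centripetal term to the RHS
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                            # a_c, alpha

def reconstruct(positions, directions, signals, dt):
    """signals: (T,6) accelerometer time series. Returns a_c(t), alpha(t), omega(t)."""
    omega = np.zeros(3)
    out_a, out_alpha, out_omega = [], [], []
    for m in signals:
        a_c, alpha = solve_step(positions, directions, m, omega)
        omega = omega + alpha * dt                 # integrate alpha to update omega
        out_a.append(a_c); out_alpha.append(alpha); out_omega.append(omega.copy())
    return np.array(out_a), np.array(out_alpha), np.array(out_omega)
```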

  9. Vitamin paradox in obesity: Deficiency or excess?

    Science.gov (United States)

    Zhou, Shi-Sheng; Li, Da; Chen, Na-Na; Zhou, Yiming

    2015-08-25

    Since synthetic vitamins were used to fortify food and as supplements in the late 1930s, vitamin intake has significantly increased. This has been accompanied by an increased prevalence of obesity, a condition associated with diabetes, hypertension, cardiovascular disease, asthma and cancer. Paradoxically, obesity is often associated with low levels of fasting serum vitamins, such as folate and vitamin D. Recent studies on folic acid fortification have revealed another paradoxical phenomenon: obesity exhibits low fasting serum but high erythrocyte folate concentrations, with high levels of serum folate oxidation products. High erythrocyte folate status is known to reflect long-term excess folic acid intake, while increased folate oxidation products suggest an increased folate degradation because obesity shows an increased activity of cytochrome P450 2E1, a monooxygenase enzyme that can use folic acid as a substrate. There is also evidence that obesity increases niacin degradation, manifested by increased activity/expression of niacin-degrading enzymes and high levels of niacin metabolites. Moreover, obesity most commonly occurs in those with a low excretory reserve capacity (e.g., due to low birth weight/preterm birth) and/or a low sweat gland activity (black race and physical inactivity). These lines of evidence raise the possibility that low fasting serum vitamin status in obesity may be a compensatory response to chronic excess vitamin intake, rather than vitamin deficiency, and that obesity could be one of the manifestations of chronic vitamin poisoning. In this article, we discuss vitamin paradox in obesity from the perspective of vitamin homeostasis.

  10. Invisible excess of sense in social interaction.

    Science.gov (United States)

    Koubová, Alice

    2014-01-01

    The question of visibility and invisibility in social understanding is examined here. First, the phenomenological account of expressive phenomena and key ideas of the participatory sense-making theory are presented with regard to the issue of visibility. These accounts plead for the principal visibility of agents in interaction. Although participatory sense-making does not completely rule out the existence of opacity and invisible aspects of agents in interaction, it assumes the capacity of agents to integrate disruptions, opacity and misunderstandings in mutual modulation. Invisibility is classified as the dialectical counterpart of visibility, i.e., as a lack of sense whereby the dynamics of perpetual asking, of coping with each other and of improvements in interpretation are brought into play. By means of empirical exemplification this article aims at demonstrating aspects of invisibility in social interaction which complement the enactive interpretation. Without falling back into Cartesianism, it shows through dramaturgical analysis of a practice called "(Inter)acting with the inner partner" that social interaction includes elements of opacity and invisibility whose role is performative. This means that opacity is neither an obstacle to be overcome with more precise understanding nor a lack of meaning, but rather an excess of sense, a "hiddenness" of something real that has an "active power" (Merleau-Ponty). In this way it contributes to on-going social understanding as a hidden potentiality that naturally enriches, amplifies and in part constitutes human participation in social interactions. It is also shown here that this invisible excess of sense already functions on the level of self-relationship due to the essential self-opacity and self-alterity of each agent of social interaction. The analysis consequently raises two issues: the question of the enactive ethical stance toward the alterity of the other and the question of the autonomy of the self

  11. Estimating Cosmological Parameter Covariance

    CERN Document Server

    Taylor, Andy

    2014-01-01

    We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
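
    A small numerical experiment makes the kind of bias discussed above concrete: inverting a sample data covariance estimated from a finite number of realizations overestimates the precision, and hence misstates the parameter covariance inferred from the likelihood width. The correction factor used below is the standard result for an inverse-Wishart-distributed sample precision matrix, quoted here as general background rather than as the specific unbiased estimators proposed in this paper; all numbers are illustrative:

```python
# Monte Carlo illustration of the bias of the inverse sample covariance.
import numpy as np

rng = np.random.default_rng(1)
n_d, n_s, n_trials = 10, 40, 2000          # data dimension, sims per covariance, trials
true_cov = np.eye(n_d)

ratios_raw, ratios_corrected = [], []
for _ in range(n_trials):
    sims = rng.multivariate_normal(np.zeros(n_d), true_cov, size=n_s)
    sample_cov = np.cov(sims, rowvar=False)      # unbiased sample covariance
    prec = np.linalg.inv(sample_cov)             # biased estimate of the precision matrix
    ratios_raw.append(np.trace(prec) / n_d)      # would be 1 if unbiased
    correction = (n_s - n_d - 2) / (n_s - 1)     # standard debiasing factor
    ratios_corrected.append(np.trace(correction * prec) / n_d)

print("mean precision / truth, raw:      ", np.mean(ratios_raw))        # noticeably > 1
print("mean precision / truth, corrected:", np.mean(ratios_corrected))  # ~ 1
```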

  12. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also key to the transfer of chemical measurements and the basis of the nuclear material balance.

  13. The LHC diphoton excess as a W-ball

    CERN Document Server

    Arbuzov, B A

    2016-01-01

    We consider the possibility that the 750 GeV diphoton excess at the LHC corresponds to a heavy $WW$ zero-spin resonance. The resonance appears due to the would-be anomalous triple interaction of the weak bosons, which is defined by the well-known coupling constant $\lambda$. The $\gamma\gamma\,\,750\, GeV$ anomaly may correspond to a weak isotopic spin 0 pseudoscalar state. We obtain estimates for the effect, which qualitatively agree with ATLAS data. Effects are predicted in the production of $W^+ W^-, (Z,\gamma) (Z,\gamma)$ via the resonance $X_{PS}$ with $M_{PS} \simeq 750\,GeV$, which could be reliably checked at the upgraded LHC at $\sqrt{s}\,=\,13\, TeV$. In the framework of an approach to the spontaneous generation of the triple anomalous interaction, its coupling constant is estimated to be $\lambda = -\,0.020\pm 0.005$, in agreement with existing restrictions. A specific prediction of the hypothesis is a significant effect in the decay channel $X_{PS} \to \gamma\,l^+\,l^-\,(l = e,\,\mu)$, whose branching ratio occurs t...

  14. Understanding the Code: keeping accurate records.

    Science.gov (United States)

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met.

  15. A Comprehensive Census of Nearby Infrared Excess Stars

    Science.gov (United States)

    Cotten, Tara H.; Song, Inseok

    2016-07-01

    The conclusion of the Wide-Field Infrared Survey Explorer (WISE) mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and exploit all available data for future missions such as the James Webb Space Telescope. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and all-sky WISE (AllWISE) catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3σ or 5σ significance of excess in the mid- and far-infrared. Through procedures including spectral energy distribution fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 “Prime” infrared excess stars, of which 74 are new sources of excess, and >1200 are “Reserved” stars, of which 950 are new sources of excess. The main catalog of infrared excess stars are nearby, bright, and either demonstrate excess in more than one passband or have infrared spectroscopy confirming the infrared excess. This study identifies stars that display a spectral energy distribution suggestive of a secondary or post-protoplanetary generation of dust, and they are ideal targets for future optical and infrared imaging observations. The final catalogs of stars summarize the past work using infrared excess to detect dust disks, and with the most extensive compilation of infrared excess stars (˜1750) to date, we investigate various relationships among stellar and disk parameters.
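
    The 3σ/5σ selection described above boils down to comparing the observed flux in a WISE band with the photospheric flux predicted from the SED fit, normalized by the combined uncertainty. The schematic check below is not the authors' pipeline; fluxes, uncertainties and thresholds are hypothetical and only illustrate the selection logic:

```python
# Schematic infrared-excess significance test in one passband.
import numpy as np

def excess_significance(f_observed, f_photosphere, sigma_obs, sigma_model):
    """Significance of the excess above the predicted photospheric flux."""
    return (f_observed - f_photosphere) / np.hypot(sigma_obs, sigma_model)

f_obs   = np.array([12.0, 5.2, 30.1])     # observed fluxes (e.g. mJy), hypothetical
f_phot  = np.array([10.0, 5.0, 18.0])     # photospheric predictions from SED fitting
sig_obs = np.array([0.4, 0.3, 1.0])
sig_mod = np.array([0.3, 0.2, 0.8])

sig = excess_significance(f_obs, f_phot, sig_obs, sig_mod)
print(sig)            # approx [4.0, 0.55, 9.45]
print(sig > 3)        # 3-sigma selection; use > 5 for the stricter cut
```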

  16. Mechanisms for Reduced Excess Sludge Production in the Cannibal Process.

    Science.gov (United States)

    Labelle, Marc-André; Dold, Peter L; Comeau, Yves

    2015-08-01

    Reducing excess sludge production is increasingly attractive as a result of rising costs and constraints with respect to sludge treatment and disposal. A technology in which the mechanisms remain not well understood is the Cannibal process, for which very low sludge yields have been reported. The objective of this work was to use modeling as a means to characterize excess sludge production at a full-scale Cannibal facility by providing a long sludge retention time and removing trash and grit by physical processes. The facility was characterized by using its historical data, from discussion with the staff and by conducting a sampling campaign to prepare a solids inventory and an overall mass balance. At the evaluated sludge retention time of 400 days, the sum of the daily loss of suspended solids to the effluent and of the waste activated sludge solids contributed approximately equally to the sum of solids that are wasted daily as trash and grit from the solids separation module. The overall sludge production was estimated to be 0.14 g total suspended solids produced/g chemical oxygen demand removed. The essential functions of the Cannibal process for the reduction of sludge production appear to be to remove trash and grit from the sludge by physical processes of microscreening and hydrocycloning, respectively, and to provide a long sludge retention time, which allows the slow degradation of the "unbiodegradable" influent particulate organics (XU,Inf) and the endogenous residue (XE). The high energy demand of 1.6 kWh/m³ of treated wastewater at the studied facility limits the niche of the Cannibal process to small- to medium-sized facilities in which sludge disposal costs are high but electricity costs are low.
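
    The two headline numbers above, a roughly 400-day sludge retention time and an observed yield of 0.14 g TSS per g COD removed, follow from a simple solids mass balance. The back-of-the-envelope sketch below reproduces that arithmetic with hypothetical inputs chosen only to match the reported figures; it is not the facility's actual operating data:

```python
# Toy solids mass balance: observed sludge yield = total solids leaving the system
# per unit COD removed; SRT = solids inventory / daily solids leaving. All inputs
# are hypothetical and chosen so the outputs match the figures quoted above.
flow_m3_d        = 2000.0       # plant flow
cod_removed_g_m3 = 400.0        # COD removed across the plant
effluent_tss_g_d = 30_000.0     # suspended solids lost to the effluent per day
was_tss_g_d      = 40_000.0     # waste activated sludge per day
trash_grit_g_d   = 42_000.0     # trash + grit removed by the separation module per day

solids_out_g_d = effluent_tss_g_d + was_tss_g_d + trash_grit_g_d
observed_yield = solids_out_g_d / (flow_m3_d * cod_removed_g_m3)
print(f"Observed yield: {observed_yield:.2f} g TSS per g COD removed")   # 0.14

inventory_g = 44_800_000.0      # total solids held in the reactors (hypothetical)
srt_days = inventory_g / solids_out_g_d
print(f"SRT: {srt_days:.0f} days")                                       # 400
```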

  17. Retinoic Acid Excess Impairs Amelogenesis Inducing Enamel Defects

    Science.gov (United States)

    Morkmued, Supawich; Laugel-Haushalter, Virginie; Mathieu, Eric; Schuhbaur, Brigitte; Hemmerlé, Joseph; Dollé, Pascal; Bloch-Zupan, Agnès; Niederreither, Karen

    2017-01-01

    Abnormalities of enamel matrix proteins deposition, mineralization, or degradation during tooth development are responsible for a spectrum of either genetic diseases termed Amelogenesis imperfecta or acquired enamel defects. To assess if environmental/nutritional factors can exacerbate enamel defects, we investigated the role of the active form of vitamin A, retinoic acid (RA). Robust expression of RA-degrading enzymes Cyp26b1 and Cyp26c1 in developing murine teeth suggested RA excess would reduce tooth hard tissue mineralization, adversely affecting enamel. We employed a protocol where RA was supplied to pregnant mice as a food supplement, at a concentration estimated to result in moderate elevations in serum RA levels. This supplementation led to severe enamel defects in adult mice born from pregnant dams, with most severe alterations observed for treatments from embryonic day (E)12.5 to E16.5. We identified the enamel matrix proteins enamelin (Enam), ameloblastin (Ambn), and odontogenic ameloblast-associated protein (Odam) as target genes affected by excess RA, exhibiting mRNA reductions of over 20-fold in lower incisors at E16.5. RA treatments also affected bone formation, reducing mineralization. Accordingly, craniofacial ossification was drastically reduced after 2 days of treatment (E14.5). Massive RNA-sequencing (RNA-seq) was performed on E14.5 and E16.5 lower incisors. Reductions in Runx2 (a key transcriptional regulator of bone and enamel differentiation) and its targets were observed at E14.5 in RA-exposed embryos. RNA-seq analysis further indicated that bone growth factors, extracellular matrix, and calcium homeostasis were perturbed. Genes mutated in human AI (ENAM, AMBN, AMELX, AMTN, KLK4) were reduced in expression at E16.5. Our observations support a model in which elevated RA signaling at fetal stages affects dental cell lineages. Thereafter enamel protein production is impaired, leading to permanent enamel alterations. PMID:28111553

  18. Sonolência excessiva Excessive daytime sleepiness

    Directory of Open Access Journals (Sweden)

    Lia Rita Azeredo Bittencourt

    2005-05-01

    Full Text Available Sleepiness is a biological function, defined as an increased probability of falling asleep. Excessive sleepiness (ES), or hypersomnia, refers to an increased propensity to sleep, with a subjective compulsion to sleep, involuntary napping and sleep attacks when sleep is inappropriate. The main causes of excessive sleepiness are chronic sleep deprivation (insufficient sleep), obstructive sleep apnea-hypopnea syndrome (OSAHS), narcolepsy, restless legs syndrome/periodic limb movement disorder (RLS/PLMD), circadian rhythm disorders, use of drugs and medications, and idiopathic hypersomnia. The main consequences are impaired performance at school and at work, impaired family and social relationships, neuropsychological and cognitive alterations, and an increased risk of accidents. Treatment of excessive sleepiness should be directed at the specific causes: increased sleep time and sleep hygiene in voluntary sleep deprivation, CPAP (Continuous Positive Airway Pressure) in obstructive sleep apnea-hypopnea syndrome, exercise and dopaminergic agents in restless legs syndrome/periodic limb movements, phototherapy and melatonin in circadian rhythm disorders, withdrawal of drugs that cause excessive sleepiness, and use of wake-promoting stimulants.

  19. Posterior fixation suture and convergence excess esotropia.

    Science.gov (United States)

    Steffen; Auffarth; Kolling

    1998-09-01

    The present study investigates the results of Cuppers' 'Fadenoperation' in patients with non-accommodative convergence excess esotropia. Particular attention is given to postoperative eye alignment at distance fixation. Group 1 (n=96) included patients with a 'normal' convergence excess. The manifest near angles (mean ET 16.73 degrees +/- 6.33 degrees, range 4 degrees -33 degrees ) were roughly twice the size of the distance angles (mean ET 6.50 degrees +/- 3.62 degrees, range 0 degrees -14 degrees ). These patients were treated with a bilateral fadenoperation of the medial recti without additional eye muscle surgery. Three months after surgery, the mean postoperative angles were XT 0.5 degrees +/- 3.3 degrees (range XT 11 degrees -ET 5 degrees ) for distance fixation, and ET 2.7 degrees +/- 3.6 degrees (range XT 5 degrees -ET 14 degrees ) for near fixation, respectively. Postoperative convergent angles at near fixation >ET 10 degrees were present in two patients (1.9%). Group 2 (n=21) included patients with a mean preoperative distance angle of ET 9.2 degrees +/- 3.7 degrees (range 6 degrees -16 degrees ) and a mean preoperative near angle of ET 23.4 degrees +/- 3.1 degrees (range 16 degrees -31 degrees ). These patients were operated on with a bilateral fadenoperation of the medial recti and a simultaneous recession of one or both medial rectus muscles. Mean postoperative angles were XT 0.5 degrees +/- 4.6 degrees (range XT 12 degrees -ET 7 degrees ) for distance fixation and ET 1.4 degrees +/- 4.5 degrees (range XT 8 degrees -ET 13 degrees ) for near fixation, respectively. In this group, 2 patients (10.6%) had a postoperative exotropia >XT 5 degrees at distance fixation, and two patients had residual esotropia>ET 10 degrees at near fixation. Group 3 (n=17) included patients with a pronounced non-accommodative convergence excess. Near angle values (mean of 17.8 degrees +/- 5.3 degrees, range ET 7 degrees -26 degrees ) were several times higher than the distance

  20. On the Fluctuation Induced Excess Conductivity in Stainless Steel Sheathed MgB2 Tapes

    Directory of Open Access Journals (Sweden)

    Suchitra Rajput

    2013-01-01

    Full Text Available We report on the analysis of fluctuation-induced excess conductivity in the temperature-dependent transport behavior of the in situ prepared MgB2 tapes. Scaling functions for critical fluctuations are employed to investigate the excess conductivity of these tapes around the transition. Two scaling models for excess conductivity in the absence of a magnetic field, namely the Aslamazov-Larkin model and the Lawrence-Doniach model, have been employed for the study. Fitting the experimental conductivity data with these models indicates the three-dimensional nature of conduction of the carriers, as opposed to the 2D character exhibited by the HTSCs. The coherence length amplitude estimated from the fitted model is ~21 Å.
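
    As background to the fitting procedure mentioned above: the excess (para)conductivity is usually obtained by subtracting the extrapolated normal-state conductivity from the measured one, and in the standard three-dimensional Aslamazov-Larkin form it diverges as the inverse square root of the reduced temperature. The sketch below illustrates those two standard expressions; it is not the authors' fitting code, and the example numbers are arbitrary:

```python
# Standard definitions used in fluctuation-conductivity analyses (illustrative only).
import numpy as np

E_CHARGE = 1.602176634e-19      # C
HBAR     = 1.054571817e-34      # J s

def excess_conductivity(rho_measured, rho_normal):
    """Delta sigma = 1/rho_measured - 1/rho_normal (SI units, S/m)."""
    return 1.0 / rho_measured - 1.0 / rho_normal

def al_3d(eps, xi0_m):
    """3D Aslamazov-Larkin form: e^2 / (32 hbar xi0) * eps^(-1/2), eps = (T - Tc)/Tc."""
    return E_CHARGE**2 / (32.0 * HBAR * xi0_m) * eps**-0.5

# Example: reduced temperatures just above Tc and a coherence length of ~21 Angstrom
eps = np.array([0.01, 0.02, 0.05, 0.1])
print(al_3d(eps, xi0_m=21e-10))
```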

  1. Misperceived pre-pregnancy body weight status predicts excessive gestational weight gain: findings from a US cohort study

    Directory of Open Access Journals (Sweden)

    Rifas-Shiman Sheryl L

    2008-12-01

    Full Text Available Abstract Background Excessive gestational weight gain promotes poor maternal and child health outcomes. Weight misperception is associated with weight gain in non-pregnant women, but no data exist during pregnancy. The purpose of this study was to examine the association of misperceived pre-pregnancy body weight status with excessive gestational weight gain. Methods At study enrollment, participants in Project Viva reported weight, height, and perceived body weight status by questionnaire. Our study sample comprised 1537 women who had either normal or overweight/obese pre-pregnancy BMI. We created 2 categories of pre-pregnancy body weight status misperception: normal weight women who identified themselves as overweight ('overassessors') and overweight/obese women who identified themselves as average or underweight ('underassessors'). Women who correctly perceived their body weight status were classified as either normal weight or overweight/obese accurate assessors. We performed multivariable logistic regression to determine the odds of excessive gestational weight gain according to 1990 Institute of Medicine guidelines. Results Of the 1029 women with normal pre-pregnancy BMI, 898 (87%) accurately perceived and 131 (13%) overassessed their weight status. 508 women were overweight/obese, of whom 438 (86%) accurately perceived and 70 (14%) underassessed their pre-pregnancy weight status. By the end of pregnancy, 823 women (54%) gained excessively. Compared with normal weight accurate assessors, the adjusted odds of excessive gestational weight gain was 2.0 (95% confidence interval [CI]: 1.3, 3.0) in normal weight overassessors, 2.9 (95% CI: 2.2, 3.9) in overweight/obese accurate assessors, and 7.6 (95% CI: 3.4, 17.0) in overweight/obese underassessors. Conclusion Misperceived pre-pregnancy body weight status was associated with excessive gestational weight gain among both normal weight and overweight/obese women, with the greatest likelihood of excessive gain among overweight/obese underassessors.
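
    The adjusted odds ratios above are the kind of output produced by a multivariable logistic regression with normal-weight accurate assessors as the reference category. The sketch below shows such a model fitted with statsmodels on synthetic data; the category labels, covariates and simulated effect sizes are assumptions for illustration only, not Project Viva data:

```python
# Multivariable logistic regression sketch: odds of excessive gestational weight
# gain by weight-perception category, adjusted for age. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1500
category = rng.choice(
    ["normal_accurate", "normal_overassess", "ow_ob_accurate", "ow_ob_underassess"],
    size=n, p=[0.55, 0.08, 0.30, 0.07])
age = rng.normal(30, 5, n)

bump = {"normal_accurate": 0.0, "normal_overassess": 0.7,
        "ow_ob_accurate": 1.0, "ow_ob_underassess": 2.0}      # simulated log-odds shifts
logit = -0.2 + 0.02 * (age - 30) + np.vectorize(bump.get)(category)
excessive_gwg = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"excessive_gwg": excessive_gwg, "category": category, "age": age})
model = smf.logit(
    "excessive_gwg ~ C(category, Treatment(reference='normal_accurate')) + age",
    data=df).fit(disp=False)
print(np.exp(model.params))      # odds ratios relative to normal-weight accurate assessors
print(np.exp(model.conf_int()))  # 95% confidence intervals
```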

  2. The Neurometabolic Fingerprint of Excessive Alcohol Drinking

    Science.gov (United States)

    Meinhardt, Marcus W; Sévin, Daniel C; Klee, Manuela L; Dieter, Sandra; Sauer, Uwe; Sommer, Wolfgang H

    2015-01-01

    ‘Omics' techniques are widely used to identify novel mechanisms underlying brain function and pathology. Here we applied a novel metabolomics approach to further ascertain the role of frontostriatal brain regions for the expression of addiction-like behaviors in rat models of alcoholism. Rats were made alcohol dependent via chronic intermittent alcohol vapor exposure. Following a 3-week abstinence period, rats had continuous access to alcohol in a two-bottle, free-choice paradigm for 7 weeks. Nontargeted flow injection time-of-flight mass spectrometry was used to assess global metabolic profiles of two cortical (prelimbic and infralimbic) and two striatal (accumbens core and shell) brain regions. Alcohol consumption produces pronounced global effects on neurometabolomic profiles, leading to a clear separation of metabolic phenotypes between treatment groups. Further comparisons of regional tissue levels of various metabolites, most notably dopamine and Met-enkephalin, allow the extrapolation of alcohol consumption history. Finally, a high-drinking metabolic fingerprint was identified indicating a distinct alteration of central energy metabolism in the accumbens shell of excessively drinking rats that could indicate a so far unrecognized pathophysiological mechanism in alcohol addiction. In conclusion, global metabolic profiling from distinct brain regions by mass spectrometry identifies profiles reflective of an animal's drinking history and provides a versatile tool to further investigate pathophysiological mechanisms in alcohol dependence. PMID:25418809

  3. Quirky Explanations for the Diphoton Excess

    CERN Document Server

    Curtin, David

    2015-01-01

    We propose two simple quirk models to explain the recently reported 750 GeV diphoton excesses at ATLAS and CMS. It is already well-known that a real singlet scalar $\phi$ with Yukawa couplings $\phi \bar X X$ to vector-like fermions $X$ with mass $m_X > m_\phi/2$ can easily explain the observed signal, provided $X$ carries both SM color and electric charge. We instead consider first the possibility that the pair production of a fermion, charged under both SM gauge groups and a confining $SU(3)_v$ gauge group, is responsible. If pair produced, it forms a quirky bound state, which promptly annihilates into gluons, photons, or v-gluons. This has the advantage of being able to explain a sizable width for the diphoton resonance, but is already in some tension with existing displaced searches and dijet resonance bounds. We therefore propose a hybrid Quirk-Scalar model, in which the fermion of the simple $\phi \bar X X$ toy model is charged under the additional $SU(3)_v$ confining gauge group. Constraints on the new ...

  4. Vergence adaptation in subjects with convergence excess.

    Science.gov (United States)

    Nilsson, Maria; Brautaset, Rune L

    2011-03-01

    The main purpose of this study was to evaluate the vergence adaptive ability in subjects diagnosed with convergence excess (CE) phoria (ie, subjects with an esophoric shift from distance to near but without an intermittent tropia at near). Vergence adaptation was measured at far and near with both base-in and base-out prisms using a "flashed" Maddox rod technique in 20 control subjects and 16 subjects with CE. In addition, accommodative adaptation and the stimulus AC/A and CA/C cross-links were measured. The AC/A and CA/C ratios were found to be high and low, respectively, and accommodative adaptation was found to be reduced in CE subjects as compared with the controls (P<0.005), all as predicted by the present theory. However, vergence adaptive ability was found to be reduced in the CE subjects at both distance and near and in response to both base-in and base-out prisms (P=0.002). This finding is not in accordance with and is difficult to reconcile with the present theory of CE.

  5. Armodafinil in the treatment of excessive sleepiness.

    Science.gov (United States)

    Rosenberg, Russell; Bogan, Richard

    2010-01-01

    Excessive sleepiness (ES) is a widespread condition, commonly the result of a sleep/ wake disorder such as obstructive sleep apnea (OSA), shift-work disorder (SWD), or narcolepsy. ES poses significant health and safety concerns in patients. Numerous interventions are available to treat the underlying causes of ES and ES itself, including behavioral measures, mechanical devices, and pharmacologic agents. This review explores the evidence supporting the use of armodafinil to treat ES associated with OSA, SWD, and narcolepsy. Armodafinil is an oral non-amphetamine wake-promoting agent, the R-isomer of racemic modafinil. Armodafinil and modafinil share many clinical and pharmacologic properties and are distinct from central nervous system stimulants; however, the mechanisms of action of modafinil and armodafinil are poorly characterized. Compared with modafinil, the wake-promoting effects of armodafinil persist later in the day. It is for this reason that armodafinil may be a particularly appropriate therapy for patients with persistent ES due to OSA, SWD, or narcolepsy.

  6. What controls deuterium excess in global precipitation?

    Directory of Open Access Journals (Sweden)

    S. Pfahl

    2014-04-01

    Full Text Available The deuterium excess (d) of precipitation is widely used in the reconstruction of past climatic changes from ice cores. However, its most common interpretation as moisture source temperature cannot directly be inferred from present-day water isotope observations. Here, we use a new empirical relation between d and near-surface relative humidity (RH) together with reanalysis data to globally predict d of surface evaporation from the ocean. The very good quantitative agreement of the predicted hemispherically averaged seasonal cycle with observed d in precipitation indicates that moisture source relative humidity, and not sea surface temperature, is the main driver of d variability on seasonal timescales. Furthermore, we review arguments for an interpretation of long-term palaeoclimatic d changes in terms of moisture source temperature, and we conclude that there remains no sufficient evidence that would justify neglecting the influence of RH on such palaeoclimatic d variations. Hence, we suggest that either the interpretation of d variations in palaeorecords should be adapted to reflect climatic influences on RH during evaporation, in particular atmospheric circulation changes, or new arguments for an interpretation in terms of moisture source temperature will have to be provided based on future research.

  7. Child mortality estimation: consistency of under-five mortality rate estimates using full birth histories and summary birth histories.

    Directory of Open Access Journals (Sweden)

    Romesh Silva

    Full Text Available BACKGROUND: Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. METHODS AND FINDINGS: Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but don't add substantial additional insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. CONCLUSIONS: Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well. However, the observed inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations

  8. Convergence Insufficiency/Divergence Insufficiency Convergence Excess/Divergence Excess: Some Facts and Fictions.

    Science.gov (United States)

    Khawam, Edward; Abiad, Bachir; Boughannam, Alaa; Saade, Joanna; Alameddine, Ramzi

    2015-01-01

    Great discrepancies are often encountered between the distance fixation and the near-fixation esodeviations and exodeviations. They are all attributed to either anomalies of the AC/A ratio or anomalies of the fusional convergence or divergence amplitudes. We report a case with pseudoconvergence insufficiency and another one with pseudoaccommodative convergence excess. In both cases, conv./div. excess and insufficiency were erroneously attributed to anomalies of the AC/A ratio or to anomalies of the fusional amplitudes. Our purpose is to show that numerous factors, other than anomalies in the AC/A ratio or anomalies in the fusional conv. or divergence amplitudes, can contaminate either the distance or the near deviations. This results in significant discrepancies between the distance and the near deviations despite a normal AC/A ratio and normal fusional amplitudes, leading to erroneous diagnoses and inappropriate treatment models.

  9. Convergence Insufficiency/Divergence Insufficiency Convergence Excess/Divergence Excess: Some Facts and Fictions

    Directory of Open Access Journals (Sweden)

    Edward Khawam

    2015-01-01

    Full Text Available Great discrepancies are often encountered between the distance fixation and the near-fixation esodeviations and exodeviations. They are all attributed to either anomalies of the AC/A ratio or anomalies of the fusional convergence or divergence amplitudes. We report a case with pseudoconvergence insufficiency and another one with pseudoaccommodative convergence excess. In both cases, conv./div. excess and insufficiency were erroneously attributed to anomalies of the AC/A ratio or to anomalies of the fusional amplitudes. Our purpose is to show that numerous factors, other than anomalies in the AC/A ratio or anomalies in the fusional conv. or divergence amplitudes, can contaminate either the distance or the near deviations. This results in significant discrepancies between the distance and the near deviations despite a normal AC/A ratio and normal fusional amplitudes, leading to erroneous diagnoses and inappropriate treatment models.

  10. Di-photon excess at LHC and the gamma ray excess at the Galactic Centre

    Energy Technology Data Exchange (ETDEWEB)

    Hektor, Andi [National Institute of Chemical Physics and Biophysics,Rävala pst. 10, 10143 Tallinn (Estonia); Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala pst. 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia)

    2016-07-25

    Motivated by the recent indications for a 750 GeV resonance in the di-photon final state at the LHC, in this work we analyse the compatibility of the excess with the broad photon excess detected at the Galactic Centre. Intriguingly, by analysing the parameter space of an effective model where a 750 GeV pseudoscalar particle mediates the interaction between the Standard Model and a scalar dark sector, we prove the compatibility of the two signals. We show, however, that the LHC mono-jet searches and the Fermi LAT measurements strongly limit the viable parameter space. We comment on the possible impact of cosmic antiproton flux measurements by the AMS-02 experiment.

  11. Spectrum of excess mortality due to carbapenem-resistant Klebsiella pneumoniae infections.

    Science.gov (United States)

    Hauck, C; Cober, E; Richter, S S; Perez, F; Salata, R A; Kalayjian, R C; Watkins, R R; Scalera, N M; Doi, Y; Kaye, K S; Evans, S; Fowler, V G; Bonomo, R A; van Duin, D

    2016-06-01

    Patients infected or colonized with carbapenem-resistant Klebsiella pneumoniae (CRKp) are often chronically and acutely ill, which results in substantial mortality unrelated to infection. Therefore, estimating excess mortality due to CRKp infections is challenging. The Consortium on Resistance against Carbapenems in K. pneumoniae (CRACKLE) is a prospective multicenter study. Here, patients in CRACKLE were evaluated at the time of their first CRKp bloodstream infection (BSI), pneumonia or urinary tract infection (UTI). A control cohort of patients with CRKp urinary colonization without CRKp infection was constructed. Excess hospital mortality was defined as mortality in cases after subtracting mortality in controls. In addition, the adjusted hazard ratios (aHR) for time-to-hospital-mortality at 30 days associated with infection compared with colonization were calculated in Cox proportional hazard models. In the study period, 260 patients with CRKp infections were included in the BSI (90 patients), pneumonia (49 patients) and UTI (121 patients) groups, who were compared with 223 controls. All-cause hospital mortality in controls was 12%. Excess hospital mortality was 27% in both patients with BSI and those with pneumonia. Excess hospital mortality was not observed in patients with UTI. In multivariable analyses, BSI and pneumonia, compared with controls, were associated with an increased hazard of hospital mortality (aHR 2.59, 95% CI 1.52-4.50 for BSI). Pneumonia is associated with the highest excess hospital mortality. Patients with BSI have slightly lower excess hospital mortality rates, whereas excess hospital mortality was not observed in hospitalized patients with UTI.
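
    The excess-mortality definition used above is simply the crude mortality in infected patients minus the crude mortality in colonized controls. The toy calculation below uses the group sizes quoted in the abstract but hypothetical death counts, chosen only to reproduce the reported 12% control mortality and roughly 27% excess; it is meant to show the arithmetic, not the study's actual counts:

```python
# Excess hospital mortality = case mortality - control mortality.
def excess_mortality(deaths_cases, n_cases, deaths_controls, n_controls):
    return deaths_cases / n_cases - deaths_controls / n_controls

# Group sizes from the abstract (90 BSI, 49 pneumonia, 223 controls);
# death counts below are hypothetical.
print(excess_mortality(35, 90, 27, 223))   # BSI group: ~0.27 excess
print(excess_mortality(19, 49, 27, 223))   # pneumonia group: ~0.27 excess
```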

  12. Submillimeter to centimeter excess emission from the Magellanic Clouds. II. On the nature of the excess

    CERN Document Server

    Bot, Caroline; Paradis, Déborah; Bernard, Jean-Philippe; Lagache, Guilaine; Israel, Frank P; Wall, William F

    2010-01-01

    Dust emission at submm to cm wavelengths is often simply the Rayleigh-Jeans tail of dust particles at thermal equilibrium and is used as a cold mass tracer in various environments including nearby galaxies. However, well-sampled spectral energy distributions of the nearby, star-forming Magellanic Clouds have a pronounced (sub-)millimeter excess (Israel et al., 2010). This study attempts to confirm the existence of such a millimeter excess above expected dust, free-free and synchrotron emission and to explore different possibilities for its origin. We model NIR to radio spectral energy distributions of the Magellanic Clouds with dust, free-free and synchrotron emission. A millimeter excess emission is confirmed above these components and its spectral shape and intensity are analysed in light of different scenarios: very cold dust, Cosmic Microwave Background (CMB) fluctuations, a change of the dust spectral index and spinning dust emission. We show that very cold dust or CMB fluctuations are very unlikely explanations.

  13. On the incidence of \\textit{WISE} infrared excess among solar analog, twin and sibling stars

    CERN Document Server

    Costa, Antônio D; Leão, Izan C; Lima, José E; da Silva, Danielly Freire; de Freitas, Daniel B; De Medeiros, José R

    2016-01-01

    This study presents a search for IR excess in the 3.4, 4.6, 12 and 22 $\mu$m bands in a sample of 216 targets, composed of solar sibling, twin and analog stars observed by the \textit{WISE} mission. In general, an infrared excess suggests the existence of warm dust around a star. We detected 12 $\mu$m and/or 22 $\mu$m excesses at the 3$\sigma$ level of confidence in five solar analog stars, corresponding to a frequency of 4.1% of the entire sample of solar analogs analyzed, and in one out of 29 solar sibling candidates, confirming previous studies. The estimation of the dust properties shows that the sources with infrared excesses possess circumstellar material with temperatures that, within the uncertainties, are similar to that of the material found in the asteroid belt in our solar system. No photospheric flux excess was identified at the W1 (3.4 $\mu$m) and W2 (4.6 $\mu$m) \textit{WISE} bands, indicating that, in the majority of stars of the present sample, no detectable dust is generated. Interesting...

  14. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Full Text Available Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  15. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.
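
    For readers unfamiliar with the baseline procedure that underwater calibration extends, the sketch below runs a standard in-air checkerboard calibration with OpenCV; an underwater system must additionally model, or implicitly absorb, the refraction introduced by the housing port and the water, as the review discusses. The image path and board dimensions are placeholders:

```python
# Standard in-air camera calibration from checkerboard images (OpenCV).
import glob
import cv2
import numpy as np

board_cols, board_rows = 9, 6                           # inner corners of the checkerboard
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2)   # square size = 1 unit

obj_points, img_points = [], []
image_size = None
for fname in glob.glob("calibration_images/*.png"):     # placeholder path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows), None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns RMS reprojection error, camera matrix, distortion coefficients and poses.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print(camera_matrix)
```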

  16. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM) applications. An improvement in contour accuracy is achieved by the introduction of a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error and thus reduce the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine, and satisfactory results are obtained.

  17. Implication of zinc excess on soil health.

    Science.gov (United States)

    Wyszkowska, Jadwiga; Boros-Lajszner, Edyta; Borowik, Agata; Baćmaga, Małgorzata; Kucharski, Jan; Tomkiel, Monika

    2016-01-01

    This study was undertaken to evaluate zinc's influence on the resistance of organotrophic bacteria, actinomyces, fungi, dehydrogenases, catalase and urease. The experiment was conducted in a greenhouse of the University of Warmia and Mazury (UWM) in Olsztyn, Poland. Plastic pots were filled with 3 kg of sandy loam with pHKCl - 7.0 each. The experimental variables were: zinc applied to soil at six doses: 100, 300, 600, 1,200, 2,400 and 4,800 mg of Zn(2+) kg(-1) in the form of ZnCl2 (zinc chloride), and species of plant: oat (Avena sativa L.) cv. Chwat and white mustard (Sinapis alba) cv. Rota. Soil without the addition of zinc served as the control. During the growing season, soil samples were subjected to microbiological analyses on experimental days 25 and 50 to determine the abundance of organotrophic bacteria, actinomyces and fungi, and the activity of dehydrogenases, catalase and urease, which provided a basis for determining the soil resistance index (RS). The physicochemical properties of soil were determined after harvest. The results of this study indicate that excessive concentrations of zinc have an adverse impact on microbial growth and the activity of soil enzymes. The resistance of organotrophic bacteria, actinomyces, fungi, dehydrogenases, catalase and urease decreased with an increase in the degree of soil contamination with zinc. Dehydrogenases were most sensitive and urease was least sensitive to soil contamination with zinc. Zinc also exerted an adverse influence on the physicochemical properties of soil and plant development. The growth of oat and white mustard plants was almost completely inhibited in response to the highest zinc doses of 2,400 and 4,800 mg Zn(2+) kg(-1).

  18. Factors influencing excessive daytime sleepiness in adolescents

    Directory of Open Access Journals (Sweden)

    Thiago de Souza Vilela

    2016-04-01

    Full Text Available Abstract Objective: Sleep deprivation in adolescents has lately become a health issue that tends to increase with higher stress prevalence, extenuating routines, and new technological devices that impair adolescents' bedtime. Therefore, this study aimed to assess the excessive sleepiness frequency and the factors that might be associated with it in this population. Methods: The cross-sectional study analyzed 531 adolescents aged 10–18 years old from two private schools and one public school. Five questionnaires were applied: the Cleveland Adolescent Sleepiness Questionnaire; the Sleep Disturbance Scale for Children; the Brazilian Economic Classification Criteria; the General Health and Sexual Maturation Questionnaire; and the Physical Activity Questionnaire. The statistical analyses were based on comparisons between schools and sleepiness and non-sleepiness groups, using linear correlation and logistic regression. Results: Sleep deprivation was present in 39% of the adolescents; sleep deficit was higher in private school adolescents (p < 0.001), and there was a positive correlation between age and sleep deficit (p < 0.001; r = 0.337). Logistic regression showed that older age (p = 0.002; PR: 1.21 [CI: 1.07–1.36]) and higher score level for sleep hyperhidrosis in the sleep disturbance scale (p = 0.02; PR: 1.16 [CI: 1.02–1.32]) were risk factors for worse degree of sleepiness. Conclusions: Sleep deficit appears to be a reality among adolescents; the results suggest a higher prevalence in students from private schools. Sleep deprivation is associated with older age in adolescents and possible presence of sleep disorders, such as sleep hyperhidrosis.

  19. Sub-millimeter to centimeter excess emission from the Magellanic Clouds. I. Global spectral energy distribution

    CERN Document Server

    Israel, F P; Raban, D; Reach, W T; Bot, C; Oonk, J B R; Ysard, N; Bernard, J P

    2010-01-01

    In order to reconstruct the global SEDs of the Magellanic Clouds over eight decades in spectral range, we combined literature flux densities representing the entire LMC and SMC respectively, and complemented these with maps extracted from the WMAP and COBE databases covering the missing 23-90 GHz (13-3.2 mm) range and the poorly sampled 1.25-250 THz (240-1.25 micron) range. We have discovered a pronounced excess of emission from both Magellanic Clouds, but especially the SMC, at millimeter and sub-millimeter wavelengths. We also determined accurate thermal radio fluxes and very low global extinctions for both LMC and SMC. Possible explanations are briefly considered, but as long as the nature of the excess emission is unknown, the total dust masses and gas-to-dust ratios of the Magellanic Clouds cannot reliably be determined.

  20. Accurate Switched-Voltage voltage averaging circuit

    OpenAIRE

    金光, 一幸; 松本, 寛樹

    2006-01-01

    Abstract This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. Performance of this circuit is verified by PSpice simulations.

  1. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency rendering.

  2. [Spectroscopy technique and ruminant methane emissions accurate inspecting].

    Science.gov (United States)

    Shang, Zhan-Huan; Guo, Xu-Sheng; Long, Rui-Jun

    2009-03-01

    The increase in atmospheric CH4 concentration will, on the one hand, directly cause climate change through the radiation process and, on the other hand, cause many changes in atmospheric chemical processes, indirectly causing climate change. The rapid growth of atmospheric methane has gained the attention of governments and scientists. Countries worldwide now treat the reduction of greenhouse gas emissions as an important task in dealing with global climate change, and monitoring of methane concentrations, in particular precision monitoring, is needed to provide a scientific basis for emission reduction measures. So far, CH4 emissions from different animal production systems have been studied extensively. However, the methane emissions by ruminants reported in the literature are only estimates. This is because many factors affect methane production in ruminants, many variables are associated with the techniques for measuring methane production, and the techniques developed so far cannot accurately determine the dynamics of methane emission by ruminants; there is therefore an urgent need to develop an accurate method for this purpose. Spectroscopic techniques are currently in use and are relatively more accurate and reliable. Various spectroscopic systems, such as modified infrared methane-measuring systems and laser and near-infrared sensor systems, are able to determine dynamic methane emissions by both housed and grazing ruminants. Spectroscopy is therefore an important methane-measuring technique and contributes to proposing methane reduction methods.

  3. Excess entropy production in quantum system: Quantum master equation approach

    OpenAIRE

    Nakajima, Satoshi; Tokura, Yasuhiro

    2016-01-01

    For open systems described by the quantum master equation (QME), we investigate the excess entropy production under quasistatic operations between nonequilibrium steady states. The average entropy production is composed of the time integral of the instantaneous steady entropy production rate and the excess entropy production. We define average entropy production rate using the average energy and particle currents, which are calculated by using the full counting statistics with QME. The excess...

  4. 46 CFR 154.550 - Excess flow valve: Bypass.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Excess flow valve: Bypass. 154.550 Section 154.550... and Process Piping Systems § 154.550 Excess flow valve: Bypass. If the excess flow valve allowed under § 154.532(b) has a bypass, the bypass must be of 1.0 mm (0.0394 in.) or less in diameter. Cargo Hose...

  5. Excess Weapons Plutonium Immobilization in Russia

    Energy Technology Data Exchange (ETDEWEB)

    Jardine, L.; Borisov, G.B.

    2000-04-15

    The joint goal of the Russian work is to establish a full-scale plutonium immobilization facility at a Russian industrial site by 2005. To achieve this requires that the necessary engineering and technical basis be developed in these Russian projects and the needed Russian approvals be obtained to conduct industrial-scale immobilization of plutonium-containing materials at a Russian industrial site by the 2005 date. This meeting and future work will provide the basis for joint decisions. Supporting R&D projects are being carried out at Russian Institutes that directly support the technical needs of Russian industrial sites to immobilize plutonium-containing materials. Special R&D on plutonium materials is also being carried out to support excess weapons disposition in Russia and the US, including nonproliferation studies of plutonium recovery from immobilization forms and accelerated radiation damage studies of the US-specified plutonium ceramic for immobilizing plutonium. This intriguing and extraordinary cooperation on certain aspects of the weapons plutonium problem is now progressing well and much work with plutonium has been completed in the past two years. Because much excellent and unique scientific and engineering technical work has now been completed in Russia in many aspects of plutonium immobilization, this meeting in St. Petersburg was both timely and necessary to summarize, review, and discuss these efforts among those who performed the actual work. The results of this meeting will help the US and Russia jointly define the future direction of the Russian plutonium immobilization program, and make it an even stronger and more integrated Russian program. The two objectives for the meeting were to: (1) Bring together the Russian organizations, experts, and managers performing the work into one place for four days to review and discuss their work with each other; and (2) Publish a meeting summary and a proceedings to compile reports of all the excellent

  6. Analysis of factors associated with excess weight in school children

    Science.gov (United States)

    Pinto, Renata Paulino; Nunes, Altacílio Aparecido; de Mello, Luane Marques

    2016-01-01

    Abstract Objective: To determine the prevalence of overweight and obesity in schoolchildren aged 10 to 16 years and its association with dietary and behavioral factors. Methods: Cross-sectional study that evaluated 505 adolescents using a structured questionnaire and anthropometric data. Data were analyzed with Student's t-test for independent samples and the Mann-Whitney test to compare means and medians, respectively, and the chi-square test for proportions. Prevalence ratios (PR) with 95% confidence intervals were used to estimate the degree of association between variables. Logistic regression was employed to adjust the estimates for confounding factors. A significance level of 5% was adopted for all analyses. Results: Excess weight was observed in 30.9% of the schoolchildren: 18.2% overweight and 12.7% obesity. There was no association between weight alterations and dietary/behavioral habits in the bivariate and multivariate analyses. However, associations were observed in relation to gender. Daily consumption of sweets [PR=0.75 (0.64-0.88)] and soft drinks [PR=0.82 (0.70-0.97)] was less frequent among boys; having lunch daily was slightly more often reported by boys [OR=1.11 (1.02-1.22)]. Physical activity practice (≥3 times/week) was more often mentioned by boys, and the association measures disclosed two-fold more physical activity in this group [PR=2.04 (1.56-2.67)] when compared to girls. Approximately 30% of boys and 40% of girls stated they did not perform activities requiring energy expenditure during free periods, with boys being 32% less idle than girls [PR=0.68 (0.60-0.76)]. Conclusions: A high prevalence of both overweight and obesity was observed, as well as unhealthy habits in the study population, regardless of the presence of weight alterations. Health promotion strategies in schools should be encouraged in order to promote healthy habits and behaviors among all students. PMID:27321919
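
    As a quick illustration of the association measure used above, the sketch below computes a prevalence ratio with a Wald-type 95% confidence interval on the log scale from 2x2 counts. The counts are hypothetical and not taken from the study; only the standard formulas PR = (a/n1)/(c/n0) and SE(ln PR) = sqrt(1/a - 1/n1 + 1/c - 1/n0) are assumed.

```python
import math

def prevalence_ratio(a, n1, c, n0, z=1.96):
    """Prevalence ratio of an outcome in exposed vs. unexposed groups,
    with a Wald confidence interval computed on the log scale.

    a : outcome-positive among exposed,   n1 : total exposed
    c : outcome-positive among unexposed, n0 : total unexposed
    """
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    return pr, (pr * math.exp(-z * se), pr * math.exp(z * se))

# Hypothetical counts, not the study's data: 120 of 240 boys vs. 80 of 265 girls
# report physical activity >= 3 times/week.
print(prevalence_ratio(120, 240, 80, 265))
```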

  7. Clinical Impact of Antimicrobial Resistance in European Hospitals : Excess Mortality and Length of Hospital Stay Related to Methicillin-Resistant Staphylococcus aureus Bloodstream Infections

    NARCIS (Netherlands)

    de Kraker, Marlieke E. A.; Wolkewitz, Martin; Davey, Peter G.; Grundmann, Hajo

    2011-01-01

    Antimicrobial resistance is threatening the successful management of nosocomial infections worldwide. Despite the therapeutic limitations imposed by methicillin-resistant Staphylococcus aureus (MRSA), its clinical impact is still debated. The objective of this study was to estimate the excess mortal

  8. Excessive erythrocytosis, chronic mountain sickness, and serum cobalt levels.

    Science.gov (United States)

    Jefferson, J Ashley; Escudero, Elizabeth; Hurtado, Maria-Elena; Pando, Jacqueline; Tapia, Rosario; Swenson, Erik R; Prchal, Josef; Schreiner, George F; Schoene, Robert B; Hurtado, Abdias; Johnson, Richard J

    2002-02-01

    In a subset of high-altitude dwellers, the appropriate erythrocytotic response becomes excessive and can result in chronic mountain sickness. We studied men with (study group) and without excessive erythrocytosis (packed-cell volume >65%) living in Cerro de Pasco, Peru (altitude 4300 m), and compared them with controls living in Lima, Peru (at sea-level). Toxic serum cobalt concentrations were detected in 11 of 21 (52%) study participants with excessive erythrocytosis, but were undetectable in high altitude or sea-level controls. In the mining community of Cerro de Pasco, cobalt toxicity might be an important contributor to excessive erythrocytosis.

  9. Premium subsidies for health insurance: excessive coverage vs. adverse selection.

    Science.gov (United States)

    Selden, T M

    1999-12-01

    The tax subsidy for employment-related health insurance can lead to excessive coverage and excessive spending on medical care. Yet, the potential also exists for adverse selection to result in the opposite problem-insufficient coverage and underconsumption of medical care. This paper uses the model of Rothschild and Stiglitz (R-S) to show that a simple linear premium subsidy can correct market failure due to adverse selection. The optimal linear subsidy balances welfare losses from excessive coverage against welfare gains from reduced adverse selection. Indeed, a capped premium subsidy may mitigate adverse selection without creating incentives for excessive coverage.

  10. A Comprehensive Census of Nearby Infrared Excess Stars

    CERN Document Server

    Cotten, Tara H

    2016-01-01

    The conclusion of the WISE mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and exploit all available data for future missions such as JWST. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and AllWISE catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3$\\sigma$ or 5$\\sigma$ significance of excess in the mid- and far-infrared. Through procedures including SED fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false-positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 `Prime' infrared excess stars and $\\geq$1200 `Reserved' stars. The main catalog of infrared excess stars are nearby, b...
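
    A minimal sketch of the kind of excess-significance cut described above: the observed flux is compared with the predicted photospheric flux in units of the combined uncertainty, and sources above 3 or 5 sigma are kept. The flux values, uncertainties, and the 22-micron band choice below are hypothetical, not values from the catalog.

```python
import math

def excess_significance(f_obs, sig_obs, f_phot, sig_phot):
    """Significance of an infrared excess: observed flux minus the predicted
    photospheric flux, in units of the combined (quadrature) uncertainty."""
    return (f_obs - f_phot) / math.sqrt(sig_obs**2 + sig_phot**2)

# Hypothetical 22-micron measurement vs. a photosphere-model prediction (mJy).
chi = excess_significance(f_obs=38.0, sig_obs=2.5, f_phot=21.0, sig_phot=1.5)
print(chi, chi > 5.0)   # flag as a 5-sigma excess candidate
```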

  11. Excess relative risk of solid cancer mortality after prolonged exposure to naturally occurring high background radiation in Yangjiang, China

    Energy Technology Data Exchange (ETDEWEB)

    Sun Quanfu; Tao Zufan [Ministry of Health, Beijing (China). Lab. of Industrial Hygiene; Akiba, Suminori (and others)

    2000-10-01

    A study was made on cancer mortality in the high-background radiation areas of Yangjiang, China. Based on hamlet-specific environmental doses and sex- and age-specific occupancy factors, cumulative doses were calculated for each subject. In this article, we describe how the indirect estimation was made on individual dose and the methodology used to estimate radiation risk. Then, assuming a linear dose response relationship and using cancer mortality data for the period 1979-1995, we estimate the excess relative risk per Sievert for solid cancer to be -0.11 (95% CI, -0.67, 0.69). Also, we estimate the excess relative risks of four leading cancers in the study areas, i.e., cancers of the liver, nasopharynx, lung and stomach. In addition, we evaluate the effects of possible bias on our risk estimation. (author)
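
    For illustration, the sketch below fits the linear excess relative risk model RR(d) = 1 + beta*d to grouped data by inverse-variance-weighted least squares through the origin. The dose groups, relative risks, and variances are invented for the example and are not the Yangjiang data; the study itself used cancer mortality data and a fuller likelihood-based analysis.

```python
import numpy as np

# Hypothetical grouped data (not the Yangjiang results): mean cumulative dose per
# group (Sv) and the observed relative risk of solid-cancer mortality in that group.
dose = np.array([0.05, 0.15, 0.30, 0.50])
rr   = np.array([1.00, 0.97, 0.95, 0.93])
var  = np.array([0.02, 0.03, 0.04, 0.06])   # assumed variances of the RR estimates

# Linear ERR model RR(d) = 1 + beta*d fitted by weighted least squares on (d, RR-1).
w = 1.0 / var
beta = np.sum(w * dose * (rr - 1.0)) / np.sum(w * dose**2)
se = np.sqrt(1.0 / np.sum(w * dose**2))
print(f"ERR per Sv = {beta:.2f} (95% CI {beta - 1.96*se:.2f}, {beta + 1.96*se:.2f})")
```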

  12. Excess enthalpy, density, and speed of sound determination for the ternary mixture (methyl tert-butyl ether + 1-butanol + n-hexane)

    Energy Technology Data Exchange (ETDEWEB)

    Mascato, Eva [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain); Mariano, Alejandra [Laboratorio de Fisicoquimica, Departamento de Quimica, Facultad de Ingenieria, Universidad Nacional del Comahue, 8300 Neuquen (Argentina); Pineiro, Manuel M. [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain)], E-mail: mmpineiro@uvigo.es; Legido, Jose Luis [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain); Paz Andrade, M.I. [Departamento de Fisica Aplicada, Facultade de Fisica, Universidade de Santiago de Compostela, E-15706 Santiago de Compostela (Spain)

    2007-09-15

    Density, ({rho}), and speed of sound, (u), from T = 288.15 to T = 308.15 K, and excess molar enthalpies, (h{sup E}) at T = 298.15 K, have been measured over the entire composition range for (methyl tert-butyl ether + 1-butanol + n-hexane). In addition, excess molar volumes, V{sup E}, and excess isentropic compressibility, {kappa}{sub s}{sup E}, were calculated from experimental data. Finally, experimental excess enthalpies results are compared with the estimations obtained by applying the group-contribution models of UNIFAC (in the versions of Dang and Tassios, Larsen et al., Gmehling et al.), and DISQUAC.
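
    A small sketch of the derived-property calculations mentioned above, assuming the usual relations V^E = sum(x_i M_i)/rho_mix - sum(x_i M_i/rho_i) and kappa_s = 1/(rho u^2). The compositions, densities, and speed of sound below are rough illustrative values, not the measured data, and the excess compressibility additionally requires an ideal-mixing reference that is omitted here.

```python
# Excess molar volume and isentropic compressibility of a ternary mixture from
# density and speed of sound (hypothetical values, not the paper's data).
x        = [0.3, 0.3, 0.4]                   # mole fractions: MTBE, 1-butanol, n-hexane
M        = [88.15e-3, 74.12e-3, 86.18e-3]    # molar masses, kg/mol
rho_pure = [735.0, 806.0, 655.0]             # approximate pure densities at 298.15 K, kg/m^3
rho_mix, u_mix = 712.0, 1080.0               # assumed mixture density (kg/m^3), speed of sound (m/s)

v_ideal  = sum(xi * Mi / ri for xi, Mi, ri in zip(x, M, rho_pure))
v_real   = sum(xi * Mi for xi, Mi in zip(x, M)) / rho_mix
v_excess = v_real - v_ideal                   # m^3/mol
kappa_s  = 1.0 / (rho_mix * u_mix**2)         # Pa^-1 (Laplace equation)
print(f"V_E = {v_excess*1e6:.3f} cm^3/mol, kappa_s = {kappa_s*1e12:.0f} TPa^-1")
```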

  13. An Analytic Method for Measuring Accurate Fundamental Frequency Components

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Soon Ryul; Park Jong Keun [Seoul National University, Seoul(Korea); Kang, Sang Hee [Myongji University, Seoul (Korea)

    2002-04-01

    This paper proposes an analytic method for measuring the accurate fundamental frequency component of a fault current signal distorted with a DC-offset, a characteristic frequency component, and harmonics. The proposed algorithm is composed of four stages: sine filter, linear filter, Prony's method, and measurement. The sine filter and the linear filter eliminate harmonics and the fundamental frequency component, respectively. Then Prony's method is used to estimate the parameters of the DC-offset and the characteristic frequency component. Finally, the fundamental frequency component is measured by compensating the sine-filtered signal with the estimated parameters. The performance evaluation of the proposed method is presented for a-phase to ground faults on a 345 kV 200 km overhead transmission line. The EMTP is used to generate fault current signals under different fault locations and fault inception angles. It is shown that the analytic method accurately measures the fundamental frequency component regardless of the characteristic frequency component as well as the DC-offset.(author). 19 refs., 4 figs., 4 tabs.
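
    The following is not the paper's four-stage sine-filter/Prony algorithm, but a simpler least-squares illustration of the underlying point: if the decaying DC offset is modelled alongside the fundamental, the fundamental phasor can be recovered without bias. The sampling rate, system frequency, time constant, and amplitudes are assumed for the example.

```python
import numpy as np

# Synthetic fault-like current: 50 Hz fundamental plus a decaying DC offset plus noise.
fs, f0, tau = 4000.0, 50.0, 0.05           # assumed sampling rate, system frequency, DC time constant
t = np.arange(0.0, 0.1, 1.0 / fs)
rng = np.random.default_rng(2)
signal = (10.0 * np.cos(2 * np.pi * f0 * t - 0.7)
          + 6.0 * np.exp(-t / tau)
          + 0.1 * rng.standard_normal(t.size))

# Design matrix: cosine and sine of the fundamental plus the decaying exponential term.
A = np.column_stack([np.cos(2 * np.pi * f0 * t),
                     np.sin(2 * np.pi * f0 * t),
                     np.exp(-t / tau)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
amplitude = np.hypot(coef[0], coef[1])
print(f"estimated fundamental amplitude: {amplitude:.2f} (true 10.00)")
```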

  14. Estimation of physical parameters in induction motors

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1994-01-01

    Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors......Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors...

  15. Excess chemical potential of small solutes across water--membrane and water--hexane interfaces

    Science.gov (United States)

    Pohorille, A.; Wilson, M. A.

    1996-01-01

    The excess chemical potentials of five small, structurally related solutes, CH4, CH3F, CH2F2, CHF3, and CF4, across the water-glycerol 1-monooleate bilayer and water-hexane interfaces were calculated at 300, 310, and 340 K using the particle insertion method. The excess chemical potentials of nonpolar molecules (CH4 and CF4) decrease monotonically or nearly monotonically from water to a nonpolar phase. In contrast, for molecules that possess permanent dipole moments (CH3F, CH2F2, and CHF3), the excess chemical potentials exhibit an interfacial minimum that arises from superposition of two monotonically and oppositely changing contributions: electrostatic and nonelectrostatic. The nonelectrostatic term, dominated by the reversible work of creating a cavity that accommodates the solute, decreases, whereas the electrostatic term increases across the interface from water to the membrane interior. In water, the dependence of this term on the dipole moment is accurately described by second order perturbation theory. To achieve the same accuracy at the interface, third order terms must also be included. In the interfacial region, the molecular structure of the solvent influences both the excess chemical potential and solute orientations. The excess chemical potential across the interface increases with temperature, but this effect is rather small. Our analysis indicates that a broad range of small, moderately polar molecules should be surface active at the water-membrane and water-oil interfaces. The biological and medical significance of this result, especially in relation to the mechanism of anesthetic action, is discussed.
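
    A minimal sketch of the particle insertion (Widom) estimate used above, mu_ex = -kT ln<exp(-dU/kT)>, averaged over random test-particle insertions into stored configurations. The configuration list, box size, and the insertion-energy function are placeholders to be supplied by the caller; with dU = 0 everywhere the estimate correctly returns zero (ideal gas).

```python
import numpy as np

rng = np.random.default_rng(0)

def widom_mu_excess(configs, box, n_insert, u_insertion, kT):
    """Widom test-particle estimate of the excess chemical potential:
    mu_ex = -kT * ln < exp(-dU/kT) >, averaged over random insertions of a
    ghost particle into stored configurations. `u_insertion(config, trial, box)`
    must return the energy change dU of adding the test particle."""
    boltzmann_factors = []
    for config in configs:
        for _ in range(n_insert):
            trial = rng.uniform(0.0, box, size=3)
            boltzmann_factors.append(np.exp(-u_insertion(config, trial, box) / kT))
    return -kT * np.log(np.mean(boltzmann_factors))

# Sanity check with an ideal gas (dU = 0 for every insertion): mu_ex must be 0.
ideal = lambda config, trial, box: 0.0
configs = [rng.uniform(0.0, 10.0, size=(100, 3)) for _ in range(5)]   # dummy snapshots
print(widom_mu_excess(configs, box=10.0, n_insert=1000, u_insertion=ideal, kT=2.5))
```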

  16. How dusty is alpha Centauri? Excess or non-excess over the infrared photospheres of main-sequence stars

    CERN Document Server

    Wiegert, J; Thébault, P; Olofsson, G; Mora, A; Bryden, G; Marshall, J P; Eiroa, C; Montesinos, B; Ardila, D; Augereau, J C; Aran, A Bayo; Danchi, W C; del Burgo, C; Ertel, S; Fridlund, M C W; Hajigholi, M; Krivov, A V; Pilbratt, G L; Roberge, A; White, G J

    2014-01-01

    [Abridged] Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby binary alpha Centauri have higher than solar metallicities, which is thought to promote giant planet formation. We aim to determine the level of emission from debris in the alpha Cen system. Having already detected the temperature minimum, Tmin, of alpha Cen A, we here attempt to do so also for the companion alpha Cen B. Using the alpha Cen stars as templates, we study possible effects Tmin may have on the detectability of unresolved dust discs around other stars. We use Herschel and APEX photometry to determine the stellar spectral energy distributions. In addition, we use APEX for spectral line mapping to study the complex background around alpha Cen seen in the photometric images. Models of stellar atmospheres and discs are used to estimate the amount of debris around these stars. For solar-type stars, a fractional dust luminosity f_d ~ 2e-7 could account for SEDs that do not exhibit the Tmin-effect. Slight excesses ...

  17. Analysis of Empirical Software Effort Estimation Models

    CERN Document Server

    Basha, Saleem

    2010-01-01

    Reliable effort estimation remains an ongoing challenge to software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation of software is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software. The credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects. Generalization from such limited experience is an inherently under-constrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction, and as the term indicates, a prediction never becomes an actual. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study the empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...

  18. Accurate colorimetric feedback for RGB LED clusters

    Science.gov (United States)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
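
    As a sketch of the first- or second-order temperature correction mentioned above, the snippet below fits a quadratic to relative LED output versus junction temperature and evaluates it at a measured temperature. The calibration points are invented for the example and are not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration: peak-normalized radiant flux of a red AlInGaP LED
# measured at several junction temperatures (not the paper's data).
tj   = np.array([25.0, 35.0, 45.0, 55.0, 65.0, 70.0])   # deg C
flux = np.array([1.00, 0.93, 0.86, 0.79, 0.71, 0.67])   # relative output

# Second-order model flux(Tj) ~ a*Tj^2 + b*Tj + c, as the abstract suggests.
coeffs  = np.polyfit(tj, flux, deg=2)
predict = np.poly1d(coeffs)
print(predict(50.0))   # value to feed into the colorimetric feedback loop at the measured Tj
```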

  19. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  20. A Unified Theory of Rainfall Extremes, Rainfall Excesses, and IDF Curves

    Science.gov (United States)

    Veneziano, D.; Yoon, S.

    2012-04-01

    Extreme rainfall events are a key component of hydrologic risk management and design. Yet, a consistent mathematical theory of such extremes remains elusive. This study aims at laying new statistical foundations for such a theory. The quantities of interest are the distribution of the annual maximum, the distribution of the excess above a high threshold z, and the intensity-duration-frequency (IDF) curves. Traditionally, the modeling of annual maxima and excesses is based on extreme value (EV) and extreme excess (EE) theories. These theories establish that the maximum of n iid variables is attracted as n →∞ to a generalized extreme value (GEV) distribution with a certain index k and the distribution of the excess is attracted as z →∞ to a generalized Pareto distribution with the same index. The empirical value of k tends to decrease as the averaging duration d increases. To a first approximation, the IDF intensities scale with d and the return period T . Explanations for this approximate scaling behavior and theoretical predictions of the scaling exponents have emerged over the past few years. This theoretical work has been largely independent of that on the annual maxima and the excesses. Deviations from exact scaling include a tendency of the IDF curves to converge as d and T increase. To bring conceptual clarity and explain the above observations, we analyze the extremes of stationary multifractal measures, which provide good representations of rainfall within storms. These extremes follow from large deviation theory rather than EV/EE theory. A unified framework emerges that (a) encompasses annual maxima, excesses and IDF values without relying on EV or EE asymptotics, (b) predicts the index k and the IDF scaling exponents, (c) explains the dependence of k on d and the deviations from exact scaling of the IDF curves, and (d) explains why the empirical estimates of k tend to be positive (in the Frechet range) while, based on frequently assumed marginal
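
    The block below sketches the two classical fits discussed above on a synthetic rainfall series: a GEV fit to annual maxima and a generalized Pareto fit to excesses over a high threshold, whose shape parameters should roughly agree. The synthetic gamma-distributed series and the 99.9% threshold are assumptions for the example; note that scipy's GEV shape parameter c equals -k in the extreme-value-index convention used above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic hourly-rainfall series (mm) over 40 "years"; real data would be used in practice.
rain = rng.gamma(shape=0.4, scale=6.0, size=40 * 365 * 24)
annual_max = rain.reshape(40, -1).max(axis=1)

# Annual maxima -> GEV fit.
c_gev, loc_gev, scale_gev = stats.genextreme.fit(annual_max)

# Excesses above a high threshold -> generalized Pareto fit.
z = np.quantile(rain, 0.999)
excess = rain[rain > z] - z
c_gpd, _, scale_gpd = stats.genpareto.fit(excess, floc=0.0)

print(f"GEV index {-c_gev:.3f} vs GPD index {c_gpd:.3f} (should roughly agree)")
```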

  1. Trends in the prevalence of excess dietary sodium intake - United States, 2003-2010.

    Science.gov (United States)

    2013-12-20

    Excess sodium intake can lead to hypertension, the primary risk factor for cardiovascular disease, which is the leading cause of U.S. deaths. Monitoring the prevalence of excess sodium intake is essential to provide the evidence for public health interventions and to track reductions in sodium intake, yet few reports exist. Reducing population sodium intake is a national priority, and monitoring the amount of sodium consumed adjusted for energy intake (sodium density or sodium in milligrams divided by calories) has been recommended because a higher sodium intake is generally accompanied by a higher calorie intake from food. To describe the most recent estimates and trends in excess sodium intake, CDC analyzed 2003-2010 data from the National Health and Nutrition Examination Survey (NHANES) of 34,916 participants aged ≥1 year. During 2007-2010, the prevalence of excess sodium intake, defined as intake above the Institute of Medicine tolerable upper intake levels (1,500 mg/day at ages 1-3 years; 1,900 mg at 4-8 years; 2,200 mg at 9-13 years; and 2,300 mg at ≥14 years) (3), ranged by age group from 79.1% to 95.4%. Small declines in the prevalence of excess sodium intake occurred during 2003-2010 in children aged 1-13 years, but not in adolescents or adults. Mean sodium intake declined slightly among persons aged ≥1 year, whereas sodium density did not. Despite slight declines in some groups, the majority of the U.S. population aged ≥1 year consumes excess sodium.
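
    A small sketch of the classification rule described above: a respondent's usual daily sodium intake is flagged as excess if it exceeds the age-specific IOM tolerable upper intake level. The example respondents are hypothetical.

```python
def excess_sodium(age_years, sodium_mg):
    """True if daily sodium intake exceeds the IOM tolerable upper intake level
    used in the analysis (1,500 mg at ages 1-3; 1,900 mg at 4-8; 2,200 mg at 9-13;
    2,300 mg at >=14 years)."""
    if age_years < 1:
        raise ValueError("thresholds are defined for ages >= 1 year")
    if age_years <= 3:
        ul = 1500
    elif age_years <= 8:
        ul = 1900
    elif age_years <= 13:
        ul = 2200
    else:
        ul = 2300
    return sodium_mg > ul

# Hypothetical respondents: (age in years, sodium intake in mg/day).
sample = [(2, 1700), (10, 2100), (35, 3400)]
print([excess_sodium(a, s) for a, s in sample])   # [True, False, True]
```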

  2. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.

  3. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  4. Accurate Control of Josephson Phase Qubits

    Science.gov (United States)

    2016-04-14

    Accurate control of Josephson phase qubits, Matthias Steffen, John M. Martinis, and Isaac L. Chuang, Physical Review B 68, 224518 (2003). Center for Bits and Atoms and Department of Physics, MIT, Cambridge, Massachusetts 02139, USA; Solid State and Photonics Laboratory, Stanford University.

  5. Accurate guitar tuning by cochlear implant musicians.

    Science.gov (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. Electric analysis showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  6. Synthesizing Accurate Floating-Point Formulas

    OpenAIRE

    Ioualalen, Arnault; Martel, Matthieu

    2013-01-01

    International audience; Many critical embedded systems perform floating-point computations yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep...

  7. Excess of {sup 236}U in the northwest Mediterranean Sea

    Energy Technology Data Exchange (ETDEWEB)

    Chamizo, E., E-mail: echamizo@us.es [Centro Nacional de Aceleradores, Universidad de Sevilla, Consejo Superior de Investigaciones Científicas, Junta de Andalucía, Thomas Alva Edison 7, 41092 Seville (Spain); López-Lora, M., E-mail: mlopezlora@us.es [Centro Nacional de Aceleradores, Universidad de Sevilla, Consejo Superior de Investigaciones Científicas, Junta de Andalucía, Thomas Alva Edison 7, 41092 Seville (Spain); Bressac, M., E-mail: matthieu.bressac@utas.edu.au [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco); Institute for Marine and Antarctic Studies, University of Tasmania, Hobart, TAS (Australia); Levy, I., E-mail: I.N.Levy@iaea.org [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco); Pham, M.K., E-mail: M.Pham@iaea.org [IAEA-Environment Laboratories, Monte Carlo 98000 (Monaco)

    2016-09-15

    In this work, we present first {sup 236}U results in the northwestern Mediterranean. {sup 236}U is studied in a seawater column sampled at DYFAMED (Dynamics of Atmospheric Fluxes in the Mediterranean Sea) station (Ligurian Sea, 43°25′N, 07°52′E). The obtained {sup 236}U/{sup 238}U atom ratios in the dissolved phase, ranging from about 2 × 10{sup −9} at 100 m depth to about 1.5 × 10{sup −9} at 2350 m depth, indicate that anthropogenic {sup 236}U dominates the whole seawater column. The corresponding deep-water column inventory (12.6 ng/m{sup 2} or 32.1 × 10{sup 12} atoms/m{sup 2}) exceeds by a factor of 2.5 the expected one for global fallout at similar latitudes (5 ng/m{sup 2} or 13 × 10{sup 12} atoms/m{sup 2}), evidencing the influence of local or regional {sup 236}U sources in the western Mediterranean basin. On the other hand, the input of {sup 236}U associated to Saharan dust outbreaks is evaluated. An additional {sup 236}U annual deposition of about 0.2 pg/m{sup 2} based on the study of atmospheric particles collected in Monaco during different Saharan dust intrusions is estimated. The obtained results in the corresponding suspended solids collected at DYFAMED station indicate that about 64% of that {sup 236}U stays in solution in seawater. Overall, this source accounts for about 0.1% of the {sup 236}U inventory excess observed at DYFAMED station. The influence of the so-called Chernobyl fallout and the radioactive effluents produced by the different nuclear installations allocated to the Mediterranean basin, might explain the inventory gap, however, further studies are necessary to come to a conclusion about its origin. - Highlights: • First {sup 236}U results in the northwest Mediterranean Sea are reported. • Anthropogenic {sup 236}U dominates the whole seawater column at DYFAMED station. • {sup 236}U deep-water column inventory exceeds by a factor of 2.5 the global fallout one. • Saharan dust intrusions are responsible for an annual

  8. 24 μm excesses of hot WDs - Evidence of dust disks?

    Energy Technology Data Exchange (ETDEWEB)

    Bilikova, Jana; Chu, Y-H; Gruendl, Robert [Astronomy Department, University of Illinois, 1002 W. Green St., Urbana, IL 61801 (United States); Su, Kate [Steward Observatory, University of Arizona, 933 N. Cherry Ave., Tuscon, AZ 85721 (United States); Rauch, Thomas [Institute for Astronomy and Astrophysics, Kepler Center for Astro and Particle Physics, Eberhard Karls University, Tuebingen (Germany); Marco, Orsola De [American Museum of Natural History, Department of Astrophysics, Central Park West at 79th St., New York, NY 10024 (United States); Volk, Kevin, E-mail: jbiliko2@astro.uiuc.ed [Gemini Observatory, Northers Operations Center, 670 N. A ohoku Place, Hilo, HI 96720 (United States)

    2009-06-01

    Spitzer Space Telescope observations of the Helix Nebula's hot (T{sub eff} approx 110 000 K) central star revealed mid-IR excess emission consistent with a continuum emission from a dust disk located at 35-150 AU from the central white dwarf (WD), and the dust is most likely produced by collisions among Kuiper Belt-like objects (Su et al. 2007). To determine how common such dust disks are, we have carried out a Spitzer 24 μm survey of 72 hot WDs, and detected at least 7 WDs that exhibit clear IR excess, all of them still surrounded by planetary nebulae (PNe). Inspired by the prevalence of PN environment for hot WDs showing IR excesses, we have surveyed the Spitzer archive for more central stars of PN (CSPNs) with IR excesses; the search yields four cases in which CSPNs show excesses in 3.6-8.0 μm, and one additional case of 24 μm excess. We present the results of these two searches for dust-disk candidates, and discuss scenarios other than KBO collisions that need to be considered in explaining the observed near and/or mid-IR excess emission. These scenarios include unresolved companions, binary post-AGB evolution, and unresolved compact nebulosity. We describe planned follow-up observations aiming to help us distinguish between different origins of observed IR excesses.

  9. 12 CFR 740.3 - Advertising of excess insurance.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Advertising of excess insurance. 740.3 Section... ACCURACY OF ADVERTISING AND NOTICE OF INSURED STATUS § 740.3 Advertising of excess insurance. Any advertising that mentions share or savings account insurance provided by a party other than the NCUA...

  10. On Infrared Excesses Associated With Li-Rich K Giants

    CERN Document Server

    Rebull, Luisa M; Gibbs, John C; Deeb, J Elin; Larsen, Estefania; Black, David V; Altepeter, Shailyn; Bucksbee, Ethan; Cashen, Sarah; Clarke, Matthew; Datta, Ashwin; Hodgson, Emily; Lince, Megan

    2015-01-01

    Infrared (IR) excesses around K-type red giants (RGs) have previously been discovered using IRAS data, and past studies have suggested a link between RGs with overabundant Li and IR excesses, implying the ejection of circumstellar shells or disks. We revisit the question of IR excesses around RGs using higher spatial resolution IR data, primarily from WISE. Our goal was to elucidate the link between three unusual RG properties: fast rotation, enriched Li, and IR excess. We have 316 targets thought to be K giants, about 40% of which we take to be Li-rich. In 24 cases with previous detections of IR excess at low spatial resolution, we believe that source confusion is playing a role, in that either (a) the source that is bright in the optical is not responsible for the IR flux, or (b) there is more than one source responsible for the IR flux as measured in IRAS. We looked for IR excesses in the remaining sources, identifying 28 that have significant IR excesses by ~20 um (with possible excesses for 2 additional ...

  11. Criminal Liability of Managers for Excessive Risk-Taking?

    NARCIS (Netherlands)

    Tosza, S.T.

    2016-01-01

    The aim of the thesis was to analyse and evaluate the criminalisation of excessively risky decisions taken by managers of limited liability companies. The potentially disastrous consequences of excessive risk-taking were powerfully highlighted by the most recent financial crunch, although its danger

  12. Teachers' Knowledge of Anxiety and Identification of Excessive Anxiety in

    Science.gov (United States)

    Headley, Clea; Campbell, Marilyn A.

    2013-01-01

    This study examined primary school teachers' knowledge of anxiety and excessive anxiety symptoms in children. Three hundred and fifteen primary school teachers completed a questionnaire exploring their definitions of anxiety and the indications they associated with excessive anxiety in primary school children. Results showed that teachers had an…

  13. 26 CFR 1.162-8 - Treatment of excessive compensation.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Treatment of excessive compensation. 1.162-8...-8 Treatment of excessive compensation. The income tax liability of the recipient in respect of an amount ostensibly paid to him as compensation, but not allowed to be deducted as such by the payor,...

  14. Hydration of proteins: excess partial volumes of water and proteins.

    Science.gov (United States)

    Sirotkin, Vladimir A; Komissarov, Igor A; Khadiullina, Aigul V

    2012-04-05

    High precision densitometry was applied to study the hydration of proteins. The hydration process was analyzed by the simultaneous monitoring of the excess partial volumes of water and the proteins in the entire range of water content. Five unrelated proteins (lysozyme, chymotrypsinogen A, ovalbumin, human serum albumin, and β-lactoglobulin) were used as models. The obtained data were compared with the excess partial enthalpies of water and the proteins. It was shown that the excess partial quantities are very sensitive to the changes in the state of water and proteins. At the lowest water weight fractions (w(1)), the changes of the excess functions can mainly be attributed to water addition. A transition from the glassy to the flexible state of the proteins is accompanied by significant changes in the excess partial quantities of water and the proteins. This transition appears at a water weight fraction of 0.06 when charged groups of proteins are covered. Excess partial quantities reach their fully hydrated values at w(1) > 0.5 when coverage of both polar and weakly interacting surface elements is complete. At the highest water contents, water addition has no significant effect on the excess quantities. At w(1) > 0.5, changes in the excess functions can solely be attributed to changes in the state of the proteins.

  15. 19 CFR 10.625 - Refunds of excess customs duties.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Refunds of excess customs duties. 10.625 Section 10.625 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... and Apparel Goods § 10.625 Refunds of excess customs duties. (a) Applicability. Section 205 of...

  16. 41 CFR 101-27.103 - Acquisition of excess property.

    Science.gov (United States)

    2010-07-01

    ... MANAGEMENT 27.1-Stock Replenishment § 101-27.103 Acquisition of excess property. Except for inventories... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Acquisition of excess property. 101-27.103 Section 101-27.103 Public Contracts and Property Management Federal...

  17. 30 CFR 75.323 - Actions for excessive methane.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Actions for excessive methane. 75.323 Section... excessive methane. (a) Location of tests. Tests for methane concentrations under this section shall be made.... (1) When 1.0 percent or more methane is present in a working place or an intake air course,...

  18. Aerophagia : Excessive Air Swallowing Demonstrated by Esophageal Impedance Monitoring

    NARCIS (Netherlands)

    Hemmink, Gerrit J. M.; Weusten, Bas L. A. M.; Bredenoord, Albert J.; Timmer, Robin; Smout, Andre J. P. M.

    2009-01-01

    BACKGROUND & AIMS: Patients with aerophagia suffer from the presence of an excessive volume of intestinal gas, which is thought to result from excessive air ingestion. However, this has not been shown thus far. The aim of this study was therefore to assess swallowing and air swallowing frequencies i

  19. Conversion of Excess Coal Gas to Dimethyl Ether in Steel Works

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    With the technical progress of the metallurgical industry, more excess gas will be produced in steel works. The feasibility of producing dimethyl ether (DME) by gas synthesis was discussed, focusing on marketing, energy balance, process design, economic evaluation, and environmental protection. DME was considered to be a new way to utilize excess coal gas in steel works.

  20. The Role of Alcohol Advertising in Excessive and Hazardous Drinking.

    Science.gov (United States)

    Atkin, Charles K.; And Others

    1983-01-01

    Examined the influence of advertising on excessive and dangerous drinking in a survey of 1,200 adolescents and young adults who were shown advertisements depicting excessive consumption themes. Results indicated that advertising stimulates consumption levels, which leads to heavy drinking and drinking in dangerous situations. (JAC)

  1. A Practical Approach For Excess Bandwidth Distribution for EPONs

    KAUST Repository

    Elrasad, Amr

    2014-03-09

    This paper introduces a novel approach called Delayed Excess Scheduling (DES), which practically reuses the excess bandwidth in EPON systems. DES is suitable for industrial deployment as it requires no timing constraint and achieves better performance compared to previously reported schemes.

  2. ON INFRARED EXCESSES ASSOCIATED WITH Li-RICH K GIANTS

    Energy Technology Data Exchange (ETDEWEB)

    Rebull, Luisa M. [Spitzer Science Center (SSC) and Infrared Science Archive (IRSA), Infrared Processing and Analysis Center - IPAC, 1200 E. California Blvd., California Institute of Technology, Pasadena, CA 91125 (United States); Carlberg, Joleen K. [NASA Goddard Space Flight Center, Code 667, Greenbelt, MD 20771 (United States); Gibbs, John C.; Cashen, Sarah; Datta, Ashwin; Hodgson, Emily; Lince, Megan [Glencoe High School, 2700 NW Glencoe Rd., Hillsboro, OR 97124 (United States); Deeb, J. Elin [Bear Creek High School, 9800 W. Dartmouth Pl., Lakewood, CO 80227 (United States); Larsen, Estefania; Altepeter, Shailyn; Bucksbee, Ethan; Clarke, Matthew [Millard South High School, 14905 Q St., Omaha, NE 68137 (United States); Black, David V., E-mail: rebull@ipac.caltech.edu [Walden School of Liberal Arts, 4230 N. University Ave., Provo, UT 84604 (United States)

    2015-10-15

    Infrared (IR) excesses around K-type red giants (RGs) have previously been discovered using Infrared Astronomy Satellite (IRAS) data, and past studies have suggested a link between RGs with overabundant Li and IR excesses, implying the ejection of circumstellar shells or disks. We revisit the question of IR excesses around RGs using higher spatial resolution IR data, primarily from the Wide-field Infrared Survey Explorer. Our goal was to elucidate the link between three unusual RG properties: fast rotation, enriched Li, and IR excess. Our sample of RGs includes those with previous IR detections, a sample with well-defined rotation and Li abundance measurements with no previous IR measurements, and a large sample of RGs asserted to be Li-rich in the literature; we have 316 targets thought to be K giants, about 40% of which we take to be Li-rich. In 24 cases with previous detections of IR excess at low spatial resolution, we believe that source confusion is playing a role, in that either (a) the source that is bright in the optical is not responsible for the IR flux, or (b) there is more than one source responsible for the IR flux as measured in IRAS. We looked for IR excesses in the remaining sources, identifying 28 that have significant IR excesses by ∼20 μm (with possible excesses for 2 additional sources). There appears to be an intriguing correlation in that the largest IR excesses are all in Li-rich K giants, though very few Li-rich K giants have IR excesses (large or small). These largest IR excesses also tend to be found in the fastest rotators. There is no correlation of IR excess with the carbon isotopic ratio, {sup 12}C/{sup 13}C. IR excesses by 20 μm, though relatively rare, are at least twice as common among our sample of Li-rich K giants. If dust shell production is a common by-product of Li enrichment mechanisms, these observations suggest that the IR excess stage is very short-lived, which is supported by theoretical calculations. Conversely, the

  3. On Infrared Excesses Associated with Li-Rich K Giants

    Science.gov (United States)

    Rebull, Luisa M.; Carlberg, Joleen K.; Gibbs, John C.; Deeb, J. Elin; Larsen, Estefania; Black, David V.; Altepeter, Shailyn; Bucksbee, Ethan; Cashen, Sarah; Clarke, Matthew; Datta, Ashwin; Hodgson, Emily; Lince, Megan

    2015-01-01

    Infrared (IR) excesses around K-type red giants (RGs) have previously been discovered using Infrared Astronomy Satellite (IRAS) data, and past studies have suggested a link between RGs with overabundant lithium and IR excesses, implying the ejection of circumstellar shells or disks. We revisit the question of IR excesses around RGs using higher spatial resolution IR data, primarily from the Wide-field Infrared Survey Explorer. Our goal was to elucidate the link between three unusual RG properties: fast rotation, enriched lithium, and IR excess. Our sample of RGs includes those with previous IR detections, a sample with well-defined rotation and lithium abundance measurements with no previous IR measurements, and a large sample of RGs asserted to be lithium-rich in the literature; we have 316 targets thought to be K giants, about 40% of which we take to be Li-rich. In 24 cases with previous detections of IR excess at low spatial resolution, we believe that source confusion is playing a role, in that either (a) the source that is bright in the optical is not responsible for the IR flux, or (b) there is more than one source responsible for the IR flux as measured in IRAS. We looked for IR excesses in the remaining sources, identifying 28 that have significant IR excesses by approximately 20 micrometers (with possible excesses for 2 additional sources). There appears to be an intriguing correlation in that the largest IR excesses are all in Li-rich K giants, though very few lithium-rich K giants have IR excesses (large or small). These largest IR excesses also tend to be found in the fastest rotators. There is no correlation of IR excess with the carbon isotopic ratio, 12C/13C. IR excesses by 20 micrometers, though relatively rare, are at least twice as common among our sample of lithium-rich K giants. If dust shell production is a common by-product of Li enrichment mechanisms, these observations suggest that the IR excess stage is very short-lived, which is supported

  4. Advances in Derivative-Free State Estimation for Nonlinear Systems

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Poulsen, Niels Kjølstad; Ravn, Ole

    In this paper we show that there are considerable advantages to using polynomial approximations obtained with an interpolation formula for the derivation of state estimators for nonlinear systems. The estimators become more accurate than estimators based on Taylor approximations, and yet

  5. Mechanisms linking excess adiposity and carcinogenesis promotion

    Directory of Open Access Journals (Sweden)

    Ana I. Pérez-Hernández

    2014-05-01

    Obesity constitutes one of the most important metabolic diseases, being associated with insulin resistance development and increased cardiovascular risk. The association between obesity and cancer has also been well established for several tumor types, such as breast cancer in postmenopausal women, colorectal cancer, and prostate cancer. Cancer is the first cause of death in developed countries and the second one in developing countries, with high incidence rates around the world. Furthermore, it has been estimated that 15-20% of all cancer deaths may be attributable to obesity. Tumor growth is regulated by interactions between tumor cells and their tissue microenvironment. In this sense, obesity may lead to cancer development through dysfunctional adipose tissue and altered signaling pathways. In this review, three main pathways relating obesity and cancer development are examined: (i) inflammatory changes leading to macrophage polarization and an altered adipokine profile; (ii) insulin resistance development; and (iii) adipose tissue hypoxia. Since obesity and cancer present a high prevalence, the association between these conditions is of great public health significance, and studies showing mechanisms by which obesity leads to cancer development and progression are needed to improve prevention and management of these diseases.

  6. Warm Dust around Cool Stars: Field M Dwarfs with WISE 12 or 22 Micron Excess Emission

    CERN Document Server

    Theissen, Christopher A

    2014-01-01

    Using the SDSS DR7 spectroscopic catalog, we searched the WISE AllWISE catalog to investigate the occurrence of warm dust, as inferred from IR excesses, around field M dwarfs (dMs). We developed SDSS/WISE color selection criteria to identify 175 dMs (from 70,841) that show IR flux greater than typical dM photosphere levels at 12 and/or 22 $\\mu$m, including seven new stars within the Orion OB1 footprint. We characterize the dust populations inferred from each IR excess, and investigate the possibility that these excesses could arise from ultracool binary companions by modeling combined SEDs. Our observed IR fluxes are greater than levels expected from ultracool companions ($>3\\sigma$). We also estimate that the probability the observed IR excesses are due to chance alignments with extragalactic sources is $<$ 0.1%. Using SDSS spectra we measure surface gravity dependent features (K, Na, and CaH 3), and find $<$ 15% of our sample indicate low surface gravities. Examining tracers of youth (H$\\alpha$, UV fl...

  7. Which estimation is more accurate? Technical comments on the Nature paper by Liu et al. on overestimation of China's emissions

    Institute of Scientific and Technical Information of China (English)

    滕飞; 朱松丽

    2015-01-01

    The main conclusions and arguments of the paper "Reduced carbon emission estimates from fossil fuel combustion and cement production in China", published by Liu et al. in Nature in August 2015, are analyzed with respect to the methods, data, and uncertainties of greenhouse gas inventory estimation. Errors in that paper's calculations and comparisons are pointed out; consequently, its conclusion that China's national greenhouse gas inventory overestimates China's emissions does not hold.

  8. Accurate measurement of unsteady state fluid temperature

    Science.gov (United States)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature, and they are then immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter, equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new design of thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model of the thermometer. By comparing the results, it was demonstrated that the new thermometer allows the fluid temperature to be obtained much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible thanks to the low-inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem.
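
    For the first-order case mentioned above, the indicated temperature can be corrected with T_fluid = T_indicated + tau * dT_indicated/dt. The sketch below applies this with a central-difference derivative to a synthetic step response; the 3 s time constant and the step values are assumptions for the example, not the thermometer parameters from the measurements.

```python
import numpy as np

def correct_first_order(t, t_indicated, tau):
    """Reconstruct the fluid temperature from a first-order thermometer reading:
    T_fluid = T_indicated + tau * dT_indicated/dt (central differences)."""
    dTdt = np.gradient(t_indicated, t)
    return t_indicated + tau * dTdt

# Synthetic check: step from 20 C to 100 C, assumed sensor time constant tau = 3 s.
tau = 3.0
t = np.linspace(0.0, 20.0, 400)
indicated = 100.0 - (100.0 - 20.0) * np.exp(-t / tau)
print(correct_first_order(t, indicated, tau)[10:14])   # close to 100 C right after the step
```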

  9. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although using this method takes more time compared with the standard GA, it is really worth applying to cases that demand high solution precision.

  10. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in developing countries is a significant health problem in amebiasis-endemic areas, with a substantial impact on infant mortality [1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improvements in the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease.

  11. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  12. New law requires 'medically accurate' lesson plans.

    Science.gov (United States)

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  13. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    OpenAIRE

    Zhanshan Wang; Longhu Quan; Xiuchong Liu

    2014-01-01

    The control of a high performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method of accurately estimating the rotor position by using both a complex-number motor-model-based position estimation and a position estimation error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rot...

  14. Student estimations of peer alcohol consumption

    DEFF Research Database (Denmark)

    Stock, Christiane; Mcalaney, John; Pischke, Claudia

    2014-01-01

    This article aims to discuss the link between the Social Norms Approach and the Health Promoting University, and analyse estimations of peer alcohol consumption among European university students. METHODS: A total of 4392 students from universities in six European countries and Turkey were asked to report... their own typical alcohol consumption per day and to estimate the same for their peers of same sex. Students were classified as accurate or inaccurate estimators of peer alcohol consumption. Socio-demographic factors and personal alcohol consumption were examined as predictors for an accurate estimation... their peers' alcohol consumption. Independent from these factors, students' accurate estimation of peers' drinking decreased significantly with increasing personal consumption. CONCLUSIONS: As accurate estimates of peer alcohol consumption appear to affect personal drinking behaviour positively, social norms

  15. A Bayesian Framework for Combining Valuation Estimates

    CERN Document Server

    Yee, Kenton K

    2007-01-01

    Obtaining more accurate equity value estimates is the starting point for stock selection, value-based indexing in a noisy market, and beating benchmark indices through tactical style rotation. Unfortunately, discounted cash flow, method of comparables, and fundamental analysis typically yield discrepant valuation estimates. Moreover, the valuation estimates typically disagree with market price. Can one form a superior valuation estimate by averaging over the individual estimates, including market price? This article suggests a Bayesian framework for combining two or more estimates into a superior valuation estimate. The framework justifies the common practice of averaging over several estimates to arrive at a final point estimate.
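
    One concrete way to combine estimates in such a framework, assuming independent Gaussian errors and a flat prior, is the inverse-variance (precision-weighted) average sketched below. The per-share values and error variances are hypothetical, and the article's full framework may weight estimates differently.

```python
def combine_estimates(estimates, variances):
    """Precision-weighted (inverse-variance) combination of valuation estimates:
    the posterior mean and variance under independent Gaussian errors with a flat prior."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    var = 1.0 / sum(weights)
    return mean, var

# Hypothetical per-share values from DCF, comparables, and market price, with assumed error variances.
print(combine_estimates([52.0, 47.0, 49.5], [16.0, 25.0, 4.0]))
```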

  16. Diagnostic accuracy of the defining characteristics of the excessive fluid volume diagnosis in hemodialysis patients

    Directory of Open Access Journals (Sweden)

    Maria Isabel da Conceição Dias Fernandes

    2015-12-01

    Objective: to evaluate the accuracy of the defining characteristics of the excess fluid volume nursing diagnosis of NANDA International, in patients undergoing hemodialysis. Method: this was a study of diagnostic accuracy, with a cross-sectional design, performed in two stages. The first, involving 100 patients from a dialysis clinic and a university hospital in northeastern Brazil, investigated the presence and absence of the defining characteristics of excess fluid volume. In the second step, these characteristics were evaluated by diagnostic nurses, who judged the presence or absence of the diagnosis. To analyze the measures of accuracy, sensitivity, specificity, and positive and negative predictive values were calculated. Approval was given by the Research Ethics Committee under authorization No. 148.428. Results: the most sensitive indicator was edema and most specific were pulmonary congestion, adventitious breath sounds and restlessness. Conclusion: the more accurate defining characteristics, considered valid for the diagnostic inference of excess fluid volume in patients undergoing hemodialysis were edema, pulmonary congestion, adventitious breath sounds and restlessness. Thus, in the presence of these, the nurse may safely assume the presence of the diagnosis studied.
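
    The accuracy measures used above follow directly from the 2x2 table of each defining characteristic against the diagnostic judgement; the sketch below computes them from counts. The counts shown are hypothetical, not the study's results.

```python
def accuracy_measures(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values of a defining characteristic
    against the nurses' diagnostic judgement, from 2x2 table counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for one characteristic (e.g. edema) in 100 patients.
print(accuracy_measures(tp=45, fp=10, fn=5, tn=40))
```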

  17. 3D maps of the local ISM from inversion of individual color excess measurements

    CERN Document Server

    Lallement, Rosine; Valette, Bernard; Puspitarini, Lucky; Eyer, Laurent; Casagrande, Luca

    2013-01-01

    Three-dimensional (3D) maps of the Galactic interstellar matter (ISM) are a potential tool of wide use; however, accurate and detailed maps are still lacking. One of the ways to construct such maps is to invert individual distance-limited ISM measurements, a method we have here applied to measurements of stellar color excess in the optical. We have assembled color excess data together with the associated parallax or photometric distances to constitute a catalog of ~ 23,000 sightlines for stars within 2.5 kpc. The photometric data are taken from Stromgren catalogs, the Geneva photometric database, and the Geneva-Copenhagen survey. We also included extinctions derived towards open clusters. We applied, to this color excess dataset, an inversion method based on a regularized Bayesian approach, previously used for mapping at closer distances. We show the dust spatial distribution resulting from the inversion by means of planar cuts through the differential opacity 3D distribution, and by means of 2D maps of the int...

  18. The e-index, complementing the h-index for excess citations.

    Directory of Open Access Journals (Sweden)

    Chun-Ting Zhang

    Full Text Available BACKGROUND: The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that, in addition to the h² citations for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index. METHODOLOGY/PRINCIPAL FINDINGS: To solve these problems, I here propose the e-index, where e² represents the ignored excess citations, in addition to the h² citations for h-core papers. Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as a and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers. CONCLUSIONS/SIGNIFICANCE: Although simple, the e-index is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index.
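
    A minimal sketch of the two indices described above, computing the h-index and then the e-index as the square root of the citations in the h-core that exceed h²; the citation counts are illustrative only.

        # h-index and its complement, the e-index, from a list of per-paper citation counts.
        import math

        def h_and_e(citations):
            c = sorted(citations, reverse=True)
            h = sum(1 for i, ci in enumerate(c) if ci >= i + 1)   # h-index
            excess = sum(c[:h]) - h * h                           # citations ignored by h
            return h, math.sqrt(excess)                           # e-index = sqrt(excess)

        print(h_and_e([33, 30, 20, 15, 7, 6, 5, 4]))   # illustrative counts, not real data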

  19. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields. Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
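
    The sketch below illustrates the kind of clustered-data fit the book describes, using the GEE implementation in the Python statsmodels package (assumed to be installed); the tiny data frame, variable names, and the choice of a Poisson family with an exchangeable working correlation are illustrative, not examples from the book.

        # GEE fit for clustered counts with an exchangeable working correlation.
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "visits":  [2, 1, 3, 0, 4, 2, 1, 5],
            "exposed": [0, 0, 1, 1, 0, 1, 1, 0],
            "subject": [1, 1, 2, 2, 3, 3, 4, 4],   # repeated measures per subject
        })
        model = smf.gee("visits ~ exposed", groups="subject", data=df,
                        family=sm.families.Poisson(),
                        cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())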

  20. Excess of (236)U in the northwest Mediterranean Sea.

    Science.gov (United States)

    Chamizo, E; López-Lora, M; Bressac, M; Levy, I; Pham, M K

    2016-09-15

    In this work, we present the first ²³⁶U results in the northwestern Mediterranean. ²³⁶U is studied in a seawater column sampled at the DYFAMED (Dynamics of Atmospheric Fluxes in the Mediterranean Sea) station (Ligurian Sea, 43°25'N, 07°52'E). The obtained ²³⁶U/²³⁸U atom ratios in the dissolved phase, ranging from about 2×10⁻⁹ at 100 m depth to about 1.5×10⁻⁹ at 2350 m depth, indicate that anthropogenic ²³⁶U dominates the whole seawater column. The corresponding deep-water column inventory (12.6 ng/m² or 32.1×10¹² atoms/m²) exceeds by a factor of 2.5 the one expected from global fallout at similar latitudes (5 ng/m² or 13×10¹² atoms/m²), evidencing the influence of local or regional ²³⁶U sources in the western Mediterranean basin. On the other hand, the input of ²³⁶U associated with Saharan dust outbreaks is evaluated. An additional ²³⁶U annual deposition of about 0.2 pg/m², based on the study of atmospheric particles collected in Monaco during different Saharan dust intrusions, is estimated. The results obtained for the corresponding suspended solids collected at the DYFAMED station indicate that about 64% of that ²³⁶U stays in solution in seawater. Overall, this source accounts for about 0.1% of the ²³⁶U inventory excess observed at the DYFAMED station. The influence of the so-called Chernobyl fallout and of the radioactive effluents produced by the different nuclear installations around the Mediterranean basin might explain the inventory gap; however, further studies are necessary to reach a conclusion about its origin.

  1. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  2. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.

    2001-07-15

    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41±0.17 μm and a measurement of 1.58±0.08 μm from a scanning electron microscope image of a cross section.

  3. Accurate determination of characteristic relative permeability curves

    Science.gov (United States)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  4. Accurate taxonomic assignment of short pyrosequencing reads.

    Science.gov (United States)

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
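
    A toy sketch of the assignment rule described above: each candidate node below the lowest common ancestor is scored by the precision and recall of its sequence set against the sequences matching the read, and the node with the best F-measure is chosen. The taxonomy, sequence identifiers, and matches are invented for illustration, and the sketch ignores the suffix-array machinery that makes the real method fast.

        # Assign a read to the taxonomy node with the best F-measure (illustrative names).
        taxonomy = {                        # node -> reference sequences below it
            "LCA":       {"s1", "s2", "s3", "s4", "s5"},
            "cladeA":    {"s1", "s2", "s3"},
            "cladeB":    {"s4", "s5"},
            "speciesA1": {"s1", "s2"},
        }
        matches = {"s1", "s2", "s4"}        # reference sequences matching the read

        def f_measure(node_seqs):
            hit = len(node_seqs & matches)
            if hit == 0:
                return 0.0
            precision = hit / len(node_seqs)
            recall = hit / len(matches)
            return 2 * precision * recall / (precision + recall)

        best = max(taxonomy, key=lambda node: f_measure(taxonomy[node]))
        print(best)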

  5. Hydration of proteins: excess partial enthalpies of water and proteins.

    Science.gov (United States)

    Sirotkin, Vladimir A; Khadiullina, Aigul V

    2011-12-22

    Isothermal batch calorimetry was applied to study the hydration of proteins. The hydration process was analyzed by the simultaneous monitoring of the excess partial enthalpies of water and the proteins in the entire range of water content. Four unrelated proteins (lysozyme, chymotrypsinogen A, human serum albumin, and β-lactoglobulin) were used as models. The excess partial quantities are very sensitive to the changes in the state of water and proteins. At the lowest water weight fractions (w₁), the changes of the excess thermochemical functions can mainly be attributed to water addition. A transition from the glassy to the flexible state of the proteins is accompanied by significant changes in the excess partial quantities of water and the proteins. This transition appears at a water weight fraction of 0.06 when charged groups of proteins are covered. Excess partial quantities reach their fully hydrated values at w₁ > 0.5 when coverage of both polar and weakly interacting surface elements is complete. At the highest water contents, water addition has no significant effect on the excess thermochemical quantities. At w₁ > 0.5, changes in the excess functions can solely be attributed to changes in the state of the proteins.

  6. Prevalence of excessive screen time and associated factors in adolescents

    Science.gov (United States)

    de Lucena, Joana Marcela Sales; Cheng, Luanna Alexandra; Cavalcante, Thaísa Leite Mafaldo; da Silva, Vanessa Araújo; de Farias, José Cazuza

    2015-01-01

    Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study with 2874 high school adolescents aged 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyzed were: sociodemographic (gender, age, economic class, and skin color), physical activity and nutritional status of adolescents. Results: The prevalence of excessive screen time was 79.5% (95%CI 78.1-81.1) and it was higher in males (84.3%) compared to females (76.1%; p<0.001). In multivariate analysis, adolescent males, those aged 14-15 years, and those in the highest economic class had higher chances of exposure to excessive screen time. The level of physical activity and nutritional status of adolescents were not associated with excessive screen time. Conclusions: The prevalence of excessive screen time was high and varied according to sociodemographic characteristics of adolescents. It is necessary to develop interventions to reduce the excessive screen time among adolescents, particularly in subgroups with higher exposure. PMID:26298661

  7. Prevalence of excessive screen time and associated factors in adolescents

    Directory of Open Access Journals (Sweden)

    Joana Marcela Sales de Lucena

    2015-12-01

    Full Text Available Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study with 2874 high school adolescents aged 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyzed were: sociodemographic (gender, age, economic class, and skin color), physical activity and nutritional status of adolescents. Results: The prevalence of excessive screen time was 79.5% (95%CI 78.1-81.1) and it was higher in males (84.3%) compared to females (76.1%; p<0.001). In multivariate analysis, adolescent males, those aged 14-15 years, and those in the highest economic class had higher chances of exposure to excessive screen time. The level of physical activity and nutritional status of adolescents were not associated with excessive screen time. Conclusions: The prevalence of excessive screen time was high and varied according to sociodemographic characteristics of adolescents. It is necessary to develop interventions to reduce the excessive screen time among adolescents, particularly in subgroups with higher exposure.

  8. Excess relative risk as an effect measure in case-control studies of rare diseases.

    Directory of Open Access Journals (Sweden)

    Wen-Chung Lee

    Full Text Available Epidemiologists often use ratio-type indices (rate ratio, risk ratio and odds ratio) to quantify the association between exposure and disease. By comparison, less attention has been paid to effect measures on a difference scale (excess rate or excess risk). The excess relative risk (ERR) used primarily by radiation epidemiologists is of peculiar interest here, in that it involves both difference and ratio operations. The ERR index (but not the difference-type indices) is estimable in case-control studies. Using the theory of sufficient component cause model, the author shows that when there is no mechanistic interaction (no synergism in the sufficient cause sense) between the exposure under study and the stratifying variable, the ERR index (but not the ratio-type indices) in a rare-disease case-control setting should remain constant across strata and can therefore be regarded as a common effect parameter. By exploiting this homogeneity property, the related attributable fraction indices can also be estimated with greater precision. The author demonstrates the methodology (SAS codes provided) using a case-control dataset, and shows that ERR preserves the logical properties of the ratio-type indices. In light of the many desirable properties of the ERR index, the author advocates its use as an effect measure in case-control studies of rare diseases.

  9. Excess relative risk as an effect measure in case-control studies of rare diseases.

    Science.gov (United States)

    Lee, Wen-Chung

    2014-01-01

    Epidemiologists often use ratio-type indices (rate ratio, risk ratio and odds ratio) to quantify the association between exposure and disease. By comparison, less attention has been paid to effect measures on a difference scale (excess rate or excess risk). The excess relative risk (ERR) used primarily by radiation epidemiologists is of peculiar interest here, in that it involves both difference and ratio operations. The ERR index (but not the difference-type indices) is estimable in case-control studies. Using the theory of sufficient component cause model, the author shows that when there is no mechanistic interaction (no synergism in the sufficient cause sense) between the exposure under study and the stratifying variable, the ERR index (but not the ratio-type indices) in a rare-disease case-control setting should remain constant across strata and can therefore be regarded as a common effect parameter. By exploiting this homogeneity property, the related attributable fraction indices can also be estimated with greater precision. The author demonstrates the methodology (SAS codes provided) using a case-control dataset, and shows that ERR preserves the logical properties of the ratio-type indices. In light of the many desirable properties of the ERR index, the author advocates its use as an effect measure in case-control studies of rare diseases.
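
    A minimal numeric illustration of the ERR index under the rare-disease assumption, where the odds ratio from a case-control table approximates the risk ratio, ERR is the odds ratio minus one, and the attributable fraction among the exposed follows as ERR/(1+ERR); the counts are invented and the sketch does not reproduce the sufficient-component-cause analysis or the SAS code mentioned in the abstract.

        # Excess relative risk (ERR) from a case-control 2x2 table (illustrative counts).
        a, b = 60, 40    # exposed cases, unexposed cases
        c, d = 30, 70    # exposed controls, unexposed controls

        odds_ratio = (a * d) / (b * c)      # ~ relative risk for a rare disease
        err = odds_ratio - 1.0              # excess relative risk
        af_exposed = err / (1.0 + err)      # attributable fraction among the exposed
        print(odds_ratio, err, af_exposed)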

  10. Analysis of factors associated with excess weight in school children

    Directory of Open Access Journals (Sweden)

    Renata Paulino Pinto

    Full Text Available Abstract Objective: To determine the prevalence of overweight and obesity in schoolchildren aged 10 to 16 years and its association with dietary and behavioral factors. Methods: Cross-sectional study that evaluated 505 adolescents using a structured questionnaire and anthropometric data. The data were analyzed using the t-test for independent samples and the Mann-Whitney test to compare means and medians, respectively, and the chi-square test for proportions. The prevalence ratio (PR) and its 95% confidence interval were used to estimate the degree of association between variables. Logistic regression was employed to adjust the estimates for confounding factors. A significance level of 5% was adopted for all analyses. Results: Excess weight was observed in 30.9% of the schoolchildren: 18.2% overweight and 12.7% obesity. There was no association between weight alterations and dietary/behavioral habits in the bivariate and multivariate analyses. However, associations were observed in relation to gender. Daily consumption of sweets [PR=0.75 (0.64-0.88)] and soft drinks [PR=0.82 (0.70-0.97)] was less frequent among boys; having lunch daily was slightly more often reported by boys [OR=1.11 (1.02-1.22)]. Physical activity practice (≥3 times/week) was more often mentioned by boys, and the association measures disclosed two-fold more physical activity in this group [PR=2.04 (1.56-2.67)] when compared to girls. Approximately 30% of boys and 40% of girls stated they did not perform activities requiring energy expenditure during free periods, with boys being 32% less idle than girls [PR=0.68 (0.60-0.76)]. Conclusions: A high prevalence of both overweight and obesity was observed, as well as unhealthy habits in the study population, regardless of the presence of weight alterations. Health promotion strategies in schools should be encouraged, in order to promote healthy habits and behaviors among all students.
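
    As a worked illustration of the main association measure used above, the sketch below computes a prevalence ratio and its 95% confidence interval from a 2x2 table using the usual log-scale standard error; the counts are invented and are not the study data.

        # Prevalence ratio (PR) with a 95% confidence interval from a 2x2 table.
        import math

        a, n1 = 45, 250   # excess weight among exposed, total exposed (illustrative)
        c, n0 = 110, 255  # excess weight among unexposed, total unexposed (illustrative)

        pr = (a / n1) / (c / n0)
        se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)        # SE of log(PR)
        lo, hi = (math.exp(math.log(pr) + z * se_log) for z in (-1.96, 1.96))
        print(f"PR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")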

  11. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan

    2009-11-01

    Full Text Available Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI at regional scales accurately. Variations of background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative method (DSD) is proposed in this paper. This method can estimate LAI accurately through analyzing the canopy anisotropy. The effect of the background can also be effectively removed. Thus the inversion precision and the dynamic range can be improved remarkably, as has been proved by numerical simulations. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach, by which the data can be de-noised in the spectral and spatial dimensions simultaneously. The filtering method is shown to remove random noise effectively; therefore, it can be applied to the remotely sensed hyperspectral image. The study region is situated in Zhangye, Gansu Province, China; the hyperspectral and multi-angular image of the study region was acquired from the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against the ground truth of 11 sites. It is shown that, by applying the innovative filtering method, the new LAI inversion method is accurate and effective.

  12. GLIMPSE: Accurate 3D weak lensing reconstructions using sparsity

    CERN Document Server

    Leonard, Adrienne; Starck, Jean-Luc

    2013-01-01

    We present GLIMPSE - Gravitational Lensing Inversion and MaPping with Sparse Estimators - a new algorithm to generate density reconstructions in three dimensions from photometric weak lensing measurements. This is an extension of earlier work in one dimension aimed at applying compressive sensing theory to the inversion of gravitational lensing measurements to recover 3D density maps. Using the assumption that the density can be represented sparsely in our chosen basis - 2D transverse wavelets and 1D line-of-sight Dirac functions - we show that clusters of galaxies can be identified and accurately localised and characterised using this method. Throughout, we use simulated data consistent with the quality currently attainable in large surveys. We present a thorough statistical analysis of the errors and biases in both the redshifts of detected structures and their amplitudes. The GLIMPSE method is able to produce reconstructions at significantly higher resolution than the input data; in this paper we show reco...

  13. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E

    2014-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
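
    The effect-size idea can be illustrated with a rough z-score: compare the maximum modularity found in the observed network with the distribution of maximum modularity over Erdős–Rényi graphs of the same size. The sketch below uses the greedy modularity heuristic from the networkx package (assumed available) rather than the spectral algorithm of the paper, and it omits the finite-size corrections the authors derive.

        # Rough modularity z-score against an Erdos-Renyi null ensemble (illustrative).
        import statistics
        import networkx as nx
        from networkx.algorithms import community

        G = nx.karate_club_graph()
        q_obs = community.modularity(G, community.greedy_modularity_communities(G))

        null = []
        for seed in range(50):
            R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed)
            null.append(community.modularity(R, community.greedy_modularity_communities(R)))

        z = (q_obs - statistics.mean(null)) / statistics.stdev(null)
        print(q_obs, z)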

  14. Fast and spectrally accurate summation of 2-periodic Stokes potentials

    CERN Document Server

    Lindbo, Dag

    2011-01-01

    We derive a Ewald decomposition for the Stokeslet in planar periodicity and a novel PME-type O(N log N) method for the fast evaluation of the resulting sums. The decomposition is the natural 2P counterpart to the classical 3P decomposition by Hasimoto, and is given in an explicit form not found in the literature. Truncation error estimates are provided to aid in selecting parameters. The fast, PME-type, method appears to be the first fast method for computing Stokeslet Ewald sums in planar periodicity, and has three attractive properties: it is spectrally accurate; it uses the minimal amount of memory that a gridded Ewald method can use; and it provides clarity regarding numerical errors and how to choose parameters. Analytical and numerical results are given to support this. We explore the practicalities of the proposed method, and survey the computational issues involved in applying it to 2-periodic boundary integral Stokes problems.

  15. Excess hafnium-176 in meteorites and the early Earth zircon record

    DEFF Research Database (Denmark)

    Bizzarro, Martin; Connelly, J.N.; Thrane, K.

    2012-01-01

    suggesting early extraction of a continental crust (>4.5 Gyr) but fail to identify a prevalent complementary depleted mantle reservoir, suggesting that crust formation processes in the early Earth were fundamentally distinct from today. However, this conclusion assumes that the Hf-isotope composition of bulk...... chondrite meteorites can be used to estimate the composition of Earth prior to its differentiation into major silicate reservoirs, namely the bulk silicate Earth (BSE). We report a Lu-Hf internal mineral isochron age of 4869 ± 34 Myr for the pristine SAH99555 angrite meteorite. This age is ~300 Myr older...... than the age of the Solar System, confirming the existence of an energetic process yielding excess ¹⁷⁶Hf in affected early formed Solar System objects through the production of the Lu isomer (t ~3.9 hours). Thus, chondrite meteorites contain excess ¹⁷⁶Hf and their present-day composition cannot be used

  16. Suzaku observations of X-ray excess emission in the cluster of galaxies A 3112

    Science.gov (United States)

    Lehto, T.; Nevalainen, J.; Bonamente, M.; Ota, N.; Kaastra, J.

    2010-12-01

    Aims: We analysed the Suzaku XIS1 data of the A 3112 cluster of galaxies in order to examine the X-ray excess emission in this cluster reported earlier with the XMM-Newton and Chandra satellites. Methods: We performed X-ray spectroscopy on the data of a single large region. We carried out simulations to estimate the systematic uncertainties affecting the X-ray excess signal. Results: The best-fit temperature of the intracluster gas depends strongly on the choice of the energy band used for the spectral analysis. This proves the existence of an excess emission component in addition to the single-temperature MEKAL in A 3112. We showed that this effect is not an artifact due to uncertainties of the background modeling, instrument calibration or the amount of Galactic absorption. Neither the PSF scatter of the emission from the cool core nor the projection of the cool gas in the cluster outskirts produces the effect. Finally, we modeled the excess emission by using either an additional MEKAL or a power-law component. Due to the small differences between the thermal and non-thermal models, we cannot rule out the non-thermal origin of the excess emission based on the goodness of the fit. Assuming that it has a thermal origin, we further examined differential emission measure (DEM) models. We utilised two different DEM models, a Gaussian differential emission measure distribution (GDEM) and a WDEM model, where the emission measure of a number of thermal components is distributed as a truncated power law. The best-fit XIS1 MEKAL temperature for the 0.4-7.0 keV band is 4.7 ± 0.1 keV, consistent with that obtained using the GDEM and WDEM models.

  17. Protective effect of D-ribose against inhibition of rats testes function at excessive exercise

    Directory of Open Access Journals (Sweden)

    Chigrinskiy E.A.

    2011-09-01

    Full Text Available An increasing number of research studies point to participation in endurance exercise training as having significant detrimental effects upon reproductive hormonal profiles in men. The means used for prevention and correction of fatigue are ineffective for sexual function recovery and have contraindications and numerous side effects. The search for substances that effectively restore body functions after overtraining while sparing the reproductive function, and that have no contraindications precluding their long and frequent use, is an important trend of studies. One of the candidate substances is ribose, used for correction of fatigue in athletes engaged in some sports. We studied the role of ribose deficit in the metabolism of the testes under conditions of excessive exercise and the potential of ribose use for restoration of the endocrine function of these organs. Forty-five male Wistar rats weighing 240±20 g were used in this study. Animals were divided into 3 groups (n=15): control; excessive exercise; and excessive exercise receiving ribose treatment. Plasma concentrations of lactic, β-hydroxybutyric and uric acids, luteinizing hormone, and total and free testosterone were measured by biochemical and ELISA methods. The superoxide dismutase, catalase, glutathione peroxidase, glutathione reductase and glucose-6-phosphate dehydrogenase activities, and the uric acid, malondialdehyde, glutathione, ascorbic acid and testosterone levels, were estimated in the testis samples. Acute disorders of purine metabolism develop in rat testes under conditions of excessive exercise. These disorders are characterized by enhanced catabolism and reduced reutilization of purine mononucleotides and activation of oxidative stress against the background of reduced activities of the pentose phosphate pathway and antioxidant system. Administration of D-ribose to rats subjected to excessive exercise improves purine reutilization, stimulates the pentose phosphate pathway work

  18. EEG-derived estimators of present and future cognitive performance

    Directory of Open Access Journals (Sweden)

    Maja eStikic

    2011-08-01

    Full Text Available Previous EEG-based fatigue-related research primarily focused on the association between concurrent cognitive performance and time-locked physiology. The goal of this study was to investigate the capability of EEG to assess the impact of fatigue on both present and future cognitive performance during a 20 min sustained attention task, the 3-Choice Active Vigilance Task (3CVT), that requires subjects to discriminate one primary target from two secondary non-target geometric shapes. The current study demonstrated the ability of EEG to estimate not only present, but also future cognitive performance, utilizing a single, combined reaction time and accuracy performance metric. The correlations between observed and estimated performance, for both present and future performance, were strong (up to 0.89 and 0.79, respectively). The models were able to consistently estimate unacceptable performance throughout the entire 3CVT, i.e., excessively missed responses and/or slow reaction times, while acceptable performance was recognized less accurately later in the task. The developed models were trained on a relatively large dataset (n=50 subjects) to increase stability. Cross-validation results suggested the models were not over-fitted. This study indicates that EEG can be used to predict gross-performance degradations 5 to 15 min in advance.

  19. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  20. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider decoupling effects of the top quark and the Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects are given at the standard model scale and are independent of high energy physics, our method can basically be applied to any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.

  1. Accurate Weather Forecasting for Radio Astronomy

    Science.gov (United States)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Services (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, humidity as a function of height. I use the Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.

  2. Excess Molar Volume of Binary Systems Containing Mesitylene

    Directory of Open Access Journals (Sweden)

    Morávková, L.

    2013-05-01

    Full Text Available This paper presents a review of density measurements for binary systems containing 1,3,5-trimethylbenzene (mesitylene) with a variety of organic compounds at atmospheric pressure. Literature data for the binary systems were divided into nine basic groups according to the type of organic compound combined with mesitylene. The excess molar volumes calculated from the experimental density values have been compared with literature data. Densities were measured by a few experimental methods, namely using a pycnometer, a dilatometer or a commercial apparatus. An overview of the experimental data and of the shape of the excess molar volume curve versus mole fraction is presented in this paper. The excess molar volumes were correlated by the Redlich–Kister equation. The standard deviations for the fits of excess molar volume versus mole fraction are compared. The literature data found cover a wide temperature range, from 288.15 to 343.15 K.
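
    A minimal sketch of the correlation step mentioned above: excess molar volumes are fitted by linear least squares to a Redlich–Kister polynomial, VE = x1·x2·Σk Ak·(x1−x2)^k, and the standard deviation of the fit is reported. The mole fractions and VE values below are invented for illustration and are not data from the reviewed systems.

        # Least-squares fit of excess molar volumes to a Redlich-Kister polynomial.
        import numpy as np

        x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])                  # mole fraction of mesitylene
        ve = np.array([-0.050, -0.120, -0.150, -0.110, -0.045])   # VE in cm^3/mol (illustrative)

        n_terms = 3
        x2 = 1.0 - x1
        basis = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(n_terms)])
        coeffs, *_ = np.linalg.lstsq(basis, ve, rcond=None)       # Redlich-Kister coefficients A_k
        residuals = ve - basis @ coeffs
        sigma = np.sqrt(np.sum(residuals ** 2) / (len(ve) - n_terms))   # std. deviation of the fit
        print(coeffs, sigma)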

  3. Iodine deficiency and iodine excess in Jiangsu Province, China

    NARCIS (Netherlands)

    Zhao, J.

    2001-01-01

    Keywords: iodine deficiency, iodine excess, endemic goiter, drinking water, iodine intake, thyroid function, thyroid size, iodized salt, iodized oil, IQ, physical development, hearing capacity, epidemiology, meta-analysis, IDD, randomized trial, intervention, USA, Bangladesh, China. Endemic goiter can

  4. Characteristics of adolescent excessive drinkers compared with consumers and abstainers

    NARCIS (Netherlands)

    Tomcikova, Zuzana; Geckova, Andrea Madarasova; van Dijk, Jitse P.; Reijneveld, Sijmen A.

    2011-01-01

    Introduction and Aims. This study aimed at comparing adolescent abstainers, consumers and excessive drinkers in terms of family characteristics (structure of family, socioeconomic factors), perceived social support, personality characteristics (extraversion, self-esteem, aggression) and well-being.

  5. Diseases and disorders associated with excess body weight.

    Science.gov (United States)

    Knight, Joseph A

    2011-01-01

    Excess body weight is a very serious problem, especially in North America and Europe. It has been referred to as a "pandemic" since it has progressively increased over the past several decades. Moreover, excess body weight significantly increases the risk of numerous diseases and clinical disorders, including all-cause mortality, coronary and cerebrovascular diseases, various cancers, type 2 diabetes mellitus, hypertension, liver disease and asthma, as well as psychopathology, among others. Unfortunately, overweight and obesity are now common in both young children and adolescents. Although the causes of excess body weight are multi-factorial, the most important factors are excess caloric intake coupled with limited energy expenditure. Therefore, lifestyle modification can significantly reduce the risk of morbidity and mortality and thereby increase longevity and improve the quality of life.

  6. Gene Linked to Excess Male Hormones in Female Infertility Disorder

    Science.gov (United States)

    ... Gene linked to excess male hormones in female infertility disorder Discovery by NIH-supported researchers may lead ... androgens, symptoms of PCOS include irregular menstrual cycles, infertility, and insulin resistance (difficulty using insulin.) The condition ...

  7. Study Counters Link Between Excess Pregnancy Weight and Overweight Kids

    Science.gov (United States)

    ... Study Counters Link Between Excess Pregnancy Weight and Overweight Kids Connection is likely in the genes, researchers ... 24, 2017 (HealthDay News) -- Kids whose moms were overweight during pregnancy have increased odds of being overweight ...

  8. Changes in Blood Components in Aphtha Patients with Excess Heat.

    Science.gov (United States)

    Qin, Lu; Li, Yan; Jiao, Yifeng; Fu, Danqing; Ye, Li; Ji, Jinjun; Xie, Guanqun; Fan, Yongsheng; Xu, Li

    2016-01-01

    "Superior heat" is a popularization expression in TCM heat syndrome and has no counterpart in the modern medical system concept. Oral ulcer is considered to be a kind of clinical manifestation of "superior heat." Aphtha is a common and frequently occurring disease, which can be divided into excess heat and Yin deficiency. The aphtha of excess heat manifests the syndromes of acute occurrence, severe local symptoms, obvious swelling and pain, red tongue, yellow coating, and fast-powerful pulse. In this study, we found that there was an abnormal immune regulation in aphtha patients induced by excess heat. There are changes in the blood components, including abnormal serum protein expression (IL-4, MMP-19, MMP-9, and Activin A) and a higher percentage of CD4(+)CD25(+)Treg cells in the peripheral blood lymphocytes of the EXP group. Changes in the blood environment may be an important factor in the occurrence of aphtha caused by excess heat.

  9. Fetal Programming of Obesity: Maternal Obesity and Excessive Weight Gain

    OpenAIRE

    Seray Kabaran

    2014-01-01

    The prevalence of obesity is an increasing health problem throughout the world. Maternal pre-pregnancy weight, maternal nutrition and maternal weight gain are among the factors that can cause childhood obesity. Both maternal obesity and excessive weight gain increase the risks of excessive fetal weight gain and high birth weight. Rapid weight gain during the fetal period leads to changes in the newborn body composition. Specifically, the increase in body fat ratio in the early periods is associat...

  10. An opportunity of application of excess factor in hydrology

    OpenAIRE

    Kovalenko, V.(V. Fock Institute for Physics, St. Petersburg State University, St. Petersburg, Russia); Gaidukova, E.; Kachalova, A.

    2012-01-01

    In the last few years, interest in the excess factor has appeared in hydrology as a reaction to unsuccessful attempts to simulate and predict evolving hydrological processes, whose attributive property is statistical instability. The article shows that the latter takes place under strong relative multiplicative noises of the probabilistic stochastic model of river flow formation, whose phenomenological displays are "thick tails" and polymodality, for which the excess factor "answers", by being ig...

  11. Limit Theorems For Closed Queuing Networks With Excess Of Servers

    OpenAIRE

    Tsitsiashvili, G.

    2013-01-01

    In this paper, limit theorems for closed queuing networks with an excess of servers are formulated and proved. The first theorem is a variant of the central limit theorem and is proved using classical results of V.I. Romanovskiy for discrete Markov chains. The second theorem considers convergence to a chi-square distribution. These theorems are mainly based on an assumption of an excess of servers in the queuing nodes.

  12. When does the mean excess plot look linear?

    CERN Document Server

    Ghosh, Souvik

    2010-01-01

    In risk analysis, the mean excess plot is a commonly used exploratory plotting technique for confirming that iid data are consistent with a generalized Pareto assumption for the underlying distribution, since in the presence of such a distribution thresholded data have a mean excess plot that is roughly linear. Does any other class of distributions share this linearity of the plot? Under some extra assumptions, we are able to conclude that only the generalized Pareto family has this property.
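
    A small numerical sketch of the diagnostic itself, assuming nothing about the paper's proof: the empirical mean excess function e(u) = E[X − u | X > u] is evaluated on a heavy-tailed sample over a grid of thresholds, which is the quantity whose near-linearity in u the plot is used to check.

        # Empirical mean excess function over a grid of thresholds (illustrative sample).
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.pareto(2.5, size=5000) + 1.0          # heavy-tailed sample

        thresholds = np.quantile(x, np.linspace(0.05, 0.95, 30))
        mean_excess = [np.mean(x[x > u] - u) for u in thresholds]
        for u, e in zip(thresholds, mean_excess):
            print(f"u = {u:7.3f}   e(u) = {e:7.3f}")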

  13. Mode-dependent attenuation of optical fibers: excess loss.

    Science.gov (United States)

    Olshansky, R; Nolan, D A

    1976-04-01

    A theory is presented for calculating the excess loss produced by random perturbations of optical fibers. The theory is applicable to perturbations whose longitudinal spatial frequencies are below the range required for mode coupling. To illustrate the method, losses due to diameter variations are calculated for the case of a step-index optical fiber. The diameter variations are found to produce a strong attenuation of the higher order modes. The total excess loss is approximately wavelength independent.

  14. Asymmetric Dark Matter Models and the LHC Diphoton Excess

    DEFF Research Database (Denmark)

    Frandsen, Mads T.; Shoemaker, Ian M.

    2016-01-01

    The existence of dark matter (DM) and the origin of the baryon asymmetry are persistent indications that the SM is incomplete. More recently, the ATLAS and CMS experiments have observed an excess of diphoton events with invariant mass of about 750 GeV. One interpretation of this excess is decays...... have for models of asymmetric DM that attempt to account for the similarity of the dark and visible matter abundances....

  15. Searching For Infrared Excesses Around White Dwarf Stars

    Science.gov (United States)

    Deeb Wilson, Elin; Rebull, Luisa M.; Debes, John H.; Stark, Chris

    2017-01-01

    Many WDs have been found to be "polluted," meaning they contain heavier elements in their atmospheres. Either an active process that counters gravitational settling is taking place, or an external mechanism is the cause. One proposed external mechanism for atmospheric pollution of WDs is the disintegration and accretion of rocky bodies, which would result in a circumstellar (CS) disk. As CS disks are heated, they emit excess infrared (IR) emission. WDs with IR excesses indicative of a CS disk are known as dusty WDs. Statistical studies are still needed to determine how numerous dusty, polluted WDs are, along with trends and correlations regarding the rate of planetary accretion, the lifetimes of CS disks, and the structure and evolution of CS disks. These findings will allow for a better understanding of the fates of planets, along with the potential habitability of surviving planets. In this work, we are trying to confirm IR excesses around a sample of 69 WD stars selected as part of the WISE InfraRed Excesses around Degenerates (WIRED) Survey (Debes et al. 2011). We have archival data from WISE, Spitzer, 2MASS, DENIS, and SDSS. The targets were initially selected from the Sloan Digital Sky Survey (SDSS), and identified as containing IR excesses based on WISE data. We also have data from the Four Star Infrared Camera array, which is part of Carnegie Institution's Magellan 6.5 meter Baade Telescope located at Las Campanas Observatory in Chile. These Four Star data are of much higher spatial resolution than the WISE data that were used to determine if each WD has an IR excess. There are often not many bands delineating the IR excess portion of the SED; therefore, we are using the Four Star data to check if there is another source in the WISE beam affecting the IR excess.

  16. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the case of delayed feedback. With accurate information travelers prefer the best-condition route, yet delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, causing capacity to decrease, oscillations to increase, and the system to deviate from the equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps to improve efficiency in terms of capacity, oscillations and the gap from the system equilibrium.

  17. Methods for age estimation

    Directory of Open Access Journals (Sweden)

    D. Sümeyra Demirkıran

    2014-03-01

    Full Text Available The concept of age estimation plays an important role in both civil law and the regulation of criminal behavior. In forensic medicine, age estimation is practiced for individual requests as well as at the request of the court. This study aims to compile the methods of age estimation and to make recommendations for solving the problems encountered. In the radiological method, the epiphyseal lines of the bones and views of the teeth are used. To estimate age by comparing bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA) and the "Adli Tıpta Yaş Tayini" (ATYT) books are used. According to the forensic age estimations described in the ATYT book, bone age is found to be on average about 2 years older than chronological age, especially in puberty. For age estimation from teeth, the Demirjian method is used. Over time, different methods have been developed by modifying the Demirjian method; however, no fully accurate method has been found. Histopathological studies have been done on bone marrow cellularity and dermis cells, but no correlation was found between histopathological findings and chronological age. Current age estimation methods raise important ethical and legal issues, especially in the teenage period. It is therefore necessary to prepare bone age atlases compatible with our society by collecting the findings of studies in Turkey. Another recommendation is that courts pay close attention to age-raising trials of teenage women and give special emphasis to birth and population records.

  18. Influenza excess mortality from 1950-2000 in tropical Singapore.

    Directory of Open Access Journals (Sweden)

    Vernon J Lee

    Full Text Available INTRODUCTION: Tropical regions have been shown to exhibit different influenza seasonal patterns compared to their temperate counterparts. However, there is little information about the burden of annual tropical influenza epidemics across time, or about the relationship between tropical influenza epidemics and those in other regions. METHODS: Data on monthly national mortality and population were obtained from 1947 to 2003 in Singapore. To determine excess mortality, we used a moving average analysis for each month from 1950 to 2000. From 1972, influenza viral surveillance data were available. Before 1972, information was obtained from serial annual government reports, peer-reviewed journal articles and press articles. RESULTS: The influenza pandemics of 1957 and 1968 resulted in substantial mortality. In addition, there were 20 other time points with significant excess mortality. Of the 12 periods with significant excess mortality post-1972, only one point (1988) did not correspond to recorded influenza activity. For the 8 periods with significant excess mortality before 1972, excluding the pandemic years, 2 years (1951 and 1953) had newspaper reports of increased pneumonia deaths. Excess mortality could be observed in almost all periods with recorded influenza outbreaks but did not always exceed the 95% confidence limits of the baseline mortality rate. CONCLUSION: Influenza epidemics were the likely cause of most excess mortality periods in post-war tropical Singapore, although not every epidemic resulted in high mortality. It is therefore important to have good influenza surveillance systems in place to detect influenza activity.
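
    A minimal sketch of the excess-mortality idea described in the methods, assuming a simple baseline built from the same calendar month in surrounding years; the synthetic monthly series and the window choice are illustrative and do not reproduce the Singapore data or the exact moving-average specification of the study.

        # Flag excess mortality when a month exceeds a same-calendar-month baseline.
        import numpy as np

        rng = np.random.default_rng(1)
        deaths = rng.poisson(1000, size=120).astype(float)   # 10 years of monthly counts
        deaths[60] += 250                                     # inject an epidemic month

        for t in range(24, len(deaths) - 24):
            same_month = np.concatenate([deaths[t - 24 : t : 12],        # 2 prior years
                                         deaths[t + 12 : t + 25 : 12]])  # 2 following years
            baseline, sd = same_month.mean(), same_month.std(ddof=1)
            if deaths[t] > baseline + 1.96 * sd:                         # above 95% limit
                print(f"month {t}: observed {deaths[t]:.0f}, baseline {baseline:.0f}, "
                      f"excess {deaths[t] - baseline:.0f}")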

  19. How Dusty Is Alpha Centauri? Excess or Non-excess over the Infrared Photospheres of Main-sequence Stars

    Science.gov (United States)

    Wiegert, J.; Liseau, R.; Thebault, P.; Olofsson, G.; Mora, A.; Bryden, G.; Marshall, J. P.; Eiroa, C.; Montesinos, B.; Ardila, D.; Augereau, J. C.; Aran, A. Bayo; Danchi, W. C.; del Burgo, C.; Ertel, S.; Fridlund, M. C. W.; Hajigholi, M.; Krivov, A. V.; Pilbratt, G. L.; Roberge, A.; White, G. J.; Wolf, S.

    2014-01-01

    Context. Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby, solar-type binary α Centauri have metallicities that are higher than solar, which is thought to promote giant planet formation. Aims. We aim to determine the level of emission from debris around the stars in the α Cen system. This requires knowledge of their photospheres. Having already detected the temperature minimum, Tmin, of α Cen A at far-infrared wavelengths, we here attempt to do the same for the more active companion α Cen B. Using the α Cen stars as templates, we study the possible effects that Tmin may have on the detectability of unresolved dust discs around other stars. Methods. We used Herschel-PACS, Herschel-SPIRE, and APEX-LABOCA photometry to determine the stellar spectral energy distributions in the far infrared and submillimetre. In addition, we used APEX-SHeFI observations for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and of particulate discs, based on particle simulations and in conjunction with radiative transfer calculations, were used to estimate the amount of debris around these stars. Results. For solar-type stars more distant than α Cen, a fractional dust luminosity f_d = L_dust/L_star of about 2×10⁻⁷ could account for SEDs that do not exhibit the Tmin effect. This is comparable to estimates of f_d for the Edgeworth-Kuiper belt of the solar system. In contrast to the far infrared, slight excesses at the 2.5σ level are observed at 24 μm for both α Cen A and B, which, if interpreted as due to zodiacal-type dust emission, would correspond to f_d of about (1-3)×10⁻⁵, i.e. some 10² times that of the local zodiacal cloud. Assuming simple power-law size distributions of the dust grains, dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the α Cen stars, viz. ~4×10⁻⁶ M$ of 4 to 1000 μm size grains, distributed according to n(a) ∝ a^(-3.5). Similarly, for filled-in Tmin

  20. Excess mortality in the Soviet Union: a reconsideration of the demographic consequences of forced industrialization 1929-1949.

    Science.gov (United States)

    Rosefielde, S

    1983-01-01

    A reconsideration of the extent of excess mortality resulting from the policy of forced industrialization in the USSR between 1929 and 1949 is presented. The study is based on recently published, adjusted serial data on natality in the 1930s and on data from the suppressed census of 1937. These data suggest that excess mortality due to Stalin's policies, including the forced labor camp system, may have involved a minimum of 12.6 million and a maximum of more than 23.5 million deaths. Various alternative estimates using different methods and data sources are compared.

  1. Public health impacts of excess NOx emissions from Volkswagen diesel passenger vehicles in Germany

    Science.gov (United States)

    Chossière, Guillaume P.; Malina, Robert; Ashok, Akshay; Dedoussi, Irene C.; Eastham, Sebastian D.; Speth, Raymond L.; Barrett, Steven R. H.

    2017-03-01

    In September 2015, the Volkswagen Group (VW) admitted the use of ‘defeat devices’ designed to lower emissions measured during VW vehicle testing for regulatory purposes. Globally, 11 million cars sold between 2008 and 2015 are affected, including about 2.6 million in Germany. On-road emissions tests have yielded mean on-road NOx emissions for these cars of 0.85 g km⁻¹, over four times the applicable European limit of 0.18 g km⁻¹. This study estimates the human health impacts and costs associated with excess emissions from VW cars driven in Germany. A distribution of on-road emissions factors is derived from existing measurements and combined with sales data and a vehicle fleet model to estimate total excess NOx emissions. These emissions are distributed on a 25 by 28 km grid covering Europe, using the German Federal Environmental Protection Agency’s (UBA) estimate of the spatial distribution of NOx emissions from passenger cars in Germany. We use the GEOS-Chem chemistry-transport model to predict the corresponding increase in population exposure to fine particulate matter and ozone in the European Union, Switzerland, and Norway, and a set of concentration-response functions to estimate mortality outcomes in terms of early deaths and of life-years lost. Integrated over the sales period (2008–2015), we estimate median mortality impacts from VW excess emissions in Germany to be 1200 premature deaths in Europe, corresponding to 13 000 life-years lost and 1.9 billion EUR in costs associated with life-years lost. Approximately 60% of mortality costs occur outside Germany. For the current fleet, we estimate that if on-road emissions for all affected VW vehicles in Germany are reduced to the applicable European emission standard by the end of 2017, this would avert 29 000 life-years lost and 4.1 billion 2015 EUR in health costs (median estimates) relative to a counterfactual case with no recall.

  2. Automatic classification and accurate size measurement of blank mask defects

    Science.gov (United States)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. Blank mask defect impact analysis directly depends on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking of defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge associated with them makes the manual defect review process an arduous task, in addition to adding sensitivity to human errors. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CDSEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT-based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  3. Fast and accurate exhaled breath ammonia measurement.

    Science.gov (United States)

    Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H

    2014-06-11

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations.

  4. Noninvasive hemoglobin monitoring: how accurate is enough?

    Science.gov (United States)

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E

    2013-10-01

    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.
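    To make the error-grid idea concrete, the sketch below classifies a (reference, measured) hemoglobin pair by whether the measurement error could flip the transfusion decision. The zone boundaries and the 7 g/dL threshold are hypothetical illustrations, not the grid proposed in the review.

```python
# Minimal sketch of an error-grid-style classification for hemoglobin readings.
# The zone boundaries below are hypothetical, not the grid from the cited review.

def hb_error_zone(reference_g_dl: float, measured_g_dl: float,
                  transfusion_threshold: float = 7.0) -> str:
    """Classify a (reference, measured) pair by whether the error could
    change the transfusion decision."""
    error = abs(measured_g_dl - reference_g_dl)
    same_side = (reference_g_dl >= transfusion_threshold) == \
                (measured_g_dl >= transfusion_threshold)
    if error <= 1.0 and same_side:
        return "A: clinically acceptable"
    if same_side:
        return "B: large error, decision unchanged"
    return "C: error crosses the transfusion threshold"

print(hb_error_zone(7.4, 6.5))   # -> zone C in this illustrative grid
```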

  5. Accurate free energy calculation along optimized paths.

    Science.gov (United States)

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
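    Once a smooth path is available, the free energy difference can be accumulated window by window, for example by thermodynamic integration. The sketch below shows only the numerical integration step; in practice the per-window averages would come from restrained simulations along the optimized path, and are replaced here by a synthetic stand-in curve.

```python
# Minimal numerical step of thermodynamic integration along a discretized path.
# In practice the per-window averages <dU/dlambda> come from restrained
# simulations; here a smooth synthetic curve stands in for them.
import numpy as np

lambdas = np.linspace(0.0, 1.0, 11)               # coupling-parameter windows
mean_dU_dlambda = 5.0 * np.cos(np.pi * lambdas)   # stand-in window averages (kcal/mol)

delta_F = np.trapz(mean_dU_dlambda, lambdas)      # trapezoid-rule integration
print(f"Estimated free energy difference: {delta_F:.3f} kcal/mol")
```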

  6. Accurate fission data for nuclear safety

    CERN Document Server

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  7. Towards Accurate Modeling of Moving Contact Lines

    CERN Document Server

    Holmgren, Hanna

    2015-01-01

    A main challenge in numerical simulations of moving contact line problems is that the adherence, or no-slip, boundary condition leads to a non-integrable stress singularity at the contact line. In this report we perform the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. We assume that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity exists. This expression is used to implement boundary conditions for the moving contact line at a macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size $h$ at a rate of convergence $...

  8. Accurate lineshape spectroscopy and the Boltzmann constant.

    Science.gov (United States)

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-10-14

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.
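    For reference, the textbook Doppler-broadening relation that connects a measured fractional linewidth and the gas temperature to the Boltzmann constant can be written as follows (standard notation, not taken from the paper):

```latex
\frac{\Delta\nu_D}{\nu_0} = \sqrt{\frac{2 k_B T}{m c^2}}
\qquad\Longrightarrow\qquad
k_B = \frac{m c^2}{2T}\left(\frac{\Delta\nu_D}{\nu_0}\right)^{2},
```

    where Δν_D is the 1/e half-width of the Doppler profile, ν_0 the transition frequency, m the atomic mass, and T the temperature of the vapour.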

  9. Accurate upper body rehabilitation system using kinect.

    Science.gov (United States)

    Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit

    2016-08-01

    The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards of accuracy when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect Depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in Range of Motion (ROM) angle, thus enabling more accurate measurements for upper limb exercises.
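    A minimal version of the segment-length constraint can be written as a projection of the noisy child joint onto a sphere of calibrated radius around the parent joint. This is only a sketch of the constraint, not the authors' full RGB-D optimization; the joint coordinates and segment length below are made up.

```python
# Minimal sketch (not the authors' optimizer): enforcing a fixed body-segment
# length on noisy joint estimates by projecting the child joint onto a sphere
# of the calibrated segment length around the parent joint.
import numpy as np

def enforce_segment_length(parent: np.ndarray, child: np.ndarray,
                           segment_length: float) -> np.ndarray:
    """Return the child joint moved along the parent->child direction so that
    the segment has the calibrated length."""
    direction = child - parent
    norm = np.linalg.norm(direction)
    if norm == 0:
        return child.copy()
    return parent + direction / norm * segment_length

shoulder = np.array([0.0, 1.4, 2.0])
elbow_raw = np.array([0.05, 1.12, 2.03])     # noisy Kinect estimate (made up)
print(enforce_segment_length(shoulder, elbow_raw, segment_length=0.30))
```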

  10. Accurate thermoplasmonic simulation of metallic nanoparticles

    Science.gov (United States)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially for the case where many incidents are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometry configuration of array, beam direction, and light wavelength.

  11. EXCESSIVE INTERNET USE AND PSYCHOPATHOLOGY: THE ROLE OF COPING

    Directory of Open Access Journals (Sweden)

    Daria J. Kuss

    2017-02-01

    Objective: In 2013, the American Psychiatric Association included Internet Gaming Disorder in the diagnostic manual as a condition which requires further research, indicating the scientific and clinical community are aware of potential health concerns as a consequence of excessive Internet use. From a clinical point of view, it appears that excessive/addictive Internet use is often comorbid with further psychopathologies, and assessing comorbidity is relevant to clinical practice, treatment outcome and prevention, as the probability of becoming addicted to using the Internet increases with additional (subclinical) symptoms. Moreover, research indicates individuals play computer games excessively to cope with everyday stressors and to regulate their emotions by applying media-focused coping strategies, suggesting pathological computer game players play in order to relieve stress and to avoid daily hassles. The aims of this research were to replicate and extend previous findings and explanations of the complexities of the relationships between excessive Internet use and Internet addiction, psychopathology and dysfunctional coping strategies. Method: Participants included 681 Polish university students sampled using an online battery of validated psychometric instruments. Results: Results of structural equation models revealed dysfunctional coping strategies (i.e., distraction, denial, self-blame, substance use, venting, media use, and behavioural disengagement) significantly predict excessive Internet use, and the data fit the theoretical model well. A second SEM showed media-focused coping and substance use coping significantly mediate the relationship between psychopathology (operationalised via the Global Severity Index) and excessive Internet use. Conclusions: The findings lend support to the self-medication hypothesis of addictive disorders, and suggest psychopathology and dysfunctional coping have additive effects on excessive Internet use.

  12. Occurrence of invasive pneumococcal disease and number of excess cases due to influenza

    Directory of Open Access Journals (Sweden)

    Penttinen Pasi

    2006-03-01

    Background Influenza is characterized by seasonal outbreaks, often with a high rate of morbidity and mortality. It is also known to be a cause of a significant number of secondary bacterial infections. Streptococcus pneumoniae is the main pathogen causing secondary bacterial pneumonia after influenza and, subsequently, influenza could contribute to the acquisition of Invasive Pneumococcal Disease (IPD). Methods In this study, we aim to investigate the relation between influenza and IPD by estimating the yearly excess of IPD cases due to influenza. For this purpose, we use influenza periods as an indicator for influenza activity as a risk factor in the subsequent analysis. The statistical modeling was carried out in two modes. First, we constructed two negative binomial regression models. For each model, we estimated the contribution of influenza and calculated the number of excess IPD cases. Also, for each model, we investigated several lag time periods between influenza and IPD. Secondly, we constructed an "influenza free" baseline and calculated the differences between the IPD data (observed cases) and the baseline (expected cases), in order to estimate a yearly additional number of IPD cases due to influenza. Both modes were calculated using zero to four weeks lag time. Results The analysis shows a yearly increase of 72–118 IPD cases due to influenza, which corresponds to 6–10% per year or 12–20% per influenza season. Also, a lag time of one to three weeks appears to be of significant importance in the relation between IPD and influenza. Conclusion This epidemiological study confirms the association between influenza and IPD. Furthermore, negative binomial regression models can be used to calculate the number of excess IPD cases related to influenza.
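    A minimal sketch of the first modelling mode (a negative binomial regression of IPD counts on a lagged influenza-activity indicator, with excess cases taken as the difference between fitted values and an influenza-free counterfactual) might look as follows. The weekly time step, column names and simulated data are assumptions for illustration, not the surveillance data used in the study.

```python
# Minimal sketch of a negative binomial regression for excess-case estimation.
# Simulated weekly data and variable names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = 300
df = pd.DataFrame({
    "ipd_cases": rng.poisson(5, weeks),              # weekly IPD counts (synthetic)
    "influenza_period": rng.integers(0, 2, weeks),   # influenza activity indicator
})
df["flu_lag2"] = df["influenza_period"].shift(2).fillna(0)   # two-week lag

X = sm.add_constant(df[["flu_lag2"]])
model = sm.GLM(df["ipd_cases"], X, family=sm.families.NegativeBinomial()).fit()

# Counterfactual with the influenza indicator switched off everywhere.
expected_no_flu = model.predict(sm.add_constant(pd.DataFrame({"flu_lag2": np.zeros(weeks)})))
excess = model.fittedvalues - expected_no_flu
print(f"Estimated excess IPD cases per year: {excess.sum() / (weeks / 52):.1f}")
```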

  13. Battery Management Systems: Accurate State-of-Charge Indication for Battery-Powered Applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Danilov, D.; Regtien, P.P.L.; Notten, P.H.L.

    2008-01-01

    Battery Management Systems – Universal State-of-Charge indication for portable applications describes the field of State-of-Charge (SoC) indication for rechargeable batteries. With the emergence of battery-powered devices with an increasing number of power-hungry features, accurately estimating the

  14. The U.S. Department of Agriculture Automated Multiple-Pass Method accurately assesses sodium intakes

    Science.gov (United States)

    Accurate and practical methods to monitor sodium intake of the U.S. population are critical given current sodium reduction strategies. While the gold standard for estimating sodium intake is the 24 hour urine collection, few studies have used this biomarker to evaluate the accuracy of a dietary ins...

  15. Vortex dynamics in the presence of excess energy for the Landau-Lifschitz-Gilbert equation

    CERN Document Server

    Kurzke, Matthias; Moser, Roger; Spirn, Daniel

    2012-01-01

    We study the Landau-Lifshitz-Gilbert equation for the dynamics of a magnetic vortex system. We present a PDE-based method for proving vortex dynamics that does not rely on strong well-preparedness of the initial data and allows for instantaneous changes in the strength of the gyrovector force due to bubbling events. The main tools are estimates of the Hodge decomposition of the supercurrent and an analysis of the defect measure of weak convergence of the stress energy tensor. Ginzburg-Landau equations with mixed dynamics in the presence of excess energy are also discussed.

  16. Passive dosing of pyrethroid insecticides to Daphnia magna: Expressing excess toxicity by chemical activity

    DEFF Research Database (Denmark)

    Nørgaard Schmidt, Stine; Gan, Jay; Kretschmann, A. C.

    2015-01-01

    Pyrethroid insecticides are nerve poisons and are used as active ingredients in pesticide mixtures available for household and agricultural use. The compounds are hydrophobic, and their strong sorption to organic material may result in decreasing exposure levels during toxicity tests and consequent... (2) Effective chemical activities resulting in 50% immobilisation (Ea50) will be estimated from pyrethroid EC50 values via the correlation of sub-cooled liquid solubility (SL [mmol/L], representing a = 1) and octanol-water partitioning ratios (Kow), (3) The excess toxicity observed for pyrethroids...

  17. Distributed Formation State Estimation Algorithms Under Resource and Multi-Tasking Constraints Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Recent work has developed a number of architectures and algorithms for accurately estimating spacecraft and formation states. The estimation accuracy achievable...

  18. The Excess Liquidity of the Open Economy and its Management

    Directory of Open Access Journals (Sweden)

    Yonghong TU

    2011-01-01

    The excess liquidity of the open economy has become one of the main factors influencing the monetary markets, the financial markets and even the macroeconomy as a whole. In the post-crisis era, many countries have implemented loose monetary policies, especially the quantitative easing policy in the U.S., which has worsened the excess liquidity situation. Against this background, studying the excess liquidity of the open economy and its management is of particular relevance to the developing countries' economic recovery and development, inflation control, economic structural adjustment and optimization, and social and economic stability. This paper starts with a detailed study of the related theories of excess liquidity and its transmission mechanisms, and then analyses the current situation and causes of excess liquidity in the BRICs, taken as representative of the developing countries. It argues that the main cause of excess liquidity in the developing countries lies in the financial system, including loose monetary policies, financial innovation, petrodollars, East Asian dollars, US dollar hegemony, overcapacity, trade supply, savings supply and the surge of foreign exchange reserves. Using impulse responses from a VAR model, the paper analyses the impact of the global liquidity surge on America, the euro zone, Japan, China, India, Russia, Brazil and others, and reaches the following conclusions: 1. Global excess liquidity keeps increasing, quickly in the developing countries and more slowly in the developed countries. 2. The spillover effect of global excess liquidity spreads mainly through GDP and prices; for most countries, the international factor has more influence on rising prices than the domestic factor, and GDP has also been pushed to grow fast, which in turn becomes the main driving force of the increase in the quantity of money. 3. The openness and development
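    The impulse-response exercise mentioned above can be sketched with a standard VAR toolkit; the series names and the synthetic random-walk data below are placeholders, not the paper's dataset.

```python
# Minimal sketch of a VAR impulse-response analysis with statsmodels.
# Synthetic data and variable names stand in for the paper's series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
data = pd.DataFrame(
    rng.normal(size=(200, 3)).cumsum(axis=0),      # random walks as stand-ins
    columns=["global_liquidity", "gdp", "price_level"],
).diff().dropna()                                   # work with first differences

results = VAR(data).fit(maxlags=4, ic="aic")
irf = results.irf(periods=12)                       # responses over 12 periods

# Response of GDP to a one-standard-deviation liquidity shock.
print(irf.irfs[:, data.columns.get_loc("gdp"),
               data.columns.get_loc("global_liquidity")])
```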

  19. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  20. How flatbed scanners upset accurate film dosimetry.

    Science.gov (United States)

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 and 9 Gy, i.e. an optical density range between 0.25 and 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and per dose delivered to the film.

  1. Accurate particle position measurement from images

    CERN Document Server

    Feng, Yan; Liu, Bin; 10.1063/1.2735920

    2011-01-01

    The moment method is an image analysis technique for sub-pixel estimation of particle positions. The total error in the calculated particle position includes effects of pixel locking and random noise in each pixel. Pixel locking, also known as peak locking, is an artifact where calculated particle positions are concentrated at certain locations relative to pixel edges. We report simulations to gain an understanding of the sources of error and their dependence on parameters the experimenter can control. We suggest an algorithm, and we find optimal parameters an experimenter can use to minimize total error and pixel locking. Simulating a dusty plasma experiment, we find that a sub-pixel accuracy of 0.017 pixel or better can be attained. These results are also useful for improving particle position measurement and particle tracking velocimetry (PTV) using video microscopy, in fields including colloids, biology, and fluid mechanics.
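    The moment method itself reduces to an intensity-weighted centroid over a small patch around each detected particle; a minimal sketch follows, with an illustrative background threshold rather than the optimal parameters found in the paper.

```python
# Minimal sketch of the moment (intensity-weighted centroid) method for
# sub-pixel particle position estimation on a small image patch.
import numpy as np

def moment_centroid(patch: np.ndarray, threshold: float = 0.0) -> tuple:
    """Return the (row, col) centroid of above-threshold intensity, in pixels."""
    weights = np.clip(patch - threshold, 0, None)   # subtract background, clip negatives
    rows, cols = np.indices(patch.shape)
    total = weights.sum()
    return (rows * weights).sum() / total, (cols * weights).sum() / total

patch = np.array([[0, 1, 0],
                  [1, 8, 3],
                  [0, 2, 1]], dtype=float)
print(moment_centroid(patch, threshold=0.5))        # sub-pixel position within the patch
```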

  2. Investigation of excess thyroid cancer incidence in Los Alamos County

    Energy Technology Data Exchange (ETDEWEB)

    Athas, W.F.

    1996-04-01

    Los Alamos County (LAC) is home to the Los Alamos National Laboratory, a U.S. Department of Energy (DOE) nuclear research and design facility. In 1991, the DOE funded the New Mexico Department of Health to conduct a review of cancer incidence rates in LAC in response to citizen concerns over what was perceived as a large excess of brain tumors and a possible relationship to radiological contaminants from the Laboratory. The study found no unusual or alarming pattern in the incidence of brain cancer; however, a fourfold excess of thyroid cancer was observed during the late 1980s. A rapid review of the medical records for cases diagnosed between 1986 and 1990 failed to demonstrate that the thyroid cancer excess had resulted from enhanced detection. Surveillance activities subsequently undertaken to monitor the trend revealed that the excess persisted into 1993. A feasibility assessment of further studies was made, and ultimately, an investigation was conducted to document the epidemiologic characteristics of the excess in detail and to explore possible causes through a case-series records review. Findings from the investigation are the subject of this report.

  3. Treating both wastewater and excess sludge with an innovative process

    Institute of Scientific and Technical Information of China (English)

    HE Sheng-bing; WANG Bao-zhen; WANG Lin; JIANG Yi-feng

    2003-01-01

    The innovative process consists of a biological unit for wastewater treatment and an ozonation unit for excess sludge treatment. An aerobic membrane bioreactor (MBR) was used to remove organics and nitrogen, and an anaerobic reactor was added to the biological unit for the release of the phosphorus contained in the aerobic sludge, to enhance the removal of phosphorus. The excess sludge produced in the MBR was fed to an ozone contact column and reacted with ozone, and the ozonated sludge was then returned to the MBR for further biological treatment. Experimental results showed that this process could remove organics, nitrogen and phosphorus efficiently, with removals for COD, NH3-N, TN and TP of 93.17%, 97.57%, 82.77% and 79.5%, respectively. Batch tests indicated that the specific nitrification rate and specific denitrification ... Under the test conditions, the sludge concentration in the MBR was kept at 5000-6000 mg/L, and the wasted sludge was ozonated at an ozone dosage of 0.10 kg O3/kg SS. During the experimental period of two months, no excess sludge was wasted, and a zero withdrawal of excess sludge was implemented. Through economic analysis, it was found that the additional ozonation operating cost for treatment of both wastewater and excess sludge was only 0.045 RMB Yuan (USD 0.0054) per m3 of wastewater.

  4. On the excess of power in high resolution CMB experiments

    CERN Document Server

    Diego-Rodriguez, J M; Martinez-Gonalez, E; Silk, J

    2004-01-01

    We revisit the possibility that an excess in the CMB power spectrum at small angular scales (CBI, ACBAR) can be due to galaxy clusters (or compact sources in general). We perform a Gaussian analysis of ACBAR-like simulated data based on wavelets. We show how models with a significant excess should show a clear non-Gaussian signal in the wavelet space. In particular, a value of the normalization sigma_8 = 1 would imply a highly significant skewness and kurtosis in the wavelet coefficients at scales around 3 arcmin. Models with a more moderate excess also show a non-Gaussian signal in the simulated data. We conclude that current data (ACBAR) should show this signature if the excess is to be due to the SZ effect. Otherwise, the reason for that excess should be explained by some systematic effect. The significance of the non-Gaussian signal depends on the cluster model but it grows with the surveyed area. Non-Gaussianity test performed on incoming data sets should reveal the presence of a cluster population even ...
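    The test described above amounts to filtering the map at a given angular scale and checking whether the skewness and kurtosis of the resulting coefficients are consistent with Gaussian realizations. The sketch below uses a Laplacian-of-Gaussian filter as a stand-in for the Mexican-hat wavelet and a synthetic Gaussian map, so the numbers are illustrative only.

```python
# Minimal sketch of a wavelet-space non-Gaussianity check on a sky patch.
# A Laplacian-of-Gaussian filter stands in for the Mexican-hat wavelet.
import numpy as np
from scipy.ndimage import gaussian_laplace
from scipy.stats import skew, kurtosis

def wavelet_moments(sky_map: np.ndarray, scale_pixels: float) -> tuple:
    """Skewness and excess kurtosis of the filtered-map coefficients."""
    coeffs = gaussian_laplace(sky_map, sigma=scale_pixels).ravel()
    return skew(coeffs), kurtosis(coeffs)

rng = np.random.default_rng(2)
gaussian_map = rng.normal(size=(256, 256))          # synthetic Gaussian patch
print(wavelet_moments(gaussian_map, scale_pixels=3.0))   # expected near (0, 0)
```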

  5. Prenatal programming: adverse cardiac programming by gestational testosterone excess

    Science.gov (United States)

    Vyas, Arpita K.; Hoang, Vanessa; Padmanabhan, Vasantha; Gilbreath, Ebony; Mietelka, Kristy A.

    2016-01-01

    Adverse events during the prenatal and early postnatal period of life are associated with development of cardiovascular disease in adulthood. Prenatal exposure to excess testosterone (T) in sheep induces adverse reproductive and metabolic programming leading to polycystic ovarian syndrome, insulin resistance and hypertension in the female offspring. We hypothesized that prenatal T excess disrupts insulin signaling in the cardiac left ventricle leading to adverse cardiac programming. Left ventricular tissues were obtained from 2-year-old female sheep treated prenatally with T or oil (control) from days 30–90 of gestation. Molecular markers of insulin signaling and cardiac hypertrophy were analyzed. Prenatal T excess increased the gene expression of molecular markers involved in insulin signaling and those associated with cardiac hypertrophy and stress including insulin receptor substrate-1 (IRS-1), phosphatidyl inositol-3 kinase (PI3K), Mammalian target of rapamycin complex 1 (mTORC1), nuclear factor of activated T cells –c3 (NFATc3), and brain natriuretic peptide (BNP) compared to controls. Furthermore, prenatal T excess increased the phosphorylation of PI3K, AKT and mTOR. Myocardial disarray (multifocal) and increase in cardiomyocyte diameter was evident on histological investigation in T-treated females. These findings support adverse left ventricular remodeling by prenatal T excess. PMID:27328820

  6. Prenatal programming: adverse cardiac programming by gestational testosterone excess.

    Science.gov (United States)

    Vyas, Arpita K; Hoang, Vanessa; Padmanabhan, Vasantha; Gilbreath, Ebony; Mietelka, Kristy A

    2016-06-22

    Adverse events during the prenatal and early postnatal period of life are associated with development of cardiovascular disease in adulthood. Prenatal exposure to excess testosterone (T) in sheep induces adverse reproductive and metabolic programming leading to polycystic ovarian syndrome, insulin resistance and hypertension in the female offspring. We hypothesized that prenatal T excess disrupts insulin signaling in the cardiac left ventricle leading to adverse cardiac programming. Left ventricular tissues were obtained from 2-year-old female sheep treated prenatally with T or oil (control) from days 30-90 of gestation. Molecular markers of insulin signaling and cardiac hypertrophy were analyzed. Prenatal T excess increased the gene expression of molecular markers involved in insulin signaling and those associated with cardiac hypertrophy and stress including insulin receptor substrate-1 (IRS-1), phosphatidyl inositol-3 kinase (PI3K), Mammalian target of rapamycin complex 1 (mTORC1), nuclear factor of activated T cells -c3 (NFATc3), and brain natriuretic peptide (BNP) compared to controls. Furthermore, prenatal T excess increased the phosphorylation of PI3K, AKT and mTOR. Myocardial disarray (multifocal) and increase in cardiomyocyte diameter was evident on histological investigation in T-treated females. These findings support adverse left ventricular remodeling by prenatal T excess.

  7. Infrared Excess and Molecular Gas in Galactic Supershells

    CERN Document Server

    Lee, J E; Koo, B C; Lee, Jeong-Eun; Kim, Kee-Tae; Koo, Bon-Chul

    1999-01-01

    We have carried out high-resolution observations along one-dimensional cuts through the three Galactic supershells GS 064-01-97, GS 090-28-17, and GS 174+02-64 in the HI 21 cm and CO J=1-0 lines. By comparing the HI data with IRAS data, we have derived the distributions of the I_100 and tau_100 excesses, which are, respectively, the 100 μm intensity and 100 μm optical depth in excess of what would be expected from HI emission. We have found that both the I_100 and tau_100 excesses have good correlations with the CO integrated intensity W_CO in all three supershells. But the I_100 excess appears to underestimate H_2 column density N(H_2) by factors of 1.5-3.8. This factor is the ratio of atomic to molecular infrared emissivities, and we show that it can be roughly determined from the HI and IRAS data. By comparing the tau_100 excess with W_CO, we derive the conversion factor X = N(H_2)/W_CO = 0.26-0.66 in the three supershells. In GS 090-28-17, which is a very diffuse shell, our result suggests that the regi...

  8. Searching for IR excesses in Sun-like stars observed by WISE

    CERN Document Server

    de Miera, Fernando Cruz-Saenz; Bertone, Emanuele; Vega, Olga

    2013-01-01

    We present the results of a search of infrared excess candidates in a comprehensive (29,000 stars) magnitude limited sample of dwarf stars, spanning the spectral range F2-K0, and brighter than V = 15 mag. We searched the sample within the WISE all sky survey database for objects within 1 arcsecond of the coordinates provided by the SIMBAD database and found over 9000 sources detected in all WISE bands. This latter sample excludes objects that are flagged as extended sources and those images which are affected by various optical artifacts. For each detected object, we compared the observed W4/W2 (22 μm/4.6 μm) flux ratio with the expected photospheric value and identified 197 excess candidates at 3σ. For the vast majority of candidates, the results of this analysis represent the first reported evidence of an IR excess. Through the comparison with a simple black-body emission model, we derive estimates of the dust temperature, as well as of the dust fractional luminosities. For more than...
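    The selection step can be sketched as a simple significance test of the W4/W2 flux ratio against an assumed photospheric prediction; the photospheric ratio and the uncertainties below are placeholders, not the calibration used by the authors.

```python
# Minimal sketch of flagging a 3-sigma IR excess from a W4/W2 flux ratio.
# The photospheric ratio and flux uncertainties are illustrative placeholders.
import numpy as np

def has_ir_excess(f_w2, f_w4, sigma_w2, sigma_w4, photospheric_ratio=0.065):
    """Return True if the W4/W2 ratio exceeds the photospheric value by >3 sigma."""
    ratio = f_w4 / f_w2
    sigma_ratio = ratio * np.hypot(sigma_w4 / f_w4, sigma_w2 / f_w2)
    return (ratio - photospheric_ratio) / sigma_ratio > 3.0

print(has_ir_excess(f_w2=12.0, f_w4=1.4, sigma_w2=0.3, sigma_w4=0.15))
```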

  9. BMI predicts emotion-driven impulsivity and cognitive inflexibility in adolescents with excess weight.

    Science.gov (United States)

    Delgado-Rico, Elena; Río-Valle, Jacqueline S; González-Jiménez, Emilio; Campoy, Cristina; Verdejo-García, Antonio

    2012-08-01

    Adolescent obesity is increasingly viewed as a brain-related dysfunction, whereby reward-driven urges for pleasurable foods "hijack" response selection systems, such that behavioral control progressively shifts from impulsivity to compulsivity. In this study, we aimed to examine the link between personality factors (sensitivity to reward (SR) and punishment (SP)), BMI, and outcome measures of impulsivity vs. flexibility in otherwise healthy adolescents with excess weight. Sixty-three adolescents (aged 12-17) classified as obese (n = 26), overweight (n = 16), or normal weight (n = 21) participated in the study. We used psychometric assessments of the SR and SP motivational systems, impulsivity (using the UPPS-P scale), and neurocognitive measures with discriminant validity to dissociate inhibition vs. flexibility deficits (using the process-approach version of the Stroop test). We tested the relative contribution of age, SR/SP, and BMI on estimates of impulsivity and inhibition vs. switching performance using multistep hierarchical regression models. BMI significantly predicted elevations in emotion-driven impulsivity (positive and negative urgency) and inferior flexibility performance in adolescents with excess weight, exceeding the predictive capacity of SR and SP. SR was the main predictor of elevations in sensation seeking and lack of premeditation. These findings demonstrate that increases in BMI are specifically associated with elevations in emotion-driven impulsivity and cognitive inflexibility, supporting a dimensional path in which adolescents with excess weight increase their proneness to overindulge when under strong affective states, and their difficulty in switching or reversing habitual behavioral patterns.

  10. Excess Commuting in Transitional Urban China: A Case Study of Guangzhou

    Institute of Scientific and Technical Information of China (English)

    LIU Wangbao; HOU Quan

    2016-01-01

    During the reform era, Chinese cities witnessed dramatic institutional transformation and spatial restructuring in general and profound changes in commuting patterns in particular. Using household surveys collected in Guangzhou, China, in 2001, 2005 and 2010, excess commuting measurements are estimated. Excess commuting shows an overall trend of increasing during 1990-1999 and then declining during 2000-2010. We argue that deepening marketization of the jobs and housing sectors has induced spatial separation of jobs and housing. In other words, institutional transition and urban spatial restructuring are underpinning the changes in commuting patterns in Chinese cities. Excess commuting has a strong relationship with individual socio-demographic status, which is by and large due to the increasing flexibility of job and housing location choices enjoyed by urban residents. The findings call for consideration of jobs-housing balance in making public policies relevant to urban development in general, and land use and transportation in particular.
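    The excess-commuting measure referred to above is commonly defined as the share of the observed mean commute that exceeds the theoretical minimum obtained by optimally reassigning workers to jobs (the exact variant used in the paper may differ):

```latex
E_c = \frac{\bar{T}_{\mathrm{obs}} - \bar{T}_{\mathrm{min}}}{\bar{T}_{\mathrm{obs}}} \times 100\%,
```

    where T̄_obs is the observed mean commute (time or distance) and T̄_min is the minimum mean commute obtained from a transportation-problem assignment of workers to jobs.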

  11. Associations Between Excessive Sodium Intake and Smoking and Alcohol Intake Among Korean Men: KNHANES V.

    Science.gov (United States)

    Choi, Kyung-Hwa; Park, Myung-Sook; Kim, Jung Ae; Lim, Ji-Ae

    2015-12-08

    In this study, we evaluated the associations of smoking and alcohol intake, both independently and collectively, with sodium intake in Korean men. Subjects (6340 men) were from the fifth Korean National Health Examination Survey (2010-2012). Smoking-related factors included smoking status, urinary cotinine level, and pack-years of smoking. Food intake was assessed using a 24-h recall. The odds of excessive sodium intake were estimated using survey logistic regression analysis. The smoking rate was 44.1%. The geometric mean of the urinary cotinine level was 0.05 µg/mL, and the median (min-max) pack-years of smoking was 13.2 (0-180). When adjusted for related factors, the odds (95% confidence interval) of excessive sodium intake were 1.54 (1.00, 2.37), 1.55 (1.23, 1.94), 1.44 (1.07, 1.95), and 1.37 (1.11, 1.68) times higher in the group exposed to both smoking and drinking than in the group that never smoked nor drank, the group that never smoked but drank, the group that smoked but never drank, and the group that did not currently smoke or drink, respectively, with a significant interaction between smoking and alcohol intake (p-interaction = 0.02). The results suggest that simultaneous exposure to smoking and alcohol intake is associated with increased odds of excessive sodium intake.

  12. Permanent demand excess as business strategy: an analysis of the Brazilian higher-education market

    Directory of Open Access Journals (Sweden)

    Rodrigo Menon Simões Moita

    2015-03-01

    Many Higher Education Institutions (HEIs) establish tuition below the equilibrium price to generate permanent demand excess. This paper first adapts Becker's (1991) theory to understand why the HEIs price in this way. The fact that students are both consumers and inputs in the education production process gives rise to a market equilibrium where some firms have excess demand and charge high prices, and others charge low prices and have empty seats. Second, the paper analyzes this equilibrium empirically. We estimated the demand for undergraduate courses in Business Administration in the State of São Paulo. The results show that tuition, the quality of incoming students and the percentage of lecturers holding doctorate degrees are the determining factors of students' choice. Since student quality determines the demand for a HEI, we calculate what it is worth for a HEI to attract better students, that is, the total revenue that each HEI gives up to guarantee excess demand. Regarding this "investment" in selectivity, 39 HEIs in São Paulo give up a combined R$ 5 million (US$ 3.14 million) in revenue per year per freshman class, which amounts to 7.6% of the revenue coming from a freshman class.

  13. Cause-specific excess mortality in siblings of patients co-infected with HIV and hepatitis C virus

    DEFF Research Database (Denmark)

    Hansen, Ann-Brit Eg; Lohse, Nicolai; Gerstoft, Jan;

    2007-01-01

    BACKGROUND: Co-infection with hepatitis C in HIV-infected individuals is associated with 3- to 4-fold higher mortality among these patients' siblings, compared with siblings of mono-infected HIV-patients or population controls. This indicates that risk factors shared by family members partially account for the excess mortality of HIV/HCV-co-infected patients. We aimed to explore the causes of death contributing to the excess sibling mortality. METHODOLOGY AND PRINCIPAL FINDINGS: We retrieved causes of death from the Danish National Registry of Deaths and estimated cause-specific excess mortality ... per 1,000 person-years, compared with siblings of matched population controls. Substance abuse-related deaths contributed most to the elevated mortality among siblings [EMR = 2.25 (1.09-3.40)], followed by unnatural deaths [EMR = 0.67 (-0.05-1.39)]. No siblings of HIV/HCV co-infected patients had a liver-related diagnosis ...

  14. Cause-specific excess mortality in siblings of patients co-infected with HIV and hepatitis C virus

    DEFF Research Database (Denmark)

    Hansen, Ann-Brit Eg; Lohse, Nicolai; Gerstoft, Jan;

    2007-01-01

    BACKGROUND: Co-infection with hepatitis C in HIV-infected individuals is associated with 3- to 4-fold higher mortality among these patients' siblings, compared with siblings of mono-infected HIV-patients or population controls. This indicates that risk factors shared by family members partially account for the excess mortality of HIV/HCV-co-infected patients. We aimed to explore the causes of death contributing to the excess sibling mortality. METHODOLOGY AND PRINCIPAL FINDINGS: We retrieved causes of death from the Danish National Registry of Deaths and estimated cause-specific excess mortality rates (EMR) for siblings of HIV/HCV-co-infected individuals (n = 436) and siblings of HIV mono-infected individuals (n = 1837) compared with siblings of population controls (n = 281,221). Siblings of HIV/HCV-co-infected individuals had an all-cause EMR of 3.03 (95% CI, 1.56-4.50) per 1,000 person...

  15. Accurate skin dose measurements using radiochromic film in clinical applications.

    Science.gov (United States)

    Devic, S; Seuntjens, J; Abdel-Rahman, W; Evans, M; Olivares, M; Podgorsak, E B; Vuong, Té; Soares, Christopher G

    2006-04-01

    Megavoltage x-ray beams exhibit the well-known phenomena of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of the surface dose measurements, however, depend vastly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 micron. We used the new GAFCHROMIC dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 micron. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 micron to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10 x 10 cm2 increases from 14% to 43%. For the three GAFCHROMIC dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC film model. Finally, a procedure that uses EBT model GAFCHROMIC film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.

  16. Millimeter and submillimeter excess emission in M 33 revealed by Planck and LABOCA

    Science.gov (United States)

    Hermelo, I.; Relaño, M.; Lisenfeld, U.; Verley, S.; Kramer, C.; Ruiz-Lara, T.; Boquien, M.; Xilouris, E. M.; Albrecht, M.

    2016-05-01

    Context. Previous studies have shown the existence of an excess of emission at submillimeter (submm) and millimeter (mm) wavelengths in the spectral energy distribution (SED) of many low-metallicity galaxies. The so-called "submm excess", whose origin remains unknown, challenges our understanding of the dust properties in low-metallicity environments. Aims: The goal of the present study is to model separately the emission from the star forming (SF) component and the emission from the diffuse interstellar medium (ISM) in the nearby spiral galaxy M 33 in order to determine whether both components can be well fitted using radiation transfer models or whether there is an excess of submm emission associated with one or both of them. Methods: We decomposed the observed SED of M 33 into its SF and diffuse components. Mid-infrared (MIR) and far-infrared (FIR) fluxes were extracted from Spitzer and Herschel data. At submm and mm wavelengths, we used ground-based observations from APEX to measure the emission from the SF component and data from the Planck space telescope to estimate the diffuse emission. Both components were separately fitted using radiation transfer models based on standard dust properties (i.e., emissivity index β = 2) and a realistic geometry. The large number of previous studies helped us to estimate the thermal radio emission and to constrain an important part of the input parameters of the models. Both modeled SEDs were combined to build the global SED of M 33. In addition, the radiation field necessary to power the dust emission in our modeling was compared with observations from GALEX, Sloan, and Spitzer. Results: Our modeling is able to reproduce the observations at MIR and FIR wavelengths, but we found a strong excess of emission at submm and mm wavelengths where the model expectations severely underestimate the LABOCA and Planck fluxes. We also found that the ultraviolet (UV) radiation escaping the galaxy is 70% higher than the model predictions

  17. Excess body weight during pregnancy and offspring obesity: potential mechanisms.

    Science.gov (United States)

    Paliy, Oleg; Piyathilake, Chandrika J; Kozyrskyj, Anita; Celep, Gulcin; Marotta, Francesco; Rastmanesh, Reza

    2014-03-01

    The rates of child and adult obesity have increased in most developed countries over the past several decades. The health consequences of obesity affect both physical and mental health, and the excess body weight can be linked to an elevated risk for developing type 2 diabetes, cardiovascular problems, and depression. Among the factors that can influence the development of obesity are higher infant weights and increased weight gain, which are associated with higher risk for excess body weight later in life. In turn, mother's excess body weight during and after pregnancy can be linked to the risk for offspring overweight and obesity through dietary habits, mode of delivery and feeding, breast milk composition, and through the influence on infant gut microbiota. This review considers current knowledge of these potential mechanisms that threaten to create an intergenerational cycle of obesity.

  18. The 750 GeV diphoton excess and SUSY

    CERN Document Server

    Heinemeyer, S

    2016-01-01

    The LHC experiments ATLAS and CMS have reported an excess in the diphoton spectrum at ~750 GeV. At the same time the motivation for Supersymmetry (SUSY) remains unbowed. Consequently, we review briefly the proposals to explain this excess in SUSY, focusing on "pure" (N)MSSM solutions. We then review in more detail a proposal to realize this excess within the NMSSM. In this particular scenario a Higgs boson with mass around 750 GeV decays to two light pseudo-scalar Higgs bosons. Via mixing with the pion these pseudo-scalars decay into a pair of highly collimated photons, which are identified as one photon, thus resulting in the observed signal.

  19. Financial Instability - a Result of Excess Liquidity or Credit Cycles?

    DEFF Research Database (Denmark)

    Heebøll-Christensen, Christian

    This paper compares the financial destabilizing effects of excess liquidity versus credit growth, in relation to house price bubbles and real economic booms. The analysis uses a cointegrated VAR model based on US data from 1987 to 2010, with a particular focus on the period preceding the global financial crisis. Consistent with monetarist theory, the results suggest a stable money supply-demand relation in the period in question. However, the implied excess liquidity only resulted in a financial destabilizing effect after year 2000. Meanwhile, the results also point to persistent cycles of real house prices and leverage, which appear to have been driven by real credit shocks, in accordance with post-Keynesian theories on financial instability. Importantly, however, these mechanisms of credit growth and excess liquidity are found to be closely related. In regards to the global financial crisis...

  20. Internet Addiction and Excessive Social Networks Use: What About Facebook?

    Science.gov (United States)

    Guedes, Eduardo; Sancassiani, Federica; Carta, Mauro Giovani; Campos, Carlos; Machado, Sergio; King, Anna Lucia Spear; Nardi, Antonio Egidio

    2016-01-01

    Facebook is notably the most widely known and used social network worldwide. It has been described as a valuable tool for leisure and communication between people all over the world. However, healthy and conscious Facebook use is contrasted by excessive use and lack of control, creating an addiction which severely impacts the everyday life of many users, mainly youths. If Facebook use seems to be related to the need to belong, affiliate with others and for self-presentation, the beginning of excessive Facebook use and addiction could be associated with reward and gratification mechanisms as well as some personality traits. Studies from several countries indicate different Facebook addiction prevalence rates, mainly due to the use of a wide range of evaluation instruments and to the lack of a clear and valid definition of this construct. Further investigations are needed to establish whether excessive Facebook use can be considered as a specific online addiction disorder or an Internet addiction subtype.

  1. Financial Instability - a Result of Excess Liquidity or Credit Cycles?

    DEFF Research Database (Denmark)

    Heebøll-Christensen, Christian

    This paper compares the financial destabilizing effects of excess liquidity versus credit growth, in relation to house price bubbles and real economic booms. The analysis uses a cointegrated VAR model based on US data from 1987 to 2010, with a particular focus on the period preceding the global financial crisis. Consistent with monetarist theory, the results suggest a stable money supply-demand relation in the period in question. However, the implied excess liquidity only resulted in a financial destabilizing effect after year 2000. Meanwhile, the results also point to persistent cycles of real house prices and leverage, which appear to have been driven by real credit shocks, in accordance with post-Keynesian theories on financial instability. Importantly, however, these mechanisms of credit growth and excess liquidity are found to be closely related. In regards to the global financial crisis...

  2. The 750 GeV Diphoton Excess and SUSY

    Science.gov (United States)

    Heinemeyer, S.

    The LHC experiments ATLAS and CMS have reported an excess in the diphoton spectrum at ˜750 GeV. At the same time the motivation for Supersymmetry (SUSY) remains unbowed. Consequently, we review briefly the proposals to explain this excess in SUSY, focusing on "pure" (N)MSSM solutions. We then review in more detail a proposal to realize this excess within the NMSSM. In this particular scenario a Higgs boson with mass around 750 GeV decays to two light pseudo-scalar Higgs bosons. Via mixing with the pion these pseudo-scalars decay into a pair of highly collimated photons, which are identified as one photon, thus resulting in the observed signal.

  3. Classification of excessive domestic water consumption using Fuzzy Clustering Method

    Science.gov (United States)

    Zairi Zaidi, A.; Rasmani, Khairul A.

    2016-08-01

    Demand for clean and treated water is increasing all over the world. Therefore it is crucial to conserve water for better use and to avoid unnecessary, excessive consumption or wastage of this natural resource. Classification of excessive domestic water consumption is a difficult task due to the complexity in determining the amount of water usage per activity, especially as the data is known to vary between individuals. In this study, classification of excessive domestic water consumption is carried out using a well-known Fuzzy C-Means (FCM) clustering algorithm. Consumer data containing information on daily, weekly and monthly domestic water usage was employed for the purpose of classification. Using the same dataset, the result produced by the FCM clustering algorithm is compared with the result obtained from a statistical control chart. The finding of this study demonstrates the potential use of the FCM clustering algorithm for the classification of domestic consumer water consumption data.
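    A self-contained sketch of the Fuzzy C-Means algorithm on a two-feature consumption dataset is given below; the synthetic data, feature choice and number of clusters are illustrative assumptions, not the consumer records analysed in the study.

```python
# Minimal self-contained Fuzzy C-Means sketch on synthetic consumption data.
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]           # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))                       # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
daily_use = np.concatenate([rng.normal(150, 20, 50), rng.normal(400, 40, 20)])   # litres/day
weekly_var = np.concatenate([rng.normal(10, 3, 50), rng.normal(60, 10, 20)])     # variability
X = np.column_stack([daily_use, weekly_var])
centers, memberships = fuzzy_c_means(X, n_clusters=2)
print(centers)      # one centre should capture the high-consumption group
```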

  4. Internet Addiction and Excessive Social Networks Use: What About Facebook?

    Science.gov (United States)

    Guedes, Eduardo; Sancassiani, Federica; Carta, Mauro Giovani; Campos, Carlos; Machado, Sergio; King, Anna Lucia Spear; Nardi, Antonio Egidio

    2016-01-01

    Facebook is notably the most widely known and used social network worldwide. It has been described as a valuable tool for leisure and communication between people all over the world. However, healthy and conscious Facebook use is contrasted by excessive use and lack of control, creating an addiction that severely impacts the everyday life of many users, mainly youths. While Facebook use seems to be related to the need to belong, to affiliate with others and to present oneself, the onset of excessive Facebook use and addiction could be associated with reward and gratification mechanisms as well as with certain personality traits. Studies from several countries indicate different Facebook addiction prevalence rates, mainly due to the use of a wide range of evaluation instruments and to the lack of a clear and valid definition of this construct. Further investigations are needed to establish whether excessive Facebook use can be considered a specific online addiction disorder or an Internet addiction subtype. PMID:27418940

  5. Excessive computer game playing: evidence for addiction and aggression?

    Science.gov (United States)

    Grüsser, S M; Thalemann, R; Griffiths, M D

    2007-04-01

    Computer games have become an ever-increasing part of many adolescents' day-to-day lives. Coupled with this phenomenon, reports of excessive gaming (computer game playing), denominated "computer/video game addiction", have been discussed in the popular press as well as in recent scientific research. The aim of the present study was to investigate the addictive potential of gaming as well as the relationship between excessive gaming and aggressive attitudes and behavior. A sample comprising 7069 gamers answered two questionnaires online. Data revealed that 11.9% of participants (840 gamers) fulfilled diagnostic criteria of addiction concerning their gaming behavior, while there was only weak evidence for the assumption that aggressive behavior is interrelated with excessive gaming in general. Results of this study support the assumption that playing games without monetary reward can also meet criteria of addiction. Hence, the addictive potential of gaming should be taken into consideration with regard to prevention and intervention.

  6. Fetal Programming of Obesity: Maternal Obesity and Excessive Weight Gain

    Directory of Open Access Journals (Sweden)

    Seray Kabaran

    2014-10-01

    Full Text Available The prevalence of obesity is an increasing health problem throughout the world. Maternal pre-pregnancy weight, maternal nutrition and maternal weight gain are among the factors that can cause childhood obesity. Both maternal obesity and excessive weight gain increase the risks of excessive fetal weight gain and high birth weight. Rapid weight gain during the fetal period leads to changes in the newborn body composition. Specifically, the increase in body fat ratio in the early periods is associated with an increased risk of obesity in the later periods. It has been reported that over-nutrition during the fetal period can cause excessive food intake during the postpartum period as a result of metabolic programming. By influencing fetal metabolism and tissue development, maternal obesity and excessive weight gain change the amounts of nutrients and metabolites that pass to the fetus, thus causing excessive fetal weight gain, which in turn increases the risk of obesity. Fetal over-nutrition and excessive weight gain cause permanent metabolic and physiologic changes in developing organs. While the mechanisms that affect these organs are not fully understood, it is thought that the changes may occur as a result of changes in fetal energy metabolism, appetite control, neuroendocrine functions, adipose tissue mass, epigenetic mechanisms and gene expression. In this review article, the effects of maternal body weight and weight gain on fetal development, newborn birth weight and risk of obesity were evaluated, and potential mechanisms that could explain the effects of fetal over-nutrition on the risk of obesity were additionally discussed. [TAF Prev Med Bull 2014; 13(5): 427-434]

  7. Excessive recreational computer use and food consumption behaviour among adolescents

    Directory of Open Access Journals (Sweden)

    Mao Yuping

    2010-08-01

    Full Text Available Abstract Introduction: Using the 2005 California Health Interview Survey (CHIS) data, we explore the association between excessive recreational computer use and specific food consumption behavior among California's adolescents aged 12-17. Method: The adolescent component of CHIS 2005 measured the respondents' average number of hours spent viewing TV on a weekday, viewing TV on a weekend day, playing with a computer on a weekday, and playing with a computer on a weekend day. We recode these four continuous variables into four variables of "excessive media use," defining more than three hours of using a medium per day as "excessive." These four variables are then used in logistic regressions to predict different food consumption behaviors on the previous day: having fast food, eating sugary food more than once, drinking sugary drinks more than once, and eating more than five servings of fruits and vegetables. We use the following variables as covariates in the logistic regressions: age, gender, race/ethnicity, parental education, household poverty status, whether born in the U.S., and whether living with two parents. Results: Having fast food on the previous day is associated with excessive weekday TV viewing (O.R. = 1.38, p ... Conclusion: Excessive recreational computer use independently predicts undesirable eating behaviors that could lead to overweight and obesity. Preventive measures ranging from parental/youth counseling to content regulations might address the potential undesirable influence of excessive computer use on eating behaviors among children and adolescents.
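
    To make the modelling step concrete, here is a hedged sketch of a logistic regression that converts an "excessive weekday computer use" indicator plus covariates into odds ratios, in the spirit of the analysis described; the synthetic data, column names and coefficients are assumptions, not the CHIS analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
# synthetic stand-in for survey data: outcome = had fast food on the previous day
df = pd.DataFrame({
    "excessive_pc_weekday": rng.integers(0, 2, n),   # >3 h/day indicator
    "age": rng.integers(12, 18, n),
    "female": rng.integers(0, 2, n),
})
logit = -1.0 + 0.4 * df["excessive_pc_weekday"] + 0.02 * (df["age"] - 14)
df["fast_food"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["excessive_pc_weekday", "age", "female"]])
model = sm.Logit(df["fast_food"], X).fit(disp=0)
odds_ratios = np.exp(model.params)        # e.g. the O.R. for excessive computer use
print(odds_ratios)
```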

  8. Approximations for Estimating Change in Life Expectancy Attributable to Air Pollution in Relation to Multiple Causes of Death Using a Cause Modified Life Table.

    Science.gov (United States)

    Stieb, David M; Judek, Stan; Brand, Kevin; Burnett, Richard T; Shin, Hwashin H

    2015-08-01

    There is considerable debate as to the most appropriate metric for characterizing the mortality impacts of air pollution. Life expectancy has been advocated as an informative measure. Although the life-table calculus is relatively straightforward, it becomes increasingly cumbersome when repeated over large numbers of geographic areas and for multiple causes of death. Two simplifying assumptions were evaluated: linearity of the relation between excess rate ratio and change in life expectancy, and additivity of cause-specific life-table calculations. We employed excess rate ratios linking PM2.5 and mortality from cerebrovascular disease, chronic obstructive pulmonary disease, ischemic heart disease, and lung cancer derived from a meta-analysis of worldwide cohort studies. As a sensitivity analysis, we employed an integrated exposure response function based on the observed risk of PM2.5 over a wide range of concentrations from ambient exposure, indoor exposure, second-hand smoke, and personal smoking. Impacts were estimated in relation to a change in PM2.5 from 19.5 μg/m³ estimated for Toronto to an estimated natural background concentration of 1.8 μg/m³. Estimated changes in life expectancy varied linearly with excess rate ratios, but at higher values the relationship was more accurately represented as a nonlinear function. Changes in life expectancy attributed to specific causes of death were additive with maximum error of 10%. Results were sensitive to assumptions about the air pollution concentration below which effects on mortality were not quantified. We have demonstrated valid approximations comprising expression of change in life expectancy as a function of excess mortality and summation across multiple causes of death.
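
    A rough illustration of the linearity assumption being tested: an abridged life table whose age-specific mortality rates are scaled by an excess rate ratio, with the resulting change in life expectancy printed for several ratios. The toy Gompertz-style rates and 5-year age bands are placeholders, not the study's inputs.

```python
import numpy as np

ages = np.arange(0, 101, 5)                 # 5-year age bands, 0-100
base_rates = 0.0005 * np.exp(0.085 * ages)  # toy Gompertz-like mortality rates

def life_expectancy(rates, width=5.0):
    """Period life expectancy at birth from age-specific mortality rates."""
    q = 1.0 - np.exp(-width * rates)        # probability of dying within the band
    l = np.cumprod(np.concatenate(([1.0], 1.0 - q)))[:-1]   # survivors at band start
    d = l * q
    L = width * (l - d / 2.0)               # person-years lived in each band
    return L.sum()

def delta_le(excess_rate_ratio, rates=base_rates):
    """Change in life expectancy when every rate is scaled by (1 + ERR)."""
    return life_expectancy(rates) - life_expectancy(rates * (1.0 + excess_rate_ratio))

for err in (0.01, 0.05, 0.10, 0.20):
    # roughly linear for small ERR, visibly nonlinear for larger values
    print(err, round(delta_le(err), 3))
```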

  9. Status of and performance estimates for QCDOC

    CERN Document Server

    Boyle, P; Christ, N H; Cristian, C; Dong, Z; Gara, A; Joó, B; Jung, C; Kim, C; Levkova, L; Liao, X; Liu, G; Mawhinney, Robert D; Ohta, S; Petrov, K V; Wettig, T; Yamaguchi, A

    2002-01-01

    QCDOC is a supercomputer designed for high scalability at a low cost per node. We discuss the status of the project and provide performance estimates for large machines obtained from cycle accurate simulation of the QCDOC ASIC.

  10. CHARACTERIZING THE STELLAR PHOTOSPHERES AND NEAR-INFRARED EXCESSES IN ACCRETING T TAURI SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    McClure, M. K.; Calvet, N.; Hartmann, L.; Ingleby, L. [Department of Astronomy, University of Michigan, 500 Church Street, 830 Dennison Building, Ann Arbor, MI 48109 (United States); Espaillat, C. [Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Hernandez, J. [Centro de Investigaciones de Astronomia (CIDA), Merida 5101-A (Venezuela, Bolivarian Republic of); Luhman, K. L. [Department of Astronomy and Astrophysics and the Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, University Park, PA 16802 (United States); D' Alessio, P. [Centro de Radioastronomia y Astrofisica, Universidad Nacional Autonoma de Mexico, 58089 Morelia, Michoacan (Mexico); Sargent, B., E-mail: melisma@umich.edu, E-mail: ncalvet@umich.edu, E-mail: lhartm@umich.edu, E-mail: lingleby@umich.edu, E-mail: cespaillat@cfa.harvard.edu, E-mail: hernandj@cida.ve, E-mail: kluhman@astro.psu.edu, E-mail: p.dalessio@astrosmo.unam.mx, E-mail: baspci@rit.edu [Center for Imaging Science and Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623 (United States)

    2013-05-20

    Using NASA Infrared Telescope Facility SpeX data from 0.8 to 4.5 μm, we determine self-consistently the stellar properties and excess emission above the photosphere for a sample of classical T Tauri stars (CTTS) in the Taurus molecular cloud with varying degrees of accretion. This process uses a combination of techniques from the recent literature as well as observations of weak-line T Tauri stars to account for the differences in surface gravity and chromospheric activity between the T Tauri stars and dwarfs, which are typically used as photospheric templates for CTTS. Our improved veiling and extinction estimates for our targets allow us to extract flux-calibrated spectra of the excess in the near-infrared. We find that we are able to produce an acceptable parametric fit to the near-infrared excesses using a combination of up to three blackbodies. In half of our sample, two blackbodies at temperatures of 8000 K and 1600 K suffice. These temperatures and the corresponding solid angles are consistent with emission from the accretion shock on the stellar surface and the inner dust sublimation rim of the disk, respectively. In contrast, the other half requires three blackbodies at 8000, 1800, and 800 K, to describe the excess. We interpret the combined two cooler blackbodies as the dust sublimation wall with either a contribution from the disk surface beyond the wall or curvature of the wall itself, neither of which should have single-temperature blackbody emission. In these fits, we find no evidence of a contribution from optically thick gas inside the inner dust rim.
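
    As a sketch of the parametric fitting described (a sum of Planck functions scaled by solid angles, fitted to an excess spectrum), the following uses SciPy's curve_fit on synthetic data; the wavelength grid, flux scaling and starting temperatures are assumptions, not the SpeX measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_lambda(wl_m, T):
    """Blackbody spectral radiance B_lambda(T); exponent clipped to avoid overflow."""
    x = np.clip(h * c / (wl_m * k_B * T), 1e-9, 700.0)
    return (2 * h * c**2 / wl_m**5) / np.expm1(x)

def two_blackbodies(wl_um, omega1, T1, omega2, T2):
    """Excess flux as two blackbodies scaled by solid angles (arbitrary units)."""
    wl_m = wl_um * 1e-6
    return omega1 * planck_lambda(wl_m, T1) + omega2 * planck_lambda(wl_m, T2)

# placeholder 'observed' excess between 0.8 and 4.5 microns
wl = np.linspace(0.8, 4.5, 40)
truth = two_blackbodies(wl, 1e-16, 8000.0, 5e-15, 1600.0)
obs = truth * (1 + 0.05 * np.random.default_rng(0).standard_normal(wl.size))

popt, _ = curve_fit(two_blackbodies, wl, obs,
                    p0=[1e-16, 7000.0, 5e-15, 1500.0], maxfev=20000)
print(popt)   # recovered solid angles and temperatures
```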

  11. Excess cardiovascular mortality associated with cold spells in the Czech Republic

    Directory of Open Access Journals (Sweden)

    Kyncl Jan

    2009-01-01

    Full Text Available Abstract Background: The association between cardiovascular mortality and winter cold spells was evaluated in the population of the Czech Republic over the 21-year period 1986–2006. No comprehensive study on cold-related mortality in central Europe had been carried out despite the fact that cold air invasions are more frequent and severe in this region than in western and southern Europe. Methods: Cold spells were defined as periods of days on which the air temperature does not exceed -3.5°C. Days on which mortality was affected by epidemics of influenza/acute respiratory infections were identified and omitted from the analysis. Excess cardiovascular mortality was determined after the long-term changes and the seasonal cycle in mortality had been removed. Excess mortality during and after cold spells was examined in individual age groups and for both genders. Results: Cold spells were associated with positive mean excess cardiovascular mortality in all age groups (25–59, 60–69, 70–79 and 80+ years) and in both men and women. The relative mortality effects were most pronounced and most direct in middle-aged men (25–59 years), which contrasts with the majority of studies on cold-related mortality in other regions. The estimated excess mortality during the severe cold spells in January 1987 (+274 cardiovascular deaths) is comparable to that attributed to the most severe heat wave in this region in 1994. Conclusion: The results show that cold stress has a considerable impact on mortality in central Europe, representing a public health threat of an importance similar to heat waves. The elevated mortality risks in men aged 25–59 years may be related to occupational exposure of the large numbers of men working outdoors in winter. Early warnings and preventive measures based on weather forecasts and targeted at the susceptible parts of the population may help mitigate the effects of cold spells and save lives.
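
    A hedged sketch of the excess-mortality bookkeeping outlined in the Methods (baseline = long-term trend plus seasonal cycle; excess = observed minus baseline, summed over cold-spell days). The -3.5°C threshold follows the abstract; the daily series themselves are synthetic placeholders, not the Czech data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
days = pd.date_range("1986-01-01", "2006-12-31", freq="D")
doy = days.dayofyear.to_numpy()

# synthetic stand-ins for observed daily CVD deaths and maximum air temperature
deaths = rng.poisson(140 + 25 * np.cos(2 * np.pi * (doy - 15) / 365.25), len(days))
tmax = 12 + 14 * np.cos(2 * np.pi * (doy - 200) / 365.25) + rng.normal(0, 4, len(days))
df = pd.DataFrame({"deaths": deaths, "tmax": tmax}, index=days)

# baseline: long-term trend (1-year rolling mean) plus mean seasonal cycle of residuals
trend = df["deaths"].rolling(365, center=True, min_periods=180).mean()
season = (df["deaths"] - trend).groupby(df.index.dayofyear).transform("mean")
baseline = trend + season

df["excess"] = df["deaths"] - baseline
cold_spell = df["tmax"] <= -3.5             # threshold quoted in the abstract
print(df.loc[cold_spell, "excess"].sum())   # total excess deaths on cold-spell days
```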

  12. Cholesterol homeostasis: How do cells sense sterol excess?

    Science.gov (United States)

    Howe, Vicky; Sharpe, Laura J; Alexopoulos, Stephanie J; Kunze, Sarah V; Chua, Ngee Kiat; Li, Dianfan; Brown, Andrew J

    2016-09-01

    Cholesterol is vital in mammals, but toxic in excess. Consequently, elaborate molecular mechanisms have evolved to maintain this sterol within narrow limits. How cells sense excess cholesterol is an intriguing area of research. Cells sense cholesterol, and other related sterols such as oxysterols or cholesterol synthesis intermediates, and respond to changing levels through several elegant mechanisms of feedback regulation. Cholesterol sensing involves both direct binding of sterols to the homeostatic machinery located in the endoplasmic reticulum (ER), and indirect effects elicited by sterol-dependent alteration of the physical properties of membranes. Here, we examine the mechanisms employed by cells to maintain cholesterol homeostasis.

  13. Prevalence of excessive screen time and associated factors in adolescents

    OpenAIRE

    2015-01-01

    Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study of 2874 high school adolescents aged 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyze...

  14. Management of excessive gingival display: Lip repositioning technique

    Directory of Open Access Journals (Sweden)

    Upasana Sthapak

    2015-01-01

    Full Text Available The lips form the frame of a smile and define the esthetic zone. Excessive gingival display during smiling is often referred to as a "gummy smile". Successful management of excessive gingival display with a lip repositioning procedure has shown excellent results. The procedure involves removing a strip of partial-thickness mucosa from the maxillary vestibule and then suturing the lip mucosa at the level of the mucogingival junction. This technique results in a restricted muscle pull and a narrower vestibule, thereby reducing the gingival display. In this case, a gummy smile was treated by a modification of Rubinstein and Kostianovsky's surgical lip repositioning technique, which resulted in a harmonious smile.

  15. Interpreting 750 GeV diphoton excess in plain NMSSM

    Directory of Open Access Journals (Sweden)

    Marcin Badziak

    2016-09-01

    Full Text Available NMSSM has enough ingredients to explain the diphoton excess at 750 GeV: singlet-like (pseudo)scalars (a, s) and higgsinos as heavy vector-like fermions. We consider the production of the 750 GeV singlet-like pseudo scalar a from a decay of the doublet-like pseudo scalar A, and the subsequent decay of a into two photons via higgsino loop. We demonstrate that this cascade decay of the NMSSM Higgs bosons can explain the diphoton excess at 750 GeV.

  16. Interpreting 750 GeV diphoton excess in plain NMSSM

    Science.gov (United States)

    Badziak, Marcin; Olechowski, Marek; Pokorski, Stefan; Sakurai, Kazuki

    2016-09-01

    NMSSM has enough ingredients to explain the diphoton excess at 750 GeV: singlet-like (pseudo)scalars (a, s) and higgsinos as heavy vector-like fermions. We consider the production of the 750 GeV singlet-like pseudo scalar a from a decay of the doublet-like pseudo scalar A, and the subsequent decay of a into two photons via higgsino loop. We demonstrate that this cascade decay of the NMSSM Higgs bosons can explain the diphoton excess at 750 GeV.

  17. Solving Excess Water Production Problems in Productive Formation

    Directory of Open Access Journals (Sweden)

    Kozyrev Ilya

    2016-01-01

    Full Text Available Petroleum resources are one of the key components of the Russian Federation's national economy. Water shutoff techniques are used in oilfields to avoid massive water production. We describe a technology for solving excess water production problems, focusing on a new gel-based fluid which can be effectively applied for water shutoff. We study the effect of the gel-based fluid experimentally to show the feasibility of its treatment in the near-wellbore region to solve the excess water production problem.

  18. Simplified Production of Organic Compounds Containing High Enantiomer Excesses

    Science.gov (United States)

    Cooper, George W. (Inventor)

    2015-01-01

    The present invention is directed to a method for making an enantiomeric organic compound having a high amount of enantiomer excesses including the steps of a) providing an aqueous solution including an initial reactant and a catalyst; and b) subjecting said aqueous solution simultaneously to a magnetic field and photolysis radiation such that said photolysis radiation produces light rays that run substantially parallel or anti-parallel to the magnetic field passing through said aqueous solution, wherein said catalyst reacts with said initial reactant to form the enantiomeric organic compound having a high amount of enantiomer excesses.

  19. The isovector dipole strength in nuclei with extreme neutron excess

    CERN Document Server

    Arteaga, Daniel Pena; Ring, Peter

    2008-01-01

    The E1 strength is systematically analyzed in very neutron-rich Sn nuclei, beyond $^{132}$Sn until $^{166}$Sn, within the Relativistic Quasiparticle Random Phase Approximation. The great neutron excess favors the appearance of a deformed ground state for $^{142-162}$Sn. The evolution of the low-lying strength in deformed nuclei is determined by the interplay of two factors, isospin asymmetry and deformation: while greater neutron excess increases the total low-lying strength, deformation hinders and spreads it. Very neutron rich deformed nuclei may not be as good candidates as stable spherical nuclei like $^{132}$Sn for the experimental study of low-lying E1 strength.

  20. ATLAS on-Z Excess Through Vector-Like Quarks

    CERN Document Server

    Endo, Motoi

    2016-01-01

    We investigate the possibility that the excess observed in the leptonic-$Z +$jets $+\\slashed{E}_T$ ATLAS SUSY search is due to pair productions of a vector-like quark $U$ decaying to the first-generation quarks and $Z$ boson. We find that the excess can be explained within the 2$\sigma$ (up to 1.4$\sigma$) level while evading the constraints from the other LHC searches. The preferred ranges of the mass and branching ratio are from about $610$ GeV upward and $0.3$-$0.45$, respectively.

  1. Accurate detection of differential RNA processing

    Science.gov (United States)

    Drewe, Philipp; Stegle, Oliver; Hartmann, Lisa; Kahles, André; Bohnert, Regina; Wachter, Andreas; Borgwardt, Karsten; Rätsch, Gunnar

    2013-01-01

    Deep transcriptome sequencing (RNA-Seq) has become a vital tool for studying the state of cells in the context of varying environments, genotypes and other factors. RNA-Seq profiling data enable identification of novel isoforms, quantification of known isoforms and detection of changes in transcriptional or RNA-processing activity. Existing approaches to detect differential isoform abundance between samples either require a complete isoform annotation or fall short in providing statistically robust and calibrated significance estimates. Here, we propose a suite of statistical tests to address these open needs: a parametric test that uses known isoform annotations to detect changes in relative isoform abundance and a non-parametric test that detects differential read coverages and can be applied when isoform annotations are not available. Both methods account for the discrete nature of read counts and the inherent biological variability. We demonstrate that these tests compare favorably to previous methods, both in terms of accuracy and statistical calibrations. We use these techniques to analyze RNA-Seq libraries from Arabidopsis thaliana and Drosophila melanogaster. The identified differential RNA processing events were consistent with RT–qPCR measurements and previous studies. The proposed toolkit is available from http://bioweb.me/rdiff and enables in-depth analyses of transcriptomes, with or without available isoform annotation. PMID:23585274

  2. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;

    2011-01-01

    of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set...
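
    As an illustration of the coupling described (an optimiser wrapped around a dynamic solution of the underlying model), the sketch below estimates the rate constant of a first-order decay model from noisy observations; the model, data and tolerances are placeholders, not the chapter's case study.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, k):
    return [-k * y[0]]                      # simple dynamic model: dy/dt = -k*y

t_obs = np.linspace(0.0, 10.0, 25)
k_true, y0 = 0.35, 5.0
y_obs = y0 * np.exp(-k_true * t_obs) + np.random.default_rng(3).normal(0, 0.1, t_obs.size)

def residuals(theta):
    """Misfit between the simulated trajectory and the observations."""
    sol = solve_ivp(model, (t_obs[0], t_obs[-1]), [y0],
                    t_eval=t_obs, args=(theta[0],), rtol=1e-8)
    return sol.y[0] - y_obs

fit = least_squares(residuals, x0=[0.1])
print(fit.x)                                # estimated rate constant, close to 0.35
```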

  3. UV-excess sources with a red/IR-counterpart: low-mass companions, debris disks and QSO selection

    CERN Document Server

    Verbeek, Kars; Scaringi, Simone; Casares, Jorge; Corral-Santana, Jesus M; Deacon, Niall; Drew, Janet E; Gänsicke, Boris T; González-Solares, Eduardo; Greimel, Robert; Heber, Ulrich; Napiwotzki, Ralf; Østensen, Roy H; Steeghs, Danny; Wright, Nicholas J; Zijlstra, Albert

    2013-01-01

    We present the result of the cross-matching between UV-excess sources selected from the UV-excess survey of the Northern Galactic Plane (UVEX) and several infrared surveys (2MASS, UKIDSS and WISE). From the position in the (J-H) vs. (H-K) colour-colour diagram we select UV-excess candidate white dwarfs with an M-dwarf type companion, candidates that might have a lower mass, brown-dwarf type companion, and candidates showing an infrared-excess only in the K-band, which might be due to a debris disk. Grids of reddened DA+dM and sdO+MS/sdB+MS model spectra are fitted to the U,g,r,i,z,J,H,K photometry in order to determine spectral types and estimate temperatures and reddening. From a sample of 964 hot candidate white dwarfs with (g-r)<0.2, the spectral energy distribution fitting shows that ~2-4% of the white dwarfs have an M-dwarf companion, ~2% have a lower-mass companion, and no clear candidates for having a debris disk are found. Additionally, from WISE 6 UV-excess sources are selected as candidate Quasi-...

  4. A 45-year-old man with excessive daytime somnolence, and witnessed apnea at altitude

    Directory of Open Access Journals (Sweden)

    Welsh CH

    2011-04-01

    Full Text Available A sleepy man without sleep apnea at 1609 m (5280 feet) had disturbed sleep at his home altitude of 3200 m (10500 feet). In addition to common disruptors of sleep such as psychophysiologic insomnia, restless leg syndrome, alcohol and excessive caffeine use, central sleep apnea with periodic breathing can be a significant cause of disturbed sleep at altitude. In symptomatic patients living at altitude, a sleep study at their home altitude should be considered to accurately diagnose the presence and magnitude of sleep-disordered breathing, as sleep studies performed at lower altitudes may miss this diagnosis. Treatment options differ from those used to treat obstructive apnea. Supplemental oxygen is considered by many to be first-line therapy.

  5. [Current research of the excessive lateral pressure syndrome of patellofemoral joint].

    Science.gov (United States)

    Liu, Jin-song; Zhang, Dao-ping

    2011-05-01

    As modern medicine has gained a deeper understanding of the detailed anatomy, structure and biomechanics of the patellofemoral joint, excessive lateral pressure syndrome, a very common patellofemoral disorder, has been reappraised by clinicians. Owing to the complexity and variety of the etiology and the mechanisms of the pain, there are still many difficulties and disagreements concerning the exact description of the clinical symptoms and the establishment of universally accepted diagnostic criteria. Accurately grasping the different causes, pathomechanisms and developmental stages of the disease is therefore especially important, and the rational choice of pertinent procedures becomes the clinical lynchpin. This paper reviews the pertinent domestic and international literature of the past 10 years and provides an overview of the latest studies on anatomy, biomechanics, pathomechanisms and clinical experience, with the aim of helping to standardize the diagnosis and treatment of ELPS.

  6. VizieR Online Data Catalog: UV-excess quasar candidates (Moreau+, 1995)

    Science.gov (United States)

    Moreau, O.; Reboul, H.

    1995-01-01

    We have developed a procedure (called PAPA) for the measurement of magnitudes (accurate to about 0.1 mag) and positions (with accuracy better than 0.5 arcsec) of all the objects present on photographic plates digitised with the MAMA machine. This homogeneous procedure was applied to four Schmidt plates - in U, B and twice V - covering the Palomar-Sky-Survey field PS +30deg 13h00m, a 40-square-degree zone at the North Galactic Pole. A general-interest exhaustive tricolour catalogue of 19542 star-like objects down to V=20.0 has been produced, and we selected 1681 quasar candidates on the basis of ultraviolet excess and, when possible, the absence of any measurable proper motion. The astrometric and photometric catalogue of the candidates is given in electronic form. (4 data files).
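
    To illustrate the selection logic (ultraviolet excess plus, where possible, no measurable proper motion), a small sketch over a synthetic catalogue follows; the column names, the U-B colour cut and the proper-motion criterion are assumptions for illustration, not the PAPA procedure's actual thresholds.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 1000
catalog = pd.DataFrame({
    "U": rng.normal(18.5, 1.0, n),
    "B": rng.normal(18.8, 1.0, n),
    "V": rng.uniform(15.0, 20.0, n),
    "pm_mas_yr": np.abs(rng.normal(0.0, 8.0, n)),   # total proper motion
    "pm_err": np.full(n, 5.0),                       # proper-motion uncertainty
})

uv_excess = (catalog["U"] - catalog["B"]) < -0.4          # hypothetical colour cut
no_proper_motion = catalog["pm_mas_yr"] < 2.0 * catalog["pm_err"]
faint_enough = catalog["V"] <= 20.0

candidates = catalog[uv_excess & no_proper_motion & faint_enough]
print(len(candidates), "UV-excess quasar candidates")
```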

  7. Eddy covariance observations of methane and nitrous oxide emissions: Towards more accurate estimates from ecosystems

    NARCIS (Netherlands)

    Kroon-van Loon, P.S.

    2010-01-01

    About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these e

  8. Improved Object Localization Using Accurate Distance Estimation in Wireless Multimedia Sensor Networks.

    Science.gov (United States)

    Ur Rehman, Yasar Abbas; Tariq, Muhammad; Khan, Omar Usman

    2015-01-01

    Object localization plays a key role in many popular applications of Wireless Multimedia Sensor Networks (WMSN) and as a result, it has acquired a significant status for the research community. A significant body of research performs this task without considering node orientation, object geometry and environmental variations. As a result, the localized object does not reflect the real world scenarios. In this paper, a novel object localization scheme for WMSN has been proposed that utilizes range free localization, computer vision, and principle component analysis based algorithms. The proposed approach provides the best possible approximation of distance between a wmsn sink and an object, and the orientation of the object using image based information. Simulation results report 99% efficiency and an error ratio of 0.01 (around 1 ft) when compared to other popular techniques.

  9. SCREENING TO IDENTIFY AND PREVENT URBAN STORM WATER PROBLEMS: ESTIMATING IMPERVIOUS AREA ACCURATELY AND INEXPENSIVELY

    Science.gov (United States)

    Complete identification and eventual prevention of urban water quality problems pose significant monitoring, "smart growth" and water quality management challenges. Uncontrolled increase of impervious surface area (roads, buildings, and parking lots) causes detrimental hydrologi...

  10. Rapid and accurate estimation of blood saturation, melanin content, and epidermis thickness from spectral diffuse reflectance.

    OpenAIRE

    2010-01-01

    We present a method to determine chromophore concentrations, blood saturation, and epidermal thickness of human skin from diffuse reflectance spectra. Human skin was approximated as a plane-parallel slab of variable thickness supported by a semi-infinite layer corresponding to the epidermis and dermis, respectively. The absorption coefficient was modeled as a function of melanin content for the epidermis and blood content and oxygen saturation for the dermis. The scattering coefficient and re...

  11. How to accurately estimate BH masses of AGN with double-peaked emission lines

    Directory of Open Access Journals (Sweden)

    Xue Guang Zhang

    2008-01-01

    Full Text Available We present a new relation for determining the virial mass of the central black hole in Active Galactic Nuclei with double-peaked profiles in the low-ionization broad emission lines. We discuss which parameter is appropriate for estimating the local velocity of the emitting regions and the relation used to estimate the distance of these regions from the ionizing source. We selected 17 objects with double-peaked profiles from the SDSS that have measurable absorption lines, in order to determine the black hole masses through the velocity-dispersion method and compare them with our virial mass determinations. We confirm a previous result (Zhang, Dultzin-Hacyan, & Wang 2007): that the relations for "normal" BLRs are not adequate for determining masses by the virial method in the case of double-peaked broad lines.
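
    For context, the sketch below evaluates the standard single-epoch virial relation M_BH = f * R_BLR * v^2 / G that such prescriptions modify; the BLR radius, line width and order-unity scale factor f are placeholder values, not the relation proposed in the paper.

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
LIGHT_DAY = 2.59e13      # one light-day, metres

def virial_bh_mass(r_blr_light_days, fwhm_km_s, f=1.0):
    """Virial black hole mass M_BH = f * R_BLR * v^2 / G, in solar masses."""
    r = r_blr_light_days * LIGHT_DAY
    v = fwhm_km_s * 1e3
    return f * r * v**2 / G / M_SUN

# placeholder inputs: a 30 light-day BLR and a 6000 km/s broad-line FWHM
print(f"{virial_bh_mass(30.0, 6000.0):.2e} M_sun")   # of order 10^8 solar masses
```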

  12. An Efficient and Accurate Method of Estimating Substrate Noise Coupling in Heavily Doped Substrates

    Science.gov (United States)

    2005-08-24


  13. Improved Object Localization Using Accurate Distance Estimation in Wireless Multimedia Sensor Networks.

    Directory of Open Access Journals (Sweden)

    Yasar Abbas Ur Rehman

    Full Text Available Object localization plays a key role in many popular applications of Wireless Multimedia Sensor Networks (WMSN) and as a result, it has acquired a significant status for the research community. A significant body of research performs this task without considering node orientation, object geometry and environmental variations. As a result, the localized object does not reflect the real world scenarios. In this paper, a novel object localization scheme for WMSN has been proposed that utilizes range free localization, computer vision, and principle component analysis based algorithms. The proposed approach provides the best possible approximation of distance between a wmsn sink and an object, and the orientation of the object using image based information. Simulation results report 99% efficiency and an error ratio of 0.01 (around 1 ft) when compared to other popular techniques.

  14. Interbank Market Structure and Accurate Estimation of an Aggregate Liquidity Shock

    OpenAIRE

    Isakov, A.

    2013-01-01

    It's customary among money market analysts to blame interest rate deviations from the Bank of Russia's target band on the market structure imperfections or segmentation. We isolate one form of such market imperfection and provide an illustration of its potential impact on central bank's open market operations efficiency in the current monetary policy framework. We then hypothesize that naive (market) structure-agnostic liquidity gap aggregation will lead to market demand underestimation in so...

  15. Error estimation and accurate mapping based ALE formulation for 3D simulation of friction stir welding

    OpenAIRE

    Guerdoux, Simon; Fourment, Lionel

    2007-01-01

    Reprinted from AIP Conf. Proc. 908, pp. 185-190 (NUMIFORM '07: Proceedings of the 9th International Conference on Numerical Methods in Industrial Forming Processes; Materials Processing and Design: Modeling, Simulation and Applications); doi:10.1063/1.2740809.

  16. Does consideration of larger study areas yield more accurate estimates of air pollution health effects?

    DEFF Research Database (Denmark)

    Pedersen, Marie; Siroux, Valérie; Pin, Isabelle

    2013-01-01

    BACKGROUND: Spatially-resolved air pollution models can be developed in large areas. The resulting increased exposure contrasts and population size offer opportunities to better characterize the effect of atmospheric pollutants on respiratory health. However the heterogeneity of these areas may a...

  17. Organ volume estimation using SPECT

    CERN Document Server

    Zaidi, H

    1996-01-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
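
    A minimal sketch of the fixed-threshold voxel-counting step mentioned above: voxels above a fraction of the maximum count are summed and multiplied by the voxel volume. The synthetic phantom, voxel size and 40% threshold are illustrative assumptions, not the study's reconstruction parameters.

```python
import numpy as np

def thresholded_volume(volume, voxel_size_mm, threshold_fraction=0.4):
    """Organ volume in mL: count voxels above a fixed fraction of the maximum."""
    threshold = threshold_fraction * volume.max()
    n_voxels = int(np.count_nonzero(volume >= threshold))
    voxel_ml = np.prod(voxel_size_mm) / 1000.0   # mm^3 -> mL
    return n_voxels * voxel_ml

# synthetic reconstructed SPECT volume: a bright ellipsoid in a noisy background
rng = np.random.default_rng(4)
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
obj = ((x / 10.0)**2 + (y / 14.0)**2 + (z / 8.0)**2) <= 1.0
counts = 100.0 * obj + rng.normal(0, 3, obj.shape)

print(thresholded_volume(counts, voxel_size_mm=(4.0, 4.0, 4.0)))   # volume in mL
```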

  18. Equating accelerometer estimates among youth

    DEFF Research Database (Denmark)

    Brazendale, Keith; Beets, Michael W; Bornstein, Daniel B;

    2016-01-01

    OBJECTIVES: Different accelerometer cutpoints used by different researchers often yields vastly different estimates of moderate-to-vigorous intensity physical activity (MVPA). This is recognized as cutpoint non-equivalence (CNE), which reduces the ability to accurately compare youth MVPA across s...

  19. 78 FR 73817 - Information Collection; Federal Excess Personal Property (FEPP) and Firefighter Property (FFP...

    Science.gov (United States)

    2013-12-09

    ... Forest Service Information Collection; Federal Excess Personal Property (FEPP) and Firefighter Property... currently approved information collection, Federal Excess Personal Property (FEPP) and Firefighter Property... Friday. SUPPLEMENTARY INFORMATION: Title: Federal Excess Personal Property (FEPP) and...

  20. Studying the entropy excess and entropy excess ratio in (105,106,107)Pd within BCS model

    CERN Document Server

    Rahmatinejad, Azam; Razavi, Rohallah

    2015-01-01

    Pairing correlations and their influence on nuclear properties have been studied within the BCS model. Using this theoretical model with the inclusion of pairing interactions between nucleons, the nuclear level densities and entropies of 105,106,107Pd have been extracted. The results coincide well with the empirical values of the nuclear level densities obtained by the Oslo group. The entropy excesses of 107Pd and 105Pd relative to 106Pd have then been studied as a function of temperature. The roles of the neutron and proton systems in the entropy excess have also been investigated using the entropy excess ratio proposed by Razavi et al. [R. Razavi, A.N. Behkami, S. Mohammadi, and M. Gholami, Phys. Rev. C 86, 047303 (2012)].